deepfake

Man Loses £75,000 To Deepfake Elon Musk Investment Scheme: Report

A man says he’s lost his entire life savings and incurred extensive debt after falling for a fraudulent investment scheme featuring celebrity deepfakes. Kitchen builder Des Healey saw a fake ad on Facebook featuring money-saving expert Martin Lewis and billionaire Elon Musk promoting a non-existent Bitcoin investment strategy, the BBC reports. Healey replied to the ad […]


2,500,000,000 Gmail Users Targeted in Viral AI Hack That Tricks Users Into Accepting ‘Security Alert’: Report

A sophisticated new artificial intelligence (AI)-powered scam is targeting billions of users on the world’s largest email service. Microsoft security expert Sam Mitrovic writes in a new blog post about a “super realistic AI scam call” that mimics American-sounding voices to trick Gmail users into giving up their login credentials. The scam works by sending […]


California passes package of laws to combat election deepfakes

California governor Gavin Newsom signed three bills aimed at battling deepfake election content while on stage at a conference in San Francisco.

California Governor Gavin Newsom has signed tough new laws to crack down on politically themed artificial intelligence deepfakes during elections.

It comes only weeks after Elon Musk re-posted a parody of a Kamala Harris campaign ad on X that garnered millions of views and used AI-powered voice manipulation to make it seem as though Harris called herself an incompetent presidential candidate.

In late July, Newsom specifically pointed to Musk’s post and vowed to sign a bill “in a matter of weeks” banning the practice.


San Fran city attorney sues sites that ‘undress’ women with AI

AI-powered websites allowing users to create nonconsensual nude photos of women and girls were visited 200 million times in the first half of the year.

San Francisco’s City Attorney has filed a lawsuit against the owners of 16 websites that have allowed users to “nudify” women and young girls using AI.

The office of San Francisco City Attorney David Chiu said on Aug. 15 that it was suing the owners of 16 of the “most-visited websites” that allow users to “undress” people in a photo to create “nonconsensual nude images of women and girls.”

A redacted version of the suit filed in the city’s Superior Court alleges the site owners include individuals and companies from Los Angeles, New Mexico, the United Kingdom and Estonia who have violated California and United States laws on deepfake porn, revenge porn and child sexual abuse material.


Scammers Use Deepfake Elon Musk Video to Steal Crypto During Bitcoin 2024 Event

Although the Bitcoin 2024 event has wrapped up, the past three days have seen a deepfake Elon Musk “double-your-money” scam on YouTube. Posing as the official Bitcoin 2024 livestream, the scam has tricked unsuspecting users into parting with their crypto assets. Over the weekend, multiple deepfake livestreams featuring Musk were active, with one specific video […]


Pro-Bitcoin DeSantis tagged over AI-faked photos in Trump smear campaign

The images depicting Donald Trump cuddling up to and kissing Anthony Fauci were flagged as AI-generated by Twitter's disinformation alert feature.

Pro-Bitcoin (BTC) presidential bidder Ron DeSantis has been tagged for apparently using artificial intelligence-generated images in an ad campaign smearing rival and former president Donald Trump.

It comes amid a rise in AI-generated deepfakes being used in political ads and movements in recent months.

On June 5, DeSantis’ campaign tweeted a video purporting to show Trump’s close support of Anthony Fauci, who served as a top medical adviser during Trump’s presidency.

Fauci is a contentious figure in GOP circles for, among other reasons, his handling of the federal response to the COVID-19 pandemic, which many in the party deemed heavy-handed.

The video features a collage of real images of Trump and Fauci mixed in with what appear to be AI-generated images of the pair hugging, some of which depict Trump appearing to kiss Fauci.

Twitter’s Community Notes feature, the platform's community-driven misinformation-dispelling project, added a disclaimer to the tweet identifying the "AI-generated images."

AFP Fact Check, a department within the news agency Agence France-Presse, said the images had "the hallmarks of AI-generated imagery."

A screenshot from the video; the top-left, bottom-middle and bottom-right images are AI-generated. Source: Twitter

DeSantis and Trump are facing off for the Republican presidential nomination. DeSantis kicked off his bid last month in a Twitter Space and promised to “protect” Bitcoin; current polling has him trailing Trump.

AI in the political sphere

Others in politics have used AI-generated media to attack rivals; even Trump’s campaign has used AI to smear DeSantis.

Shortly after DeSantis announced his presidential bid, Trump posted a video mocking DeSantis’ Twitter-based announcement, using deepfaked audio to create a fake Twitter Space featuring the likenesses of DeSantis, Elon Musk, George Soros, Adolf Hitler, Satan, and Trump.

A screenshot of the video posted by Trump depicting a Twitter Space. Source: Instagram

In April, the Republican Party released an ad with its predictions of what a second term for President Joe Biden would look like; it was packed with AI-generated images depicting a dystopian future.

Related: Forget Cambridge Analytica — Here’s how AI could threaten elections

New Zealand politics has also recently featured AI-made media, with the country’s opposition National Party using generated images to attack the ruling Labour Party in multiple social media posts in May.

The National Party used AI to generate Polynesian hospital workers in a social media campaign. Source: Instagram

One image depicts Polynesian hospital staff, another shows multiple masked men robbing a jewelry store and a third image depicts a woman in a house at night — all were generated using AI tools.

AI Eye: 25K traders bet on ChatGPT’s stock picks, AI sucks at dice throws, and more


Australia asks if ‘high-risk’ AI should be banned in surprise consultation

The Australian government suddenly announced a new eight-week consultation to ask how heavily it should police the AI sector.

The Australian government has announced a sudden eight-week consultation that will seek to understand whether any “high-risk” artificial intelligence tools should be banned.

Other regions, including the United States, the European Union and China, have also launched measures to understand and potentially mitigate risks associated with rapid AI development in recent months.

On June 1, Industry and Science Minister Ed Husic announced the release of two papers — a discussion paper on “Safe and Responsible AI in Australia” and a report on generative AI from the National Science and Technology Council.

The papers came alongside a consultation that will run until July 26.

The government is seeking feedback on how to support the “safe and responsible use of AI” and asks whether it should take voluntary approaches such as ethical frameworks, introduce specific regulation, or pursue a mix of both.

A map of options for potential AI governance with a spectrum from “voluntary” to “regulatory.” Source: Department of Industry, Science and Resources

One question in the consultation directly asks whether any “high-risk AI applications or technologies should be banned completely” and what criteria should be used to identify such tools.

A draft risk matrix for AI models was included for feedback in the comprehensive discussion paper. While intended only to provide examples, it categorized AI in self-driving cars as “high risk,” while a generative AI tool used for a purpose such as creating medical patient records was considered “medium risk.”

The paper highlighted “positive” uses of AI in the medical, engineering and legal industries, as well as “harmful” uses such as deepfake tools, the creation of fake news and cases where AI bots had encouraged self-harm.

Bias in AI models and “hallucinations,” nonsensical or false information generated by AI, were also raised as issues.

Related: Microsoft’s CSO says AI will help humans flourish, cosigns doomsday letter anyway

The discussion paper says AI adoption is “relatively low” in the country because of “low levels of public trust.” It also points to AI regulation in other jurisdictions and Italy’s temporary ban on ChatGPT.

Meanwhile, the National Science and Technology Council report said that Australia has some advantageous AI capabilities in robotics and computer vision, but its “core fundamental capacity in [large language models] and related areas is relatively weak,” and added:

“The concentration of generative AI resources within a small number of large multinational and primarily US-based technology companies poses potentials [sic] risks to Australia.”

The report further discussed global AI regulation, gave examples of generative AI models, and opined they “will likely impact everything from banking and finance to public services, education and creative industries.”

AI Eye: 25K traders bet on ChatGPT’s stock picks, AI sucks at dice throws, and more


AI deepfakes are getting better at spoofing KYC verification: Binance exec

The technology is getting so advanced that deepfakes may soon become undetectable by a human verifier, said Jimmy Su, Binance's chief security officer.

Deepfake technology used by crypto fraudsters to bypass know-your-customer (KYC) verification on crypto exchanges such as Binance is only going to get more advanced, Binance's chief security officer warns.

Deepfakes are made using artificial intelligence tools that use machine learning to create convincing audio, images or videos featuring a person’s likeness. While there are legitimate use cases for the technology, it can also be used for scams and hoaxes.

Speaking to Cointelegraph, Binance chief security officer Jimmy Su said there has been a rise in fraudsters using the tech to try and get past the exchange’s customer verification processes.

“The hacker will look for a normal picture of the victim online somewhere. Based on that, using deep fake tools, they’re able to produce videos to do the bypass.”

Su said the tools have become so advanced that they can even correctly respond to audio instructions designed to check whether the applicant is human, and can do so in real time.

“Some of the verification requires the user, for example, to blink their left eye or look to the left or to the right, look up or look down. The deep fakes are advanced enough today that they can actually execute those commands,” he explained.
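
For illustration, here is a minimal Python sketch of how such a randomized challenge-response liveness check might be structured. The challenge list and the `detect_action` helper are hypothetical, not Binance's actual implementation; production systems combine many more signals, such as texture, depth and timing analysis.

```python
# Hedged sketch of a challenge-response liveness check. Prompts are
# randomized so a pre-recorded or pre-rendered deepfake cannot anticipate
# them, and each response is held to a short deadline, since rendering a
# deepfake in real time tends to introduce lag.
import random

CHALLENGES = ["blink_left_eye", "look_left", "look_right", "look_up", "look_down"]

def detect_action(frame_stream, action: str, timeout: float) -> bool:
    """Hypothetical helper: returns True if `action` is observed in the
    live video within `timeout` seconds (e.g., via facial-landmark tracking)."""
    ...

def run_liveness_check(frame_stream, rounds: int = 3) -> bool:
    for challenge in random.sample(CHALLENGES, rounds):
        # Fail closed: a missing or late response is treated as synthetic.
        if not detect_action(frame_stream, challenge, timeout=3.0):
            return False
    return True
```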

However, Su believes the faked videos are not at the level yet where they can fool a human operator.

“When we look at those videos, there are certain parts of it we can detect with the human eye,” said Su, giving the example of when the user is required to turn their head to the side.

“AI will overcome [them] over time. So it's not something that we can always rely on.”

In August 2022, Binance’s chief communications officer Patrick Hillmann warned that a “sophisticated hacking team” was using his previous news interviews and TV appearances to create a “deepfake” version of him.

The deepfake version of Hillmann was then deployed to conduct Zoom meetings with various crypto project teams promising an opportunity to list their assets on Binance — for a price, of course.

“That's a very difficult problem to solve,” said Su, when asked about how to combat such attacks.

“Even if we can control our own videos, there are videos out there that are not owned by us. So one thing, again, is user education.”

Related: Binance off the hook from $8M Tinder ‘pig butchering’ lawsuit

Binance is planning to release a blog post series aimed at educating users about risk management.

In an early version of the blog post featuring a section on cybersecurity, Binance said it uses AI and machine-learning algorithms to detect unusual login patterns, transaction patterns and other "abnormal activity on the platform."
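
As a rough illustration of the kind of login-anomaly detection Binance describes, the sketch below trains scikit-learn's IsolationForest on historical login features and flags outliers. The feature set, values and contamination rate are hypothetical assumptions, not Binance's actual system.

```python
# Hedged sketch: unsupervised anomaly detection over login metadata.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical per-login features: hour of day, distance from previous
# login (km), device-fingerprint age (days), failed attempts in past hour.
normal_logins = np.column_stack([
    rng.normal(14, 4, 1000),      # mostly daytime logins
    rng.exponential(50, 1000),    # small travel distances
    rng.uniform(30, 720, 1000),   # established devices
    rng.poisson(0.2, 1000),       # rare failed attempts
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_logins)

# A suspicious login: 3 a.m., 8,000 km away, brand-new device, 6 failures.
print(model.predict([[3, 8000, 0, 6]]))  # [-1] marks an outlier for review
```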

AI Eye: ‘Biggest ever’ leap in AI, cool new tools, AIs are the real DAOs


Forget Cambridge Analytica — Here’s how AI could threaten elections

Disinformation is an ongoing problem that social media has only amplified, and AI could make it far easier for bad actors to spread.

In 2018, the world was shocked to learn that British political consulting firm Cambridge Analytica had harvested the personal data of at least 50 million Facebook users without their consent and used it to influence elections in the United States and abroad.

An undercover investigation by Channel 4 News resulted in footage of the firm’s then CEO, Alexander Nix, suggesting it had no issues with deliberately misleading the public to support its political clients, saying:

“It sounds a dreadful thing to say, but these are things that don’t necessarily need to be true. As long as they’re believed.”

The scandal was a wake-up call about the dangers of both social media and big data, as well as how fragile democracy can be in the face of the rapid technological change being experienced globally.

Artificial intelligence

How does artificial intelligence (AI) fit into this picture? Could it also be used to influence elections and threaten the integrity of democracies worldwide?

According to Trish McCluskey, associate professor at Deakin University, and many others, the answer is an emphatic yes.

McCluskey told Cointelegraph that large language models such as OpenAI’s ChatGPT “can generate indistinguishable content from human-written text,” which can contribute to disinformation campaigns or the dissemination of fake news online.

Among other examples of how AI can potentially threaten democracies, McCluskey highlighted AI’s capacity to produce deepfakes, which can fabricate videos of public figures like presidential candidates and manipulate public opinion.

While it is still generally easy to tell when a video is a deepfake, the technology is advancing rapidly and will eventually become indistinguishable from reality.

For example, a deepfake video of former FTX CEO Sam Bankman-Fried that linked to a phishing website shows how lips can often be out of sync with the words, leaving viewers feeling that something is not quite right.

Gary Marcus, an AI entrepreneur and co-author of the book Rebooting AI: Building Artificial Intelligence We Can Trust, agreed with McCluskey’s assessment, telling Cointelegraph that in the short term, the single most significant risk posed by AI is:

“The threat of massive, automated, plausible misinformation overwhelming democracy.”

A 2021 peer-reviewed paper by researchers Noémi Bontridder and Yves Poullet titled “The role of artificial intelligence in disinformation” also highlighted AI systems’ ability to contribute to disinformation and suggested it does so in two ways:

“First, they [AI] can be leveraged by malicious stakeholders in order to manipulate individuals in a particularly effective manner and at a huge scale. Secondly, they directly amplify the spread of such content.”

Additionally, today’s AI systems are only as good as the data fed into them, which can sometimes result in biased responses that can influence the opinion of users.

How to mitigate the risks

While AI clearly has the potential to threaten democracy and elections around the world, it can also play a positive role in democratic processes and help combat disinformation.

For example, McCluskey stated that AI could be “used to detect and flag disinformation, to facilitate fact-checking, to monitor election integrity,” as well as educate and engage citizens in democratic processes.

“The key,” McCluskey adds, “is to ensure that AI technologies are developed and used responsibly, with appropriate regulations and safeguards in place.”

An example of regulations that can help mitigate AI’s ability to produce and disseminate disinformation is the European Union’s Digital Services Act (DSA).

Related: OpenAI CEO to testify before Congress alongside ‘AI pause’ advocate and IBM exec

When the DSA comes fully into effect, large online platforms like Twitter and Facebook will be required to meet a list of obligations intended to minimize disinformation, among other things, or face fines of up to 6% of their annual turnover.

The DSA also introduces increased transparency requirements for these online platforms, requiring them to disclose how they recommend content to users (often done using AI algorithms) and how they moderate content.

Bontridder and Poullet noted that firms are increasingly using AI to moderate content, which they suggested may be “particularly problematic,” as AI has the potential to over-moderate and impinge on free speech.

The DSA applies only to operations in the European Union; McCluskey notes that because AI is a global phenomenon, international cooperation would be necessary to regulate it and combat disinformation.

Magazine: $3.4B of Bitcoin in a popcorn tin — The Silk Road hacker’s story

McCluskey suggested this could occur via “international agreements on AI ethics, standards for data privacy, or joint efforts to track and combat disinformation campaigns.”

Ultimately, McCluskey said that “combating the risk of AI contributing to disinformation will require a multifaceted approach,” involving “government regulation, self-regulation by tech companies, international cooperation, public education, technological solutions, media literacy and ongoing research.”


Sam Bankman-Fried deepfake attempts to scam investors impacted by FTX

A faked video of the FTX founder created by scammers has circulated on Twitter, with users poking fun at its poor production quality.

A faked video of Sam Bankman-Fried, the former CEO of cryptocurrency exchange FTX, has circulated on Twitter attempting to scam investors affected by the exchange’s bankruptcy.

Created using programs to emulate Bankman-Fried’s likeness and voice, the poorly made “deepfake” video attempts to direct users to a malicious site under the promise of a “giveaway” that will “double your cryptocurrency.”

The video uses what appears to be old interview footage of Bankman-Fried and a voice emulator to create the illusion of him saying, “as you know our F-DEX [sic] exchange is going bankrupt, but I hasten to inform all users that you should not panic.”

The fake Bankman-Fried then directs users to a website saying FTX has “prepared a giveaway for you in which you can double your cryptocurrency” in an apparent "double-your-crypto" scam where users send crypto under the promise they'll receive double back.

A now-suspended Twitter account with the handle “S4GE_ETH” is understood to have been compromised, leading to scammers posting a link to the scam website — which now appears to have been taken offline.

The crypto community has pointed out that scammers were able to pay a small fee for Twitter’s “blue tick” verification in order to appear authentic.

Meanwhile, the video received widespread mockery for its poor production quality, with one Twitter user ridiculing how the scam pronounced “FTX” in the video, saying they’re “definitely using [...] ‘Effed-X’ from now on.”

At the same time, it gave many the opportunity to criticize the FTX founder. One user said “fake [Bankman-Fried] at least admits FTX is bankrupt,” and YouTuber Stephen Findeisen shared the video, saying he “can’t tell who lies more” between the real and fake Bankman-Fried.

Related: Crypto scammers are using black market identities to avoid detection: CertiK

Authorities in Singapore warned affected FTX users and investors on Nov. 19 to be vigilant, as websites offering to help recover crypto stuck on the exchange are scams that mostly steal information such as account logins.

The Singapore Police Force warned of one such website, which claimed to be hosted by the United States Department of Justice and prompted FTX users to log in with their account credentials.

Others have attempted to profit from the attention FTX and its former CEO are receiving. On Nov. 14, shortly after Bankman-Fried tweeted “What” without further explanation, some noticed the launch of a so-called “meme token” called WHAT.

“Deepfake” videos have long been used by cryptocurrency scammers to try to con unwitting investors. In May, faked videos of Elon Musk promoting a crypto platform surfaced on Twitter using footage from a TED Talk the month prior.

The video caught Musk’s attention at the time, who responded: “Yikes. Def not me.”
