
deepfake

Pro-Bitcoin DeSantis tagged over AI-faked photos in Trump smear campaign

The images depicting Donald Trump cuddling up to and kissing Anthony Fauci were labeled as being AI-generated on Twitter's disinformation alert feature.

Pro-Bitcoin (BTC) presidential candidate Ron DeSantis has been tagged for apparently using artificial intelligence-generated images in an ad campaign smearing his rival, former president Donald Trump.

It comes amid a rise in AI-generated deepfakes being used in political ads and movements in recent months.

On June 5, DeSantis’ campaign tweeted a video purporting to show Trump’s close support of Anthony Fauci, one of the federal government’s top medical advisers during Trump’s time as president of the United States.

Fauci is a contentious figure in GOP circles for, among other reasons, his handling of the federal response to the COVID-19 pandemic, which many deemed heavy-handed.

The video features a collage of real images of Trump and Fauci mixed in with what appear to be AI-generated images of the pair hugging, some of which depict Trump appearing to kiss Fauci.

Twitter’s Community Notes feature — the platform’s community-driven misinformation-dispelling project — added a disclaimer to the tweet identifying the pictures as “AI-generated images.”

AFP Fact Check, a department within the news agency Agence France-Presse, said the images had "the hallmarks of AI-generated imagery."

A screenshot from the video; the top-left, bottom-middle and bottom-right images are AI-generated. Source: Twitter

DeSantis and Trump are facing off for the Republican presidential nomination. DeSantis kicked off his bid last month in a Twitter Space and promised to “protect” Bitcoin — current polling has him trailing Trump.

AI in the political sphere

Others in politics have used AI-generated media to attack rivals; even Trump’s own campaign has used AI to smear DeSantis.

Shortly after DeSantis announced his presidential bid, Trump posted a video mocking DeSantis’ Twitter-based announcement, using deepfaked audio to create a fake Twitter Space featuring the likeness of DeSantis, Elon Musk, George Soros, Adolf Hitler, Satan, and Trump.

A screenshot of the video posted by Trump depicting a Twitter Space. Source: Instagram

In April, the Republican Party released an ad predicting what a second term for President Joe Biden would look like, packed with AI-generated images depicting a dystopian future.

Related: Forget Cambridge Analytica — Here’s how AI could threaten elections

New Zealand politics has also recently featured AI-made media, with the country’s opposition National Party using generated images to attack the ruling Labour Party in multiple social media posts in May.

The National Party used AI to generate Polynesian hospital workers in a social media campaign. Source: Instagram

One image depicts Polynesian hospital staff, another shows multiple masked men robbing a jewelry store and a third image depicts a woman in a house at night — all were generated using AI tools.


Australia asks if ‘high-risk’ AI should be banned in surprise consultation

The Australian government suddenly announced a new eight-week consultation to ask how heavily it should police the AI sector.

The Australian government has announced a surprise eight-week consultation that will seek to understand whether any “high-risk” artificial intelligence tools should be banned.

Other regions, including the United States, the European Union and China, have also launched measures to understand and potentially mitigate risks associated with rapid AI development in recent months.

On June 1, Industry and Science Minister Ed Husic announced the release of two papers — a discussion paper on “Safe and Responsible AI in Australia” and a report on generative AI from the National Science and Technology Council.

The papers came alongside a consultation that will run until July 26.

The government is seeking feedback on how to support the “safe and responsible use of AI,” including whether it should rely on voluntary approaches such as ethical frameworks, introduce specific regulation, or pursue a mix of both.

A map of options for potential AI governance with a spectrum from “voluntary” to “regulatory.” Source: Department of Industry, Science and Resources

One question in the consultation directly asks whether any high-risk AI applications or technologies should be banned completely, and what criteria should be used to identify AI tools warranting a ban.

A draft risk matrix for AI models was included in the comprehensive discussion paper for feedback. While provided only as an example, it categorized AI in self-driving cars as “high risk,” while a generative AI tool used for a purpose such as creating medical patient records was considered “medium risk.”

The paper highlighted “positive” uses of AI in the medical, engineering and legal industries, but also “harmful” uses such as deepfake tools, the creation of fake news and cases where AI bots had encouraged self-harm.

The bias of AI models and “hallucinations” — nonsensical or false information generated by AI — were also raised as issues.

Related: Microsoft’s CSO says AI will help humans flourish, cosigns doomsday letter anyway

The discussion paper claims AI adoption is “relatively low” in the country owing to “low levels of public trust.” It also pointed to AI regulation in other jurisdictions and Italy’s temporary ban on ChatGPT.

Meanwhile, the National Science and Technology Council report said that Australia has some advantageous AI capabilities in robotics and computer vision, but its “core fundamental capacity in [large language models] and related areas is relatively weak,” and added:

“The concentration of generative AI resources within a small number of large multinational and primarily US-based technology companies poses potentials [sic] risks to Australia.”

The report further discussed global AI regulation, gave examples of generative AI models, and opined they “will likely impact everything from banking and finance to public services, education and creative industries.”


AI deepfakes are getting better at spoofing KYC verification: Binance exec

The technology is getting so advanced that deepfakes may soon become undetectable by a human verifier, said Jimmy Su, Binance’s chief security officer.

Deepfake technology used by crypto fraudsters to bypass know-your-customer (KYC) verification on crypto exchanges such as Binance is only going to get more advanced, Binance's chief security officer warns.

Deepfakes are made using artificial intelligence tools that use machine learning to create convincing audio, images or videos featuring a person’s likeness. While there are legitimate use cases for the technology, it can also be used for scams and hoaxes.

Speaking to Cointelegraph, Binance chief security officer Jimmy Su said there has been a rise in fraudsters using the tech to try and get past the exchange’s customer verification processes.

“The hacker will look for a normal picture of the victim online somewhere. Based on that, using deep fake tools, they’re able to produce videos to do the bypass.”

Su said the tools have become so advanced that they can even correctly respond to audio instructions designed to check whether the applicant is human, and can do so in real time.

“Some of the verification requires the user, for example, to blink their left eye or look to the left or to the right, look up or look down. The deep fakes are advanced enough today that they can actually execute those commands,” he explained.
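To make the flow concrete, below is a minimal Python sketch of the kind of randomized challenge-response liveness check Su describes. The `capture_frame` and `classify_action` helpers, the challenge list and the thresholds are illustrative assumptions, not Binance’s actual verification stack.

```python
import random
import time

# Hypothetical challenge set; real systems use their own prompts.
CHALLENGES = ["blink_left_eye", "look_left", "look_right", "look_up", "look_down"]

def run_liveness_check(capture_frame, classify_action, rounds=3, timeout_s=5.0):
    """Issue random head/eye challenges and verify each response in real time.

    `capture_frame` returns the latest camera frame; `classify_action`
    maps a frame to (detected_action, confidence). Both are assumed helpers.
    """
    for _ in range(rounds):
        challenge = random.choice(CHALLENGES)
        issued_at = time.monotonic()
        # ... prompt the user (via audio or on-screen text) to perform `challenge` ...
        frame = capture_frame()
        action, confidence = classify_action(frame)
        elapsed = time.monotonic() - issued_at
        # Reject wrong, low-confidence or slow responses. As Su notes,
        # real-time deepfakes can increasingly pass even randomized checks.
        if action != challenge or confidence < 0.9 or elapsed > timeout_s:
            return False
    return True
```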

However, Su believes the faked videos are not at the level yet where they can fool a human operator.

“When we look at those videos, there are certain parts of it we can detect with the human eye,” said Su, citing, for example, moments when the user is required to turn their head to the side.

“AI will overcome [them] over time. So it's not something that we can always rely on.”

In August 2022, Binance’s chief communications officer Patrick Hillmann warned that a “sophisticated hacking team” was using his previous news interviews and TV appearances to create a “deepfake” version of him.

The deepfake version of Hillmann was then deployed to conduct Zoom meetings with various crypto project teams promising an opportunity to list their assets on Binance — for a price, of course.

“That's a very difficult problem to solve,” said Su, when asked about how to combat such attacks.

“Even if we can control our own videos, there are videos out there that are not owned by us. So one thing, again, is user education.”

Related: Binance off the hook from $8M Tinder ‘pig butchering’ lawsuit

Binance is planning to release a blog post series aimed at educating users about risk management.

In an early version of the blog post featuring a section on cybersecurity, Binance said that it uses AI and machine learning algorithms for its own purposes, including detecting unusual login and transaction patterns and other "abnormal activity on the platform."
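Binance has not published how that detection works; as a generic illustration of flagging unusual login patterns, here is a minimal sketch using scikit-learn’s IsolationForest on synthetic data. The features and numbers are assumptions for the example only.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic "normal" logins; each row is
# [hour_of_day, km_from_last_login, failed_attempts, new_device_flag].
rng = np.random.default_rng(0)
normal_logins = np.column_stack([
    rng.normal(13, 3, 500),        # mostly daytime logins
    rng.exponential(5, 500),       # usually near the last known location
    rng.poisson(0.1, 500),         # failed attempts are rare
    rng.integers(0, 2, 500) * 0.1, # mostly recognized devices
])

# Fit an anomaly detector on the normal behavior.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

# A 3 a.m. login from far away with many failures on a new device.
suspicious = np.array([[3.0, 8500.0, 6.0, 1.0]])
print(model.predict(suspicious))  # -1 means the login is flagged as anomalous
```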


Forget Cambridge Analytica — Here’s how AI could threaten elections

While disinformation is an ongoing issue that social media has only exacerbated, AI could make it much easier for bad actors to spread disinformation.

In 2018, the world was shocked to learn that British political consulting firm Cambridge Analytica had harvested the personal data of at least 50 million Facebook users without their consent and used it to influence elections in the United States and abroad.

An undercover investigation by Channel 4 News resulted in footage of the firm’s then-CEO, Alexander Nix, suggesting it had no issues with deliberately misleading the public to support its political clients, saying:

“It sounds a dreadful thing to say, but these are things that don’t necessarily need to be true. As long as they’re believed.”

The scandal was a wake-up call about the dangers of both social media and big data, as well as how fragile democracy can be in the face of the rapid technological change being experienced globally.

Artificial intelligence

How does artificial intelligence (AI) fit into this picture? Could it also be used to influence elections and threaten the integrity of democracies worldwide?

According to Trish McCluskey, associate professor at Deakin University, and many others, the answer is an emphatic yes.

McCluskey told Cointelegraph that large language models such as OpenAI’s ChatGPT “can generate indistinguishable content from human-written text,” which can contribute to disinformation campaigns or the dissemination of fake news online.

Among other examples of how AI can potentially threaten democracies, McCluskey highlighted AI’s capacity to produce deepfakes, which can fabricate videos of public figures like presidential candidates and manipulate public opinion.

While it is still generally easy to tell when a video is a deepfake, the technology is advancing rapidly and may eventually become indistinguishable from reality.

For example, a deepfake video of former FTX CEO Sam Bankman-Fried that linked to a phishing website shows how lips can often be out of sync with the words, leaving viewers feeling that something is not quite right.

Gary Marcus, an AI entrepreneur and co-author of the book Rebooting AI: Building Artificial Intelligence We Can Trust, agreed with McCluskey’s assessment, telling Cointelegraph that in the short term, the single most significant risk posed by AI is:

“The threat of massive, automated, plausible misinformation overwhelming democracy.”

A 2021 peer-reviewed paper by researchers Noémi Bontridder and Yves Poullet titled “The role of artificial intelligence in disinformation” also highlighted AI systems’ ability to contribute to disinformation, suggesting they do so in two ways:

“First, they [AI] can be leveraged by malicious stakeholders in order to manipulate individuals in a particularly effective manner and at a huge scale. Secondly, they directly amplify the spread of such content.”

Additionally, today’s AI systems are only as good as the data fed into them, which can sometimes result in biased responses that can influence the opinion of users.

How to mitigate the risks

While it is clear that AI has the potential to threaten democracy and elections around the world, it is worth mentioning that AI can also play a positive role in defending democracy and combating disinformation.

For example, McCluskey stated that AI could be “used to detect and flag disinformation, to facilitate fact-checking, to monitor election integrity,” as well as educate and engage citizens in democratic processes.
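As a rough sketch of the “detect and flag” idea, a simple text classifier can assign posts a disinformation score. The example below uses scikit-learn with a handful of placeholder posts and invented labels; real fact-checking systems depend on far larger curated corpora and human review.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data; labels are invented for illustration.
posts = [
    "Officials confirm polling stations open 8am to 6pm on election day.",
    "BREAKING: voting machines secretly delete ballots, share before removed!!",
    "Electoral commission publishes audited turnout figures for every district.",
    "Leaked memo proves the election result was decided weeks in advance!!",
]
labels = [0, 1, 0, 1]  # 0 = credible, 1 = likely disinformation

# TF-IDF features over unigrams and bigrams feeding a logistic regression.
flagger = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
flagger.fit(posts, labels)

new_post = "Share now!! Proof that tomorrow's vote is already rigged!!"
print(flagger.predict_proba([new_post])[0][1])  # estimated disinformation probability
```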

“The key,” McCluskey adds, “is to ensure that AI technologies are developed and used responsibly, with appropriate regulations and safeguards in place.”

An example of regulations that can help mitigate AI’s ability to produce and disseminate disinformation is the European Union’s Digital Services Act (DSA).

Related: OpenAI CEO to testify before Congress alongside ‘AI pause’ advocate and IBM exec

When the DSA comes fully into effect, large online platforms like Twitter and Facebook will be required to meet a list of obligations intended to minimize disinformation, among other things, or face fines of up to 6% of their annual turnover.

The DSA also introduces increased transparency requirements for these online platforms, requiring them to disclose how they recommend content to users — often done using AI algorithms — as well as how they moderate content.

Bontridder and Poullet noted that firms are increasingly using AI to moderate content, which they suggested may be “particularly problematic,” as AI has the potential to over-moderate and impinge on free speech.

The DSA applies only to operations in the European Union; since AI is a global phenomenon, McCluskey notes, international cooperation would be necessary to regulate it and combat disinformation more broadly.

Magazine: $3.4B of Bitcoin in a popcorn tin — The Silk Road hacker’s story

McCluskey suggested this could occur via “international agreements on AI ethics, standards for data privacy, or joint efforts to track and combat disinformation campaigns.”

Ultimately, McCluskey said that “combating the risk of AI contributing to disinformation will require a multifaceted approach,” involving “government regulation, self-regulation by tech companies, international cooperation, public education, technological solutions, media literacy and ongoing research.”


Sam Bankman-Fried deepfake attempts to scam investors impacted by FTX

A faked video of the FTX founder, created by scammers, has circulated on Twitter, with users poking fun at its poor production quality.

A faked video of Sam Bankman-Fried, the former CEO of cryptocurrency exchange FTX, has circulated on Twitter attempting to scam investors affected by the exchange’s bankruptcy.

Created using programs to emulate Bankman-Fried’s likeness and voice, the poorly made “deepfake” video attempts to direct users to a malicious site under the promise of a “giveaway” that will “double your cryptocurrency.”

The video uses what appears to be old interview footage of Bankman-Fried together with a voice emulator to create the illusion of him saying, “as you know our F-DEX [sic] exchange is going bankrupt, but I hasten to inform all users that you should not panic.”

The fake Bankman-Fried then directs users to a website saying FTX has “prepared a giveaway for you in which you can double your cryptocurrency” in an apparent "double-your-crypto" scam where users send crypto under the promise they'll receive double back.

A now-suspended Twitter account with the handle “S4GE_ETH” is understood to have been compromised, leading to scammers posting a link to the scam website — which now appears to have been taken offline.

The crypto community has pointed out that the scammers were able to pay a small fee for Twitter’s “blue tick” verification in order to appear authentic.

Meanwhile, the video received widespread mockery for its poor production quality with one Twitter user ridiculing how the scam production pronounced “FTX” in the video, saying they’re “definitely using [...] ‘Effed-X’ from now on.”

At the same time, it gave many the opportunity to criticize the FTX founder. One user said “fake [Bankman-Fried] at least admits FTX is bankrupt,” and YouTuber Stephen Findeisen shared the video, saying he “can’t tell who lies more” between the real and fake Bankman-Fried.

Related: Crypto scammers are using black market identities to avoid detection: CertiK

Authorities in Singapore on Nov. 19 warned affected FTX users and investors to be vigilant, as websites offering to assist in recovering crypto stuck on the exchange are scams that mostly steal information such as account logins.

The Singapore Police Force warned of one such website, which claimed to be hosted by the United States Department of Justice and prompted FTX users to log in with their account credentials.

Others have attempted to profit from the attention FTX and its former CEO are receiving. On Nov. 14, shortly after Bankman-Fried tweeted “What” without further explanation, some noticed the launch of a so-called “meme token” called WHAT.

“Deepfake” videos have long been used by cryptocurrency scammers to try to con unwitting investors. In May, faked videos of Elon Musk promoting a crypto platform surfaced on Twitter using footage from a TED Talk the month prior.

The video caught Musk’s attention at the time, who responded: “Yikes. Def not me.”


Hackers Used Deepfake of Binance CCO to Perform Exchange Listing Scams

A set of hackers managed to impersonate Binance chief communications officer (CCO) Patrick Hillmann in a series of video calls with several representatives of cryptocurrency projects. The attackers used what Hillmann described as an “AI hologram,” a deepfake of his image, for this objective, and managed to fool some representatives of these projects, making them […]


Musician sells rights to deepfake her voice using NFTs

“Creating work with the voices of others is something to embrace,” said Holly Herndon.

American musician and composer Holly Herndon seems to be capitalizing on deepfake technology by allowing fans to use a digital version of herself to create original artwork they can then sell.

According to a Thursday announcement from Herndon on Twitter, users who want to make their own deepfakes using the musician’s unique voice and image will have the opportunity to sell their minted creations using nonfungible token, or NFT, marketplace Zora. Herndon said fans can submit their digital copies to be approved by the project’s DAO and would receive 50% of any auction profits.

The project said it would initially release three “genesis” Holly+ NFTs along with submissions from the public, which will be minted using a smart contract and auctioned on Zora next month. Users will receive half of any profits, with 40% given to the DAO and the remainder to Herndon herself. The reserve price for two of the genesis NFTs is 15 Ether (ETH) — roughly $48,150 at the time of publication.
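Working through the announced split on the 15 ETH reserve price (half to the submitting creator, 40% to the DAO and the remainder to Herndon), using the dollar rate implied by the article’s $48,150 figure, a quick check in Python:

```python
# Split announced for Holly+ auctions: 50% creator, 40% DAO, 10% Herndon.
RESERVE_ETH = 15
USD_PER_ETH = 48_150 / RESERVE_ETH  # ~$3,210/ETH, implied by the article

split = {"creator": 0.50, "dao": 0.40, "herndon": 0.10}

for party, share in split.items():
    eth = RESERVE_ETH * share
    print(f"{party}: {eth:.1f} ETH (~${eth * USD_PER_ETH:,.0f})")
# creator: 7.5 ETH (~$24,075); dao: 6.0 ETH (~$19,260); herndon: 1.5 ETH (~$4,815)
```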

“Creating work with the voices of others is something to embrace,” said Herndon. “Anyone can submit artwork using my likeness.”

Related: Deep Truths of Deepfakes — Tech That Can Fool Anyone

Herndon’s digital twin — called Holly+ — may have significant implications for artists wanting to maintain control over their image and voice. Though the musician’s first two NFTs are seemingly unlikely to be mistaken for a natural speaking or singing voice, deepfakes have often been used to spread misinformation or otherwise manipulate the truth.

In this case, with the artist’s consent and encouragement, and with the technology likely to improve in the future, more realistic — and profitable — digital versions of Herndon could arise. For the moment, the DAO serves as a check against profiting from non-approved voice clips.

“Vocal deepfakes are here to stay,” said Herndon at the launch of Holly+ last month. “A balance needs to be found between protecting artists, and encouraging people to experiment with a new and exciting technology. That is why we are running this experiment in communal voice ownership.”
