
Deep Fake

Celebrities boost crypto projects but don’t guarantee legitimacy

In some cases, celebrity backing for a crypto project is itself a red flag, because the endorsement is a scam advertisement fabricated by criminals.

Celebrity backing can make a big difference in the success of a crypto project, but that doesn’t mean the endorsement of a famous person makes it trustworthy. 

According to a 2023 research paper by two former United States Securities and Exchange Commission economists, Joshua White and Sean Wilkoff, celebrity endorsement of a crypto project is linked to a higher likelihood that the project is dubious.

During their research, White and Wilkoff found that in 2019, 26% of the initial coin offerings (ICO) they examined were likely scams. That number increased to nearly 40% by 2023. 


TikTok could soon be flooded with AI avatars in ads

TikTok’s new tools include AI-powered digital avatars that brands can dub over and use to sell their products in multiple languages.

TikTok could soon be awash with ads featuring artificial intelligence-powered “digital avatars” that brands can make say virtually anything to promote their products.

TikTok announced on June 17 that it’s expanding its Symphony ad suite with “stock avatars” and an “AI dubbing” feature it claims helps brands create and localize content.

The stock avatars are “all created from video footage of real paid actors that are licensed for commercial use,” and users can choose an AI-powered “voice and accent” to read out a script which will be dubbed onto the avatar, TikTok said.


AI deepfakes are getting better at spoofing KYC verification: Binance exec

The technology is getting so advanced, deepfakes may soon become undetectable by a human verifier, said Jimmy Su, Binance's Chief Security Officer.

Deepfake technology used by crypto fraudsters to bypass know-your-customer (KYC) verification on crypto exchanges such as Binance is only going to get more advanced, Binance's chief security officer warns.

Deepfakes are made using artificial intelligence tools that use machine learning to create convincing audio, images or videos featuring a person’s likeness. While there are legitimate use cases for the technology, it can also be used for scams and hoaxes.

Speaking to Cointelegraph, Binance chief security officer Jimmy Su said there has been a rise in fraudsters using the tech to try to get past the exchange’s customer verification processes.

“The hacker will look for a normal picture of the victim online somewhere. Based on that, using deep fake tools, they’re able to produce videos to do the bypass.”

Su said the tools have become so advanced that they can even correctly respond to audio instructions designed to check whether the applicant is a human and can do so in real-time.

“Some of the verification requires the user, for example, to blink their left eye or look to the left or to the right, look up or look down. The deep fakes are advanced enough today that they can actually execute those commands,” he explained.
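For context, liveness checks of this kind are typically implemented as a randomized challenge-response loop: the system issues an action and the applicant must perform it on camera within a short window. The Python sketch below is a hypothetical illustration of that flow only; the challenge list, the timing window and the verify_action hook are assumptions, not any exchange’s actual implementation.

```python
import random
import time

# Hypothetical liveness challenges of the kind Su describes.
CHALLENGES = ["blink_left_eye", "look_left", "look_right", "look_up", "look_down"]

def run_liveness_check(capture_frame, verify_action, rounds=3, timeout_s=5.0):
    """Issue random challenges and require each one to be performed on camera.

    capture_frame: callable returning the current video frame.
    verify_action: callable(frames, challenge) -> bool. In production this would be
        a computer-vision model, which is exactly the component that advanced
        real-time deepfakes are increasingly able to fool.
    """
    for _ in range(rounds):
        challenge = random.choice(CHALLENGES)
        print(f"Please perform: {challenge}")
        deadline = time.time() + timeout_s
        frames = []
        while time.time() < deadline:
            frames.append(capture_frame())
        if not verify_action(frames, challenge):
            return False  # challenge not performed in time, reject the session
    return True  # all challenges passed; a convincing real-time deepfake may still reach here
```

Randomizing the challenges raises the bar slightly, but as Su suggests, it does not help once a deepfake can respond to arbitrary commands in real time.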

However, Su believes the faked videos are not at the level yet where they can fool a human operator.

“When we look at those videos, there are certain parts of it we can detect with the human eye,” said Su, citing the example of when the user is required to turn their head to the side.

“AI will overcome [them] over time. So it's not something that we can always rely on.”

In August 2022, Binance’s chief communications officer Patrick Hillmann warned that a “sophisticated hacking team” was using his previous news interviews and TV appearances to create a “deepfake” version of him.

The deepfake version of Hillmann was then deployed to conduct Zoom meetings with various crypto project teams promising an opportunity to list their assets on Binance — for a price, of course.

“That's a very difficult problem to solve,” said Su, when asked about how to combat such attacks.

“Even if we can control our own videos, there are videos out there that are not owned by us. So one thing, again, is user education.”

Related: Binance off the hook from $8M Tinder ‘pig butchering’ lawsuit

Binance is planning to release a blog post series aimed at educating users about risk management.

In an early version of the blog post featuring a section on cybersecurity, Binance said that it uses AI and machine learning algorithms for its own purposes, including detecting unusual login patterns and transaction patterns and other "abnormal activity on the platform."
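Binance did not describe its models, but detecting unusual login or transaction patterns is commonly approached with unsupervised anomaly detection. The sketch below, using scikit-learn’s IsolationForest, is purely illustrative of that general technique; the feature set and example values are assumptions, not anything Binance has disclosed.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative login features (assumed, not Binance's):
# [hour_of_day, distance_km_from_last_login, failed_attempts_24h, new_device_flag]
historical_logins = np.array([
    [9, 2, 0, 0],
    [10, 0, 0, 0],
    [22, 5, 1, 0],
    [8, 1, 0, 0],
    [21, 3, 0, 0],
    [12, 4, 0, 0],
])

# Fit an isolation forest on the account's normal login history.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(historical_logins)

# A login at 3 a.m., thousands of kilometres from the last session, on a new
# device after repeated failed attempts, sits far outside the training data.
suspicious_login = np.array([[3, 8000, 5, 1]])
print(model.predict(suspicious_login))  # -1 indicates an anomaly, 1 a normal login
```

In practice a flag like this would more likely trigger step-up verification than an outright block, though that design choice is an assumption here rather than anything stated by the exchange.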

Magazine: AI Eye: ‘Biggest ever’ leap in AI, cool new tools, AIs are the real DAOs


Amnesty nixes AI-generated images of Colombian protests after criticism

The human rights advocacy group pulled the faked images following widespread online criticism.

Human rights advocacy group Amnesty International has retracted artificial intelligence (AI) generated images it used in a campaign to publicize police brutality in Colombia during national protests in 2021.

The group was criticized for using AI to produce the images for its social media accounts, according to reports. One image, in particular, was highlighted by The Guardian on May 2.

It depicts a woman being dragged away by police during Colombia’s protests against deep and long-standing economic and social inequalities in 2021.

However, a closer look reveals a few discrepancies in the image, such as the uncanny-looking faces, dated police uniforms and a protester wrapped in a flag that does not match the actual flag of Colombia.

The bottom of each image also carries a disclaimer saying the images are produced by an AI.

AI-generated image from Amnesty International. Source: Twitter

Amnesty International told The Guardian it chose to use AI to generate images to protect protesters from possible state retribution. Erika Guevara Rosas, director for Americas at Amnesty, said:

“We have removed the images from social media posts, as we don’t want the criticism for the use of AI-generated images to distract from the core message in support of the victims and their calls for justice in Colombia.”

Photojournalists criticized the use of the images, commenting that in today’s highly polarized era of fake news people are more likely to question the media's credibility.

AI-generated image from Amnesty International. Source: Twitter

Media scholar Roland Meyer commented on the deleted images, stating that “image synthesis reproduces and reinforces visual stereotypes almost by default,” before adding that the images were “ultimately nothing more than propaganda.”

Other images, now deleted by Amnesty, were shared by Twitter users in late April.

AI-generated image from Amnesty International. Source: Twitter

Related: Here’s how the crypto industry is using artificial intelligence

AI is being increasingly used to generate images and visual media. In late April, HustleGPT founder Dave Craige posted a video of the United States Republican Party using AI imagery in its political campaign.

“We all knew that AI and deep-fake images were going to make it to politics, I just didn’t realize it would happen so quickly,” he exclaimed.

Cointelegraph contacted Amnesty for comment but had not received a response at the time of publication.

Magazine: AI Eye: ‘Biggest ever’ leap in AI, cool new tools, AIs are the real DAOs


Elon Musk and tech execs call for pause on AI development

The authors of the letter say that advanced artificial intelligence could cause a profound change in the history of life on Earth, for better or worse.

More than 2,600 tech leaders and researchers have signed an open letter urging a temporary pause on further artificial intelligence (AI) development, fearing “profound risks to society and humanity.”

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and a host of AI CEOs, CTOs and researchers were among the signatories of the letter, which was published by the United States think tank Future of Life Institute (FOLI) on March 22.

The institute called on all AI companies to “immediately pause” training AI systems that are more powerful than GPT-4 for at least six months, sharing concerns that “human-competitive intelligence can pose profound risks to society and humanity,” among other things.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening,” the institute wrote.

GPT-4 is the latest iteration of OpenAI’s artificial intelligence-powered chatbot, which was released on March 14. To date, it has passed some of the most rigorous U.S. high school and law exams, scoring within the 90th percentile. It is understood to be 10 times more advanced than the original version of ChatGPT.

There is an “out-of-control race” between AI firms to develop more powerful AI, which “no one — not even their creators — can understand, predict, or reliably control,” FOLI claimed.

Among the top concerns were whether machines could flood information channels, potentially with “propaganda and untruth” and whether machines will “automate away” all employment opportunities.

FOLI took these concerns one step further, suggesting that the entrepreneurial efforts of these AI companies may lead to an existential threat:

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

“Such decisions must not be delegated to unelected tech leaders,” the letter added.

The institute also agreed with a recent statement from OpenAI founder Sam Altman that an independent review should be required before training future AI systems.

Altman in his Feb. 24 blog post highlighted the need to prepare for artificial general intelligence (AGI) and artificial superintelligence (ASI) robots.

Not all AI pundits have rushed to sign the petition, though. Ben Goertzel, the CEO of SingularityNET, explained in a March 29 Twitter response to Gary Marcus, the author of Rebooting.AI, that large language models (LLMs) won’t become AGIs, a technology that has seen few developments to date.

Instead, he said research and development should be slowed down for things like bioweapons and nukes.

In addition to large language models like ChatGPT, AI-powered deepfake technology has been used to create convincing image, audio and video hoaxes. The technology has also been used to create AI-generated artwork, with some concerns raised about whether it could violate copyright laws in certain cases.

Related: ChatGPT can now access the internet with new OpenAI plugins

Galaxy Digital CEO Mike Novogratz recently told investors he was shocked at the amount of regulatory attention given to crypto, while little has been directed toward artificial intelligence.

“When I think about AI, it shocks me that we’re talking so much about crypto regulation and nothing about AI regulation. I mean, I think the government’s got it completely upside-down,” he opined during a shareholders call on March 28.

FOLI has argued that if an AI development pause is not enacted quickly, governments should step in with a moratorium.

“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it wrote.

Magazine: How to prevent AI from ‘annihilating humanity’ using blockchain
