
The Curve CEO addresses misinformation about the UwU Lend hack and CRV token burn, outlining preventative measures and the repayment of bad debt.
Michael Egorov, the founder and CEO of Curve Finance (CRV), has weighed in on the recent UwU Lend hack, explaining that the incident did not exploit Curve Finance itself.
In a Q&A with Cointelegraph, Egorov clarified that “this was not a Curve exploit. This was an exploit of a separate project [UwU Lend].”
Egorov highlighted measures to prevent future exploits, recommending that UwU Lend “re-verify all contracts and connect them to good security auditors” in hopes of recouping losses.
Artificial intelligence researchers claim to have found an automated, easy way to construct "adversarial attacks" on large language models.
United States-based researchers claim to have found a way to consistently circumvent the safety measures of artificial intelligence chatbots such as ChatGPT and Bard, causing them to generate harmful content.
According to a report released on July 27 by researchers at Carnegie Mellon University and the Center for AI Safety in San Francisco, there’s a relatively easy method to get around safety measures used to stop chatbots from generating hate speech, disinformation, and toxic material.
Well, the biggest potential infohazard is the method itself I suppose. You can find it on github. https://t.co/2UNz2BfJ3H
— PauseAI ⏸ (@PauseAI) July 27, 2023
The circumvention method involves appending long suffixes of characters to prompts fed into chatbots such as ChatGPT, Claude, and Google Bard.
The researchers used the example of asking a chatbot for a tutorial on how to make a bomb, a request it declined to fulfill until the adversarial suffix was appended.
The researchers noted that even though the companies behind these LLMs, such as OpenAI and Google, could block specific suffixes, there is no known way of preventing all attacks of this kind.
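For illustration only, the sketch below shows the general shape of such an attack: an otherwise-refused request with an adversarial suffix concatenated onto it. The `query_chatbot` function and the placeholder suffix are hypothetical stand-ins, not taken from the research; the actual suffixes are produced by the researchers’ automated search and are deliberately not reproduced here.

```python
# Illustrative sketch only: the structure of an adversarial-suffix prompt.
# All names are hypothetical; no real adversarial suffix is included.

def query_chatbot(prompt: str) -> str:
    """Hypothetical stand-in for a call to a chatbot API (returns a mock refusal here)."""
    return "I can't help with that request."

# A request the model would normally refuse.
base_request = "Provide a tutorial on how to do X."

# Placeholder: in the attack, this would be an automatically generated string of tokens.
adversarial_suffix = "<<automatically-generated suffix>>"

# The attack simply concatenates the suffix onto the otherwise-refused request.
attack_prompt = f"{base_request} {adversarial_suffix}"

print(query_chatbot(attack_prompt))
```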
The research also highlighted increasing concern that AI chatbots could flood the internet with dangerous content and misinformation.
Zico Kolter, a professor at Carnegie Mellon and an author of the report, said:
“There is no obvious solution. You can create as many of these attacks as you want in a short amount of time.”
The findings were presented to AI developers Anthropic, Google, and OpenAI for their responses earlier in the week.
OpenAI spokeswoman Hannah Wong told The New York Times that the company appreciates the research and is “consistently working on making our models more robust against adversarial attacks.”
Somesh Jha, a professor at the University of Wisconsin-Madison specializing in AI security, commented that if these types of vulnerabilities keep being discovered, “it could lead to government legislation designed to control these systems.”
Related: OpenAI launches official ChatGPT app for Android
The research underscores the risks that must be addressed before deploying chatbots in sensitive domains.
In May, Pittsburgh, Pennsylvania-based Carnegie Mellon University received $20 million in federal funding to create a brand new AI institute aimed at shaping public policy.
Magazine: AI Eye: AI travel booking hilariously bad, 3 weird uses for ChatGPT, crypto plugins
Disinformation is an ongoing issue that social media has only exacerbated, and AI could make it much easier for bad actors to spread it.
In 2018, the world was shocked to learn that British political consulting firm Cambridge Analytica had harvested the personal data of at least 50 million Facebook users without their consent and used it to influence elections in the United States and abroad.
An undercover investigation by Channel 4 News captured footage of the firm’s then-CEO, Alexander Nix, suggesting it had no issue with deliberately misleading the public to support its political clients, saying:
“It sounds a dreadful thing to say, but these are things that don’t necessarily need to be true. As long as they’re believed.”
The scandal was a wake-up call about the dangers of both social media and big data, as well as how fragile democracy can be in the face of the rapid technological change being experienced globally.
How does artificial intelligence (AI) fit into this picture? Could it also be used to influence elections and threaten the integrity of democracies worldwide?
According to Trish McCluskey, associate professor at Deakin University, and many others, the answer is an emphatic yes.
The Pentagon's chief digital and AI officer Craig Martell warns that generative AI language models like #ChatGPT could become the "perfect tool" for #disinformation. They lack context and people take their words as fact. #AI #cybersecurity pic.twitter.com/pPCHY2zKJH
— Realtime Global Data Intelligence Platform (@KIDataApp) May 5, 2023
McCluskey told Cointelegraph that large language models such as OpenAI’s ChatGPT “can generate indistinguishable content from human-written text,” which can contribute to disinformation campaigns or the dissemination of fake news online.
Among other examples of how AI could threaten democracies, McCluskey highlighted AI’s capacity to produce deepfakes, which can fabricate videos of public figures such as presidential candidates to manipulate public opinion.
While it is still generally easy to tell when a video is a deepfake, the technology is advancing rapidly and may eventually become indistinguishable from reality.
For example, a deepfake video of former FTX CEO Sam Bankman-Fried that linked to a phishing website showed how lips can be out of sync with the words, leaving viewers feeling that something is not quite right.
Over the weekend, a verified account posing as FTX founder SBF posted dozens of copies of this deepfake video offering FTX users "compensation for the loss" in a phishing scam designed to drain their crypto wallets pic.twitter.com/3KoAPRJsya
— Jason Koebler (@jason_koebler) November 21, 2022
Gary Marcus, an AI entrepreneur and co-author of the book Rebooting AI: Building Artificial Intelligence We Can Trust, agreed with McCluskey’s assessment, telling Cointelegraph that in the short term, the single most significant risk posed by AI is:
“The threat of massive, automated, plausible misinformation overwhelming democracy.”
A 2021 peer-reviewed paper by researchers Noémi Bontridder and Yves Poullet titled “The role of artificial intelligence in disinformation” also highlighted AI systems’ ability to contribute to disinformation and suggested it does so in two ways:
“First, they [AI] can be leveraged by malicious stakeholders in order to manipulate individuals in a particularly effective manner and at a huge scale. Secondly, they directly amplify the spread of such content.”
Additionally, today’s AI systems are only as good as the data fed into them, which can sometimes result in biased responses that influence the opinions of users.
Classic, liberal AI bias. #AI #SnapchatAI #GenerativeAI #ArtificialIntelligence (note: I don't vote in elections. This was an idea I had to see how programmers designed this AI to respond in politics.) pic.twitter.com/hhP2v2pFHg
— Dorian Tapias (@CrypticStyle) May 10, 2023
While it is clear that AI has the potential to threaten democracy and elections around the world, it is worth mentioning that AI can also play a positive role in democracy and help combat disinformation.
For example, McCluskey stated that AI could be “used to detect and flag disinformation, to facilitate fact-checking, to monitor election integrity,” as well as educate and engage citizens in democratic processes.
“The key,” McCluskey added, “is to ensure that AI technologies are developed and used responsibly, with appropriate regulations and safeguards in place.”
An example of regulations that can help mitigate AI’s ability to produce and disseminate disinformation is the European Union’s Digital Services Act (DSA).
Related: OpenAI CEO to testify before Congress alongside ‘AI pause’ advocate and IBM exec
When the DSA takes full effect, large online platforms like Twitter and Facebook will be required to meet a list of obligations intended to minimize disinformation, among other things, or be subject to fines of up to 6% of their annual turnover.
The DSA also introduces increased transparency requirements for these online platforms, requiring them to disclose how they recommend content to users, which is often done using AI algorithms, as well as how they moderate content.
Bontridder and Poullet noted that firms are increasingly using AI to moderate content, which they suggested may be “particularly problematic,” as AI has the potential to over-moderate and impinge on free speech.
The DSA applies only to operations in the European Union; McCluskey noted that because AI is a global phenomenon, international cooperation would be necessary to regulate it and combat disinformation.
Magazine: $3.4B of Bitcoin in a popcorn tin — The Silk Road hacker’s story
McCluskey suggested this could occur via “international agreements on AI ethics, standards for data privacy, or joint efforts to track and combat disinformation campaigns.”
Ultimately, McCluskey said that “combating the risk of AI contributing to disinformation will require a multifaceted approach,” involving “government regulation, self-regulation by tech companies, international cooperation, public education, technological solutions, media literacy and ongoing research.”