
ChatGPT ‘Is the New Crypto,’ Meta Says Malware Actors Exploit AI Craze

A growing number of malware creators are now taking advantage of the significant interest in ChatGPT to lure victims, Facebook owner Meta has noticed. According to its head of information security, the AI-based chatbot is “the new crypto” for bad actors, and the social media giant is preparing for various abuses. Malware Inspired by ChatGPT […]

99.6% of Pump.fun traders haven’t locked in over $10K in profits: Data

ChatGPT and AI the newest vector for malware: Meta security team

Security researchers at Meta said “bad actors” have flocked to generative AI because it’s the latest tech to capture “people’s imagination and excitement.”

Artificial intelligence tools such as ChatGPT have become the latest way for “bad actors” to distribute malware, scams and spam, research from Meta’s security team warns.

A May 1 research report from Facebook parent Meta’s security team found 10 malware families posing as ChatGPT and similar artificial intelligence tools in March, some of which were found in various browser extensions, noting: 

“Since March alone, our security analysts have found around 10 malware families posing as ChatGPT and similar tools to compromise accounts across the internet.”

Meta explained that these “bad actors” — malware operators, scammers, spammers and the like — have moved to AI because it’s the “latest wave” of what is capturing “people’s imagination and excitement.”

The research comes amid major interest in artificial intelligence, with ChatGPT in particular capturing much of the attention brought to AI.

“For example, we’ve seen threat actors create malicious browser extensions available in official web stores that claim to offer ChatGPT-related tools,” it added.

Meta’s security team said some of these “malicious extensions” did include working ChatGPT functionality that coexisted alongside the malware.

The firm’s security team then explained that bad actors tend to move to where the latest craze is, referencing the hype around digital currency and the scams that have come from it:

“This is not unique to the generative AI space. As an industry, we’ve seen this across other topics popular in their time, such as crypto scams fueled by the interest in digital currency.”

“The generative AI space is rapidly evolving and bad actors know it,” they added, stressing the need to be “vigilant.”

Guy Rosen, Meta’s chief information security officer, went one step further in a recent interview with Reuters by stating that “ChatGPT is the new crypto” for these bad actors.

Related: OpenAI launches bug bounty program to combat system vulnerabilities

It should be noted, however, that Meta is making its own developments in generative AI.

Meta AI is currently building various forms of AI to help improve its augmented and virtual reality technologies.

Despite the company’s heavy investment in the metaverse, AI is now Meta’s single largest investment, according to chief executive Mark Zuckerberg.

Cointelegraph contacted OpenAI — the team behind ChatGPT — for comment.

Magazine: NFT Creator, Emily Xie: Creating ‘organic’ generative art from robotic algorithms

Crypto.com Unveils Amy: An AI-Powered Companion for Crypto Enthusiasts

This year, the world has witnessed a surge in the popularity of artificial intelligence (AI) software, with a plethora of cutting-edge platforms such as OpenAI’s ChatGPT 3.5, ChatGPT 4.0, DALL-E, Stable Diffusion, and other innovative tools like Midjourney and Google’s Bard taking the internet by storm. Amidst this technological revolution, Crypto.com’s CEO Kris Marszalek recently […]

‘Godfather of AI’ resigns from Google, warns of the dangers of AI

Dr. Geoffrey Hinton is understood to have worked on artificial intelligence his whole life and is now warning how dangerous the technology could be.

An artificial intelligence (AI) pioneer nicknamed the “Godfather of AI” has resigned from his position at Big Tech firm Google so he can speak more openly about the potential dangers of the technology.

Before resigning, Dr. Geoffrey Hinton worked at Google on machine learning algorithms for more than a decade. He reportedly earned his nickname due to his lifelong work on neural networks.

However, in a tweet on May 1, Hinton clarified that he left his position at Google “so that I could talk about the dangers of AI.”

In an interview with The New York Times, Hinton said his most immediate concern with AI was its use to flood the internet with fake photos, videos and text, voicing worry that many won’t “be able to know what is true anymore.”

Hinton’s other worries concerned AI taking over jobs. In the future, he believes, AI could pose a threat to humanity as it learns unexpected behaviors from the massive amounts of data it analyzes.

He also expressed concern at the continuing AI arms race that seeks to further develop the tech for use in lethal autonomous weapons systems (LAWS).

Hinton also expressed some partial regret over his life's work:

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”

In recent months, regulators, lawmakers and tech industry executives have also expressed concern about the development of AI. In March, over 2,600 tech executives and researchers signed an open letter urging a temporary halt to AI development, citing “profound risks to society and humanity.”

A group of 12 European Union lawmakers signed a similar letter in April, and a recent EU draft bill classifies AI tools by their risk levels. The United Kingdom is also extending $125 million to support a task force developing “safe AI.”

AI used in fake news campaigns and pranks

AI tools are already reportedly being used for disinformation, with recent examples of media outlets tricked into publishing fake news, while one German outlet even used AI to fabricate an interview.

On May 1, Binance claimed it was the victim of a ChatGPT-originated smear campaign and shared evidence of the chatbot claiming its CEO Changpeng “CZ” Zhao was a member of a Chinese Communist Party youth organization.

The bot linked to a Forbes article and a LinkedIn page it claimed to have sourced the information from; however, the article appears not to exist, and the LinkedIn profile isn’t Zhao’s.

Last week, a group of pranksters also tricked multiple media outlets around the world, including the Daily Mail and The Independent.

Related: Scientists in Texas developed a GPT-like AI system that reads minds

The Daily Mail published and later took down a story about a purported Canadian actor called “Saint Von Colucci” who was said to have died after a plastic surgery operation to make him look more like a South Korean pop star.

The news came from a press release regarding the actor's death, which was sent by an entity masquerading as a public relations firm and used what appeared to be AI-generated images.

A picture sent to multiple media outlets purporting to be Saint Von Colucci. Source: Internet Archive

In April, the German outlet Die Aktuelle published an interview that used ChatGPT to synthesize a conversation with former Formula One driver Michael Schumacher, who suffered a serious brain injury in a 2013 skiing accident.

It was reported Schumacher’s family would take legal action over the article.

AI Eye: ‘Biggest ever’ leap in AI, cool new tools, AIs are real DAOs

Solana Labs’ ChatGPT plugin allows AI to fetch blockchain data

The plugin will allow the AI chatbot to check wallet balances, transfer tokens and purchase NFTs when OpenAI makes plugins more widely available.

Solana (SOL) users will soon be able to interact with the network through an open-source plugin enabled on OpenAI’s artificial intelligence (AI) chatbot ChatGPT.

The plugin will allow ChatGPT to check wallet balances, transfer Solana-native tokens and purchase nonfungible tokens (NFTs) when OpenAI makes plugins available, according to an April 25 tweet by Solana Labs, the development firm behind the Solana blockchain.

Solana Labs is also encouraging developers to test out the open-source code to retrieve on-chain data that they may be interested in.

The screenshot shared by Solana Labs shows ChatGPT retrieving a list of NFTs owned by a particular Solana address, along with a link to each NFT’s metadata, presumably sourced from Solana Labs’ block explorer.
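Solana Labs has not published the plugin’s internals here, but a wallet-balance lookup of the kind described typically reduces to a single JSON-RPC call against a Solana RPC node. The sketch below is a minimal Python illustration, not the plugin’s actual code; the endpoint URL and helper names are assumptions. It builds the request body for Solana’s `getBalance` method and converts the lamport-denominated result to SOL:

```python
import json

# Public mainnet RPC endpoint (illustrative; heavily rate-limited in practice).
SOLANA_RPC_URL = "https://api.mainnet-beta.solana.com"
LAMPORTS_PER_SOL = 1_000_000_000  # 1 SOL = 10**9 lamports


def get_balance_payload(address: str, request_id: int = 1) -> dict:
    """Build the JSON-RPC 2.0 request body for Solana's `getBalance` method."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "getBalance",
        "params": [address],
    }


def lamports_to_sol(lamports: int) -> float:
    """Convert a lamport balance (as returned by the RPC) into SOL."""
    return lamports / LAMPORTS_PER_SOL


# Serialize the body that would be POSTed to the RPC endpoint.
body = json.dumps(get_balance_payload("So11111111111111111111111111111111111111112"))
```

A plugin wrapping calls like this one would then hand the JSON response back to ChatGPT to summarize for the user.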

Solana Labs did not mention whether the plugin would be launched when OpenAI makes the plugin feature available to all.

The new ChatGPT plugins work by retrieving information from online sources and interacting with third-party websites to respond to commands requested by the user. The feature is currently being rolled out to all users.

However, not everyone is satisfied with the development.

One Twitter user asked Solana to focus first on developing a “working block explorer,” while another questioned what benefit the plugin would bring to the ecosystem.

It appears as though Solana Labs is now placing more focus on AI, having also announced on April 25 that it will provide $1 million in funding toward projects that build AI tools on Solana.

ChatGPT users can now delete chat history

On the same day, OpenAI announced ChatGPT users can now “turn off” their chat history, thanks to a new privacy feature.

The team announced the rollout in an April 25 statement, saying the feature was launched to give users more control over their data. The firm added:

“Conversations that are started when chat history is disabled won’t be used to train and improve our models, and won’t appear in the history sidebar.”

The feature can be found in ChatGPT’s settings, which can be changed at any time, OpenAI said.

OpenAI explained that deleted conversations will be retained for 30 days for the purposes of reviewing them to monitor abusive material. Once that is cleared, conversations will be permanently deleted.

Related: First of many? How Italy’s ChatGPT ban could trigger a wave of AI regulation

The AI firm also added a new “export” option for users to download their data and better understand what information ChatGPT stores.

The new privacy feature comes as Italy recently became the first European country to ban ChatGPT until it complies with the European Union’s user privacy laws pursuant to the General Data Protection Regulation (GDPR).

Over half of Americans fear ‘major impact’ by AI on workers: Survey

More respondents said AI will “hurt” American workers more than it will “help” them over the next 20 years.

Nearly two-thirds (62%) of Americans think implementing artificial intelligence (AI) in the workplace will have a “major impact” on American workers within the next 20 years, leaving many employees “wary” and “worried” about what their future holds.

An April 20 Pew Research report found 56% of the 11,004 adults surveyed in the United States said AI will have a major impact on the U.S. economy too. Another 22% believed AI will impact the economy to a minor degree.

Only 13% of participants believed “AI will help more than hurt” American workers, whereas 32% thought the opposite. The rest of the participants predicted “AI will equally help and hurt” American employees (32%) or were unsure (22%).

The study didn’t directly ask participants whether they thought they would lose their jobs to AI, but many respondents worried that an AI-enabled workplace would lead to increased surveillance, data mismanagement and misinterpretation.

Pew Research said there is a “consensus” that many American workers feel like they would be watched “Big Brother” style, with 81% citing the concern.

Some 71% of respondents said they oppose the idea of AI being used to help make a final decision in the hiring process.

Nearly two-thirds said they would be most bothered by AI tracking their minute-to-minute movements, and around half cited potential frustrations around an AI keeping track of how many hours they’re at their desk and recording exactly what they’re working on.

For every participant that was in favor of AI being used in the hiring process, 10 opposed it. Source: Pew Research

Just under 40% cited concern that AI would be used to evaluate their performance.

Despite the mixed views on what AI would offer to the workforce, two-thirds of respondents said they wouldn’t want to apply for a job where AI was used to make hiring decisions.

One surveyed man in his 60s explained that AI shouldn’t be used for that purpose because it can’t judge character:

“AI can’t factor in the unquantifiable intangibles that make someone a good co-worker ... or a bad co-worker. Personality traits like patience, compassion and kindness would be overlooked or undervalued.”

“It’s a ‘garbage in, garbage out’ problem,” another surveyed woman explained.

Not everyone agreed, though; a man in his 50s explained that AI has the potential to fill the shoes of a hiring manager:

“I think the AI would be able to evaluate all my skills and experience in their entirety where a human may focus just on what the job requires. The AI would see beyond the present and see my potential over time.”

Just under half of the participants said AI would do “better” than hiring managers at treating all applicants the same way, while 15% said it would be “worse” and under 15% said it would be “about the same.”

Related: 7 artificial intelligence examples in everyday life

Those surveyed who claimed AI would lead to “better” treatment explained the technology would help circumvent biases and discrimination based on age, gender and race.

Others believed AI may reinforce the same prejudices that companies are trying to eradicate.

The motivation to carry out the study was partly prompted by what Pew Research describes as the “rapid rise of ChatGPT” — an AI chatbot released by OpenAI on Nov. 30.

Elon Musk threatens Microsoft with suit, claims AI trained on Twitter data

The Twitter chief alleged Microsoft scraped information from the platform to train its AI and sell the data to others.

Microsoft has been threatened with a lawsuit by Tesla and Twitter chief Elon Musk, who claimed the Big Tech firm “illegally” trained its artificial intelligence (AI) on Twitter data.

On April 19, Musk tweeted that it was “lawsuit time” in response to a post reporting that Microsoft would cease supporting Twitter on April 25 across its online social advertising tools, Smart Campaigns and Multi-platform.

The Twitter boss alleged Microsoft “trained illegally using Twitter data,” implying the firm mined user tweets to help train its AI-powered applications.

Microsoft didn’t explain why it is winding down Twitter support, although Twitter’s API fees have skyrocketed from free to $42,000 a month, and in some cases upward of $200,000 per month, according to a March report from Wired.

Musk made further allegations that Microsoft is “demonetizing” Twitter data by removing advertisements and “then selling our data to others.”

Microsoft’s decision to ditch Twitter means its customers will lose access to their Twitter accounts through its tools, along with the ability to create, manage, view and schedule tweets.

Microsoft has scrapped Twitter advertisements from its Multi-platform. Source: Microsoft

Facebook, Instagram and LinkedIn remain available to Microsoft customers, its website states.

Related: Microsoft Azure Marketplace integrates on-ramp to blockchain data

Microsoft’s decision comes a few months after Twitter stopped providing free access to the Twitter API for versions 1.1 and 2.

Academics have been hit hard by the huge price hike. Over 17,500 academic papers have been based on Twitter data since 2020, and researchers have now been largely priced out.

Cointelegraph contacted Microsoft, which declined to comment on Musk’s claims and its decision to scrap Twitter ads support.

The software company is now reportedly developing its own AI chips to power ChatGPT and deal with rising development costs for in-house and OpenAI projects.

Microsoft is the second largest company in the world by market cap behind Apple, with a $2.15 trillion valuation according to Google Finance.

Musk to Launch ‘TruthGPT,’ Says Microsoft-Backed Chatbot Is Trained to Lie

Tech investor Elon Musk intends to develop an artificial intelligence (AI) platform that will be “truth-seeking” and safe for mankind. Admitting he is starting late, the billionaire nevertheless vowed to try to present a “third option” that will challenge the products of giants Microsoft and Google. Elon Musk Slams Microsoft-Funded OpenAI, Google Founder for AI […]

Musk Mulls AI Startup to Rival ChatGPT Maker OpenAI: Report

Entrepreneur Elon Musk is preparing to launch a startup that will compete with OpenAI, the creator of ChatGPT, a media report unveiled. According to quoted knowledgeable sources, the owner of Twitter and Tesla is already assembling a team of developers and talking to investors. Elon Musk Reportedly Working to Establish OpenAI Rival, Registers X.AI Corp […]

Everyone on Earth Will Die Unless Major AI Changes Are Implemented, Warns Artificial Intelligence Expert

An expert in the field of artificial intelligence is issuing a dire warning on the future of the rapidly-developing technology. In a new piece written for Time Magazine, AI research pioneer Eliezer Yudkowsky says the methods and structures currently used to grow AI are placing humanity in serious danger. Yudkowsky points to an open letter […]

The post Everyone on Earth Will Die Unless Major AI Changes Are Implemented, Warns Artificial Intelligence Expert appeared first on The Daily Hodl.
