
AI deepfakes are getting better at spoofing KYC verification: Binance exec

The technology is getting so advanced, deepfakes may soon become undetectable by a human verifier, said Jimmy Su, Binance's Chief Security Officer.

Deepfake technology used by crypto fraudsters to bypass know-your-customer (KYC) verification on crypto exchanges such as Binance is only going to get more advanced, Binance's chief security officer warns.

Deepfakes are made using artificial intelligence tools that use machine learning to create convincing audio, images or videos featuring a person’s likeness. While there are legitimate use cases for the technology, it can also be used for scams and hoaxes.

Speaking to Cointelegraph, Binance chief security officer Jimmy Su said there has been a rise in fraudsters using the tech to try and get past the exchange’s customer verification processes.

“The hacker will look for a normal picture of the victim online somewhere. Based on that, using deep fake tools, they’re able to produce videos to do the bypass.”

Su said the tools have become so advanced that they can even correctly respond to audio instructions designed to check whether the applicant is a human and can do so in real-time.

“Some of the verification requires the user, for example, to blink their left eye or look to the left or to the right, look up or look down. The deep fakes are advanced enough today that they can actually execute those commands,” he explained.

However, Su believes the faked videos are not at the level yet where they can fool a human operator.

“When we look at those videos, there are certain parts of it we can detect with the human eye,” said Su, citing the example of when the user is required to turn their head to the side.

“AI will overcome [them] over time. So it's not something that we can always rely on.”

In August 2022, Binance’s chief communications officer Patrick Hillmann warned that a “sophisticated hacking team” was using his previous news interviews and TV appearances to create a “deepfake” version of him.

The deepfake version of Hillmann was then deployed to conduct Zoom meetings with various crypto project teams promising an opportunity to list their assets on Binance — for a price, of course.

“That's a very difficult problem to solve,” said Su, when asked about how to combat such attacks.

“Even if we can control our own videos, there are videos out there that are not owned by us. So one thing, again, is user education.”

Related: Binance off the hook from $8M Tinder ‘pig butchering’ lawsuit

Binance is planning to release a blog post series aimed at educating users about risk management.

In an early version of the blog post featuring a section on cybersecurity, Binance said that it uses AI and machine learning algorithms for its own purposes, including detecting unusual login patterns and transaction patterns and other "abnormal activity on the platform."

AI Eye: ‘Biggest ever’ leap in AI, cool new tools, AIs are the real DAOs

Peter Schiff Claims Bitcoin Superpower Status Will Make America Weaker

Is ChatGPT king? How top free AI chatbots fared during field testing

Competition is heating up with several new AI chatbots flooding the market, and if you don’t want to pay a monthly subscription, OpenAI may not be the best choice.

While OpenAI’s ChatGPT was the first artificial intelligence (AI)-powered chatbot to captivate the world after its public release in November 2022, a variety of competitors have entered the marketplace since then.

Tech giants Google and Microsoft have launched their own AI chatbots, with Google’s Bard removing its waitlist and opening up to over 180 countries and territories on May 10, after Microsoft beat it to the punch by fully releasing its AI-powered Bing search engine on May 4.

With several chatbots to choose from, Cointelegraph decided to put some of the most well-known through their paces to see which held up best during field testing, as well as comparing some of their features.

To test the chatbots, they were each asked a series of questions, riddles and more complex prompts to determine their accuracy and speed of responses.

Many AI chatbots available today are powered by OpenAI’s GPT models. While these AI chatbots may give similar results to ChatGPT, the app developers can also add additional commands, which may change the results.

OpenAI’s ChatGPT-3.5

While OpenAI has already released ChatGPT-4, which is available to Plus plan users for $20 per month, ChatGPT-3.5 is free to use and is tested here.

ChatGPT-4 significantly outperforms its predecessor with faster response speeds, more accurate responses and less server downtime.

The first AI chatbot to take the world by storm can help with tasks like essay writing, code debugging and even personal finances after only a second or so of processing time.

However, one area where ChatGPT underperforms is its inability to search the internet.

This means the model is only as good as the training data fed into it, which goes up until September 2021. OpenAI is rolling out plugins that allow it to source online information using Bing’s search API, but this will be limited to users on the Plus plan.

Despite this shortcoming in the free version, the chatbot is still usually able to suggest resources to help the user with their query, as highlighted in the interaction below.

A screenshot illustrating ChatGPT-3.5’s inability to speak of recent events. Source: OpenAI

ChatGPT-3.5 correctly answered most of the riddles it was given and all the simple math problems, but the answers were less consistently correct when it was asked more complex problems.

For example, when asked to solve the quadratic equation 2t^2 + 0.3t - 0.4 = 0, ChatGPT-3.5 returned the correct answer in one out of three attempts and had similar issues multiplying larger numbers.
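For reference, that equation has two real roots, which a few lines of Python can verify via the quadratic formula (the function below is an illustrative check written for this article, not part of any chatbot's output):

```python
import math

def solve_quadratic(a: float, b: float, c: float) -> tuple[float, float]:
    """Return the two real roots of a*t^2 + b*t + c = 0."""
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("no real roots")
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

# The equation posed to the chatbots: 2t^2 + 0.3t - 0.4 = 0
t1, t2 = solve_quadratic(2, 0.3, -0.4)
print(round(t1, 4), round(t2, 4))  # roughly 0.3785 and -0.5285
```

Plugging either root back into the left-hand side returns a value within floating-point error of zero, which is the standard against which the chatbots' answers were judged.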

ChatGPT-3.5 can also be inaccurate when answering other questions. According to OpenAI’s testing, it was only able to correctly answer 213 of 400 questions in the Uniform Bar Exam, which law school graduates in the United States must pass before they can become practicing lawyers.

Outside of factual inaccuracies, ChatGPT-3.5 also struggled with questions designed to test its logical ability, such as the one below.

ChatGPT incorrectly answers a question aimed to test its logical ability. Source: OpenAI

Microsoft’s Bing

Bing’s chatbot is based on the GPT-4 language model created by OpenAI, but the two chatbots have several key differences.

The first noticeable difference is that it takes Bing’s chatbot much longer to respond to questions, with an average response time of approximately five seconds compared with OpenAI’s ChatGPT taking only one second.

It also requires users to use the Microsoft Edge web browser, which is nowhere near as popular as Google Chrome.

On the positive side, Bing’s chatbot utilizes the Bing search engine in its responses, allowing it to answer questions about current events, unlike any other chatbot using GPT-4. It’s also currently available for free.

Additionally, it provides sources for its answers, letting users more easily verify claims made by the chatbot.

Microsoft’s Bing ChatGPT in action. Source: Bing

Using the same quadratic equation 2t^2 + 0.3t - 0.4 = 0, Bing linked to Microsoft Math Solver but often gave an incorrect answer and had similar issues correctly answering larger multiplications.

In the same logical question about the bookmark posed to ChatGPT-3.5, Bing correctly answered that you would expect to see the bookmark on page 120.

Google’s Bard

Google recently released its AI chatbot Bard, which runs on its PaLM 2 language model.

As pointed out in a Twitter thread by AI enthusiast Moritz Kremb, it can both respond and be prompted with images, supports numerous programming languages and, like Bing’s chatbot, can connect to the internet.

When asked how PaLM 2 compares with GPT-4, Bard said that GPT-4 is better at generating text, but PaLM 2 is better at reasoning and logic, adding:

“Ultimately, the best language model for you depends on your needs. If you need an LLM that’s strong at reasoning and logic, then Palm 2 is the better choice. If you need an LLM that’s fast, good at generating text and has proved itself, then GPT-4 is the better choice.”

Bard correctly answered the bookmark question and it explained its answer in more depth than Bing, but the explanations were often nonsensical.

Related: What is Google’s Bard, and how does it work?

It solved most of the riddles it was given and performed well on the math questions, correctly solving the complex multiplication questions and the quadratic equation in two of the three draft answers it prepared.

YouChat

While it also uses OpenAI’s GPT-3.5, there are some differences between You.com’s YouChat and OpenAI’s ChatGPT.

It lists sources for most of the text it generates and also provides links to several web pages related to the query.

It also connects to the internet, allowing it to access current events, and because it doesn’t have the same level of popularity as OpenAI’s chatbot, downtime is not an issue.

It incorrectly answered the bookmark question, the quadratic equation and the more complex multiplication problem.

It was able to solve most of the riddles given to it but incorrectly answered some.

HuggingChat

HuggingChat is an open-source AI chatbot from the AI firm Hugging Face, released in April.

Asked to solve the same quadratic equation, HuggingChat returned 684 words of text and failed to provide an answer to the question. While it could correctly answer simple problems, it could not multiply larger numbers.

While it sometimes gave direct answers, HuggingChat often returned vast walls of text, which were relevant initially but devolved into something akin to rambling.

For example, it was asked to solve the following riddle: “A barrel of water weighed 60 pounds. Someone put something in it, and now it weighs 40 pounds. What did the person add?”

The correct answer is a hole, but HuggingChat replied “ice cubes” before launching into a 545-word monologue.

What about the rest?

There are many other AI chatbots currently available, designed for more limited use cases than the ones mentioned here, with the market likely to continue growing rapidly.

For example, Socratic is another AI chatbot from Google that can be downloaded onto a smartphone to help users answer questions on science, math, literature and more. It also provides visual explanations of concepts in different subjects and is a useful tool to aid learning.

DeepAI is an AI chatbot that specializes in writing text such as programming code, poems, stories or essays.

Conclusion

While it might be unfair to compare OpenAI’s ChatGPT-3.5 to Bing’s AI chatbot — given they are using different language models — this article intends to only look at AI chatbots available for free.

Through Bing, users can take advantage of OpenAI’s ChatGPT-4 language model, which is a huge improvement from its predecessor.

While Google’s Bard was promising, Bing generally performed the best of the current freely available AI chatbots, but still made some mistakes.

Other chatbots target more limited use cases where they may prove more useful, but these three seem to lead the way as development progresses.

Magazine: Cryptocurrency trading addiction — What to look out for and how it is treated

The above represents an informal field testing of different AI solutions and is by no means exhaustive or representative of Cointelegraph’s position on a particular AI solution.


ChatGPT creator OpenAI is releasing an open-source AI model: Report

The firm behind ChatGPT is seemingly under pressure from other available open-source AI models and is prepping to enter the space with one of its own.

An open-source artificial intelligence (AI) model is reportedly being prepared for public release by OpenAI, the firm behind the AI chatbot ChatGPT.

According to a May 16 report from The Information citing a person with knowledge of the plan, OpenAI is making the move as pressure mounts from competing open-source AI models, such as those leaked from Meta in February.

The timeline of when the model would be released was not reported.

The open-source model would reportedly not be competitive with OpenAI’s flagship ChatGPT product, as the firm’s value comes from selling access to its more sophisticated models.

OpenAI has faced stiff competition from open-source AI models such as Meta's LLaMa — which was originally limited to researchers but was leaked in full by a user from the imageboard site 4chan in late February.

Other open-source models include those from Stability AI, which opened its large language models in April, along with Databricks’ Dolly 2.0 AI, which it open-sourced days prior to Stability AI.

Open-source models mean the complete code is open to everyone. Anyone has the right to modify the models for any reason or fit them to specific purposes. Some firms choose to open-source their software as they believe it could benefit from contributions by outside developers.

Developers of such models are also attracting significant backing.

Related: MakerDAO publishes 5-phase roadmap featuring funding for open-source AI projects

On May 15, AI firm Together said it raised $20 million in a seed round backed by crypto figures including Oasis Labs co-founder Dawn Song, OpenSea co-founder Alex Atallah and Uniswap COO Mary-Catherine Lader. Its stated mission is to provide open-source generative AI models.

Earlier in May, a leaked document from Google senior software engineer Luke Sernau pointed to open-source AI models as a significant threat to the company's own AI efforts.

“The uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI," Sernau wrote.

He added that while Google was distracted by its competition with OpenAI, open-source AI models quietly became significantly more advanced. “They are lapping us," he wrote. “Open-source models are faster, more customizable, more private, and pound-for-pound more capable."

Cointelegraph contacted OpenAI for comment but did not immediately receive a response.


Update (May 16, 2:50 am UTC): This article has been updated to include more information from the leaked Google document and competing open-source AI models.


OpenAI CEO in ‘advanced talks’ for $100M Worldcoin funding: Report

“Existing and new investors” will contribute to the $100 million, according to someone familiar with the situation.

OpenAI boss Sam Altman is reportedly in “advanced talks” of securing $100 million funding for Worldcoin, a project aimed at creating a collectively owned and globally distributed cryptocurrency.

A Financial Times report published on May 15 cited sources with knowledge of Worldcoin’s funding talks, stating that the $100 million will be sourced from a mix of “existing and new investors.”

When it was first revealed to the world, the startup boasted a Series A funding round led by a16z, with investors that also included Digital Currency Group, Coinbase Ventures as well as former FTX CEO Sam Bankman-Fried and LinkedIn co-founder Reid Hoffman.

In March 2022, a report from The Information claimed that the firm was raising $100 million from investors through a private token sale, citing two people with knowledge of the matter.

In the most recent report, one person familiar with the matter said the potential new funding was sizeable given the extended crypto winter.

“It’s a bear market, a crypto winter. It’s remarkable for a project in this space to get this amount of investment,” the source said.

Cointelegraph reached out to Worldcoin but did not receive an immediate response.

Related: Aleph Zero launches $50M ecosystem funding program

Work on the Worldcoin project, co-founded by Alex Blania, Sam Altman and Max Novendstern, started in early 2020.

According to Worldcoin executives, the aim of the project is to “tackle two problems” raised by the growing complexity of artificial intelligence. 

Meanwhile, Worldcoin is preparing to launch its blockchain protocol and commence recording transactions within “the next six weeks,” after having been operating in beta.

On May 8, Worldcoin launched its own gas-free crypto wallet for verified humans.

Worldcoin team member Tiago Sada previously told Cointelegraph that the wallet was launched so “there is an alternative wallet that is focused just on simplicity.”

Magazine: How to control the AIs and incentivize the humans with crypto


Hedge Fund Mogul Stanley Druckenmiller Warns of ‘Hard Landing’ for US Economy

Billionaire hedge fund manager Stanley Druckenmiller has a dire prediction for the U.S. economy: a recession is looming, and it’s likely set to hit this June. Druckenmiller’s forecast comes as American consumer spending remains low, and is largely driven by credit card usage. Druckenmiller, a seasoned investment mogul, warns that it would be foolish to […]


Forget Cambridge Analytica — Here’s how AI could threaten elections

While disinformation is an ongoing issue that social media has only contributed to, AI could make it much easier for bad actors to spread disinformation.

In 2018, the world was shocked to learn that British political consulting firm Cambridge Analytica had harvested the personal data of at least 50 million Facebook users without their consent and used it to influence elections in the United States and abroad.

An undercover investigation by Channel 4 News resulted in footage of the firm’s then CEO, Alexander Nix, suggesting it had no issues with deliberately misleading the public to support its political clients, saying:

“It sounds a dreadful thing to say, but these are things that don’t necessarily need to be true. As long as they’re believed.”

The scandal was a wake-up call about the dangers of both social media and big data, as well as how fragile democracy can be in the face of the rapid technological change being experienced globally.

Artificial intelligence

How does artificial intelligence (AI) fit into this picture? Could it also be used to influence elections and threaten the integrity of democracies worldwide?

According to Trish McCluskey, associate professor at Deakin University, and many others, the answer is an emphatic yes.

McCluskey told Cointelegraph that large language models such as OpenAI’s ChatGPT can generate content “indistinguishable from human-written text,” which can contribute to disinformation campaigns or the dissemination of fake news online.

Among other examples of how AI can potentially threaten democracies, McCluskey highlighted AI’s capacity to produce deep fakes, which can fabricate videos of public figures like presidential candidates and manipulate public opinion.

While it is still generally easy to tell when a video is a deepfake, the technology is advancing rapidly, and its output will eventually become indistinguishable from reality.

For example, a deepfake video of former FTX CEO Sam Bankman-Fried that linked to a phishing website shows how lips can often be out of sync with the words, leaving viewers feeling that something is not quite right.

Gary Marcus, an AI entrepreneur and co-author of the book Rebooting AI: Building Artificial Intelligence We Can Trust, agreed with McCluskey’s assessment, telling Cointelegraph that in the short term, the single most significant risk posed by AI is:

“The threat of massive, automated, plausible misinformation overwhelming democracy.”

A 2021 peer-reviewed paper by researchers Noémi Bontridder and Yves Poullet titled “The role of artificial intelligence in disinformation” also highlighted AI systems’ ability to contribute to disinformation and suggested they do so in two ways:

“First, they [AI] can be leveraged by malicious stakeholders in order to manipulate individuals in a particularly effective manner and at a huge scale. Secondly, they directly amplify the spread of such content.”

Additionally, today’s AI systems are only as good as the data fed into them, which can sometimes result in biased responses that can influence the opinion of users.

How to mitigate the risks

While it is clear that AI has the potential to threaten democracy and elections around the world, it is worth mentioning that AI can also play a positive role in democracy and combat disinformation.

For example, McCluskey stated that AI could be “used to detect and flag disinformation, to facilitate fact-checking, to monitor election integrity,” as well as educate and engage citizens in democratic processes.

“The key,” McCluskey adds, “is to ensure that AI technologies are developed and used responsibly, with appropriate regulations and safeguards in place.”

An example of regulations that can help mitigate AI’s ability to produce and disseminate disinformation is the European Union’s Digital Services Act (DSA).

Related: OpenAI CEO to testify before Congress alongside ‘AI pause’ advocate and IBM exec

When the DSA fully comes into effect, large online platforms like Twitter and Facebook will be required to meet a list of obligations intended to minimize disinformation, among other things, or be subject to fines of up to 6% of their annual turnover.

The DSA also introduces increased transparency requirements for these online platforms, requiring them to disclose how they recommend content to users — often done using AI algorithms — as well as how they moderate content.

Bontridder and Poullet noted that firms are increasingly using AI to moderate content, which they suggested may be “particularly problematic,” as AI has the potential to over-moderate and impinge on free speech.

The DSA only applies to operations in the European Union; McCluskey noted that since AI is a global phenomenon, international cooperation would be necessary to regulate it and combat disinformation.

Magazine: $3.4B of Bitcoin in a popcorn tin — The Silk Road hacker’s story

McCluskey suggested this could occur via “international agreements on AI ethics, standards for data privacy, or joint efforts to track and combat disinformation campaigns.”

Ultimately, McCluskey said that “combating the risk of AI contributing to disinformation will require a multifaceted approach,” involving “government regulation, self-regulation by tech companies, international cooperation, public education, technological solutions, media literacy and ongoing research.”


Google I/O: Tech giant slings raft of new AI tools — So what’s coming?

Google unveiled a slew of AI features at its latest I/O conference, with new, improved AI models to be integrated across all of its major products.

Tech giant Google announced a plethora of new artificial intelligence (AI)-backed features during its annual Google I/O conference, with updated AI tech set to appear across its major platforms.

On May 10, the annual Google I/O conference took place in California, with CEO Sundar Pichai giving a keynote address on the most significant updates to the firm’s AI stack, among other announcements.

Google shows its hand

The Pathways Language Model (PaLM) was revealed by Google last August. Since then, developers have used the language model to ship generative AI-related apps such as ever-popular chatbots.

Google updated its model with “PaLM 2,” boasting improved capabilities around reasoning, coding and multilingualism as the model was trained on more complex and varied subject matters.

PaLM 2 will come in various sizes — with one iteration of the model able to deploy on mobile phones.

Pichai introduced PaLM 2 in his May 10 keynote at I/O. Source: YouTube

Google said the new PaLM is the backbone for over 25 apps and showcased two specialized models: Med-PaLM 2 for medical applications and Sec-PaLM 2 for use in cybersecurity.

Bing gets bested? — Google Search soon to get AI backup

Google’s flagship Search product is getting AI backup after rival Microsoft beat Google to the punch by integrating OpenAI’s ChatGPT into its Bing search engine.

The feature, matter-of-factly called the Search Generative Experience (SGE), will have a limited experimental launch to opted-in U.S. users for testing before Google considers a wider rollout.

The tool seemingly collates information from webpages and delivers it in a ChatGPT-like response in Google Search, above the actual search results, according to demonstrations of the product.

It also provides information about searched products when users use the “shopping” option in Search. In the Google-provided demonstration, the model gave advice on what aspects to look for in a bicycle when the user searched for e-bikes, for example.

Google’s Bard gets souped up

Bard — Google’s answer to ChatGPT — was part of the products that received the PaLM 2 treatment, getting souped-up features and a broader rollout to boot.

The conversational AI model launched around two months ago, but only in the U.S. and the United Kingdom; it has now been rolled out to over 180 countries, with more to come.

Part of Bard’s upgrade includes improvements to its coding abilities and repertoire. Google also improved its citations, with the bot highlighting where it fetched certain code from.

Bard got ported to Google’s new PaLM 2 to beef up its abilities. Source: YouTube

Generative image AI tools from graphic software company Adobe will also soon be integrated into Bard, allowing it to generate images from a prompt akin to similar popular tools.

Mail, Docs, Maps and more get AI-ified

Many other Google products also got the backing of PaLM 2, and Pichai ran through a series of demonstrations showcasing new AI-powered features in Google’s Maps, Docs, Mail and Photos.

One demonstration highlighted an AI-charged version of Gmail’s “Smart Compose” feature that can automatically generate email responses using a prompt.

Related: Google DeepMind CEO Demis Hassabis says we may have AGI ‘in the next few years’

It can also be refined to make the text more formal, elaborate, or brief and seems to pull data from the email thread to bulk out responses.

The iterations of AI-backed automated response products in Gmail. Source: YouTube

A similar product, “Magic Compose,” is coming to Google’s Android phones with AI-generated responses that apparently help make a message transmit the “desired vibe” such as “chill” or “Shakespeare.”

Gemini could palm off PaLM with even newer AI

Despite just launching PaLM 2, Google is also working on an apparently more advanced large language model called “Gemini” to replace it or, at least, provide another option.

Gemini is still in training but Pichai said Google is “already seeing impressive multimodal capabilities not seen in prior models.”

He added that once it is “fine-tuned and rigorously tested for safety,” Gemini, like PaLM 2, will launch in various sizes and capabilities.



AIs and fries: Wendy’s to trial chatbot drive-thru operator

Wendy’s says its "FreshAI" bot reduces costs, allowing funds to be focused elsewhere; others worry that eventually those with fewer skills will be left jobless.

An artificial intelligence (AI) chatbot dubbed "Wendy’s FreshAI" will take orders from Wendy's drive-thru customers after the fast-food chain partnered with Google Cloud to create the bot.

Over three-quarters of Wendy’s customers prefer to place their orders via drive-thru, according to a May 9 announcement from Google Cloud, and the tech giant claimed using a chatbot to serve these customers will “revolutionize the quick service restaurant industry.”

AI chatbots such as OpenAI's ChatGPT use natural language processing to understand what people are saying, and then use machine learning algorithms to generate a response.

In a statement to The Wall Street Journal, Wendy’s CEO and president Todd Penegor said the chatbot “will be very conversational,” adding, “you won’t know you’re talking to anybody but an employee.”

Google Cloud CEO Thomas Kurian noted, however, that there are many challenges associated with using chatbots to serve drive-thru customers, saying:

“You may think driving by and speaking into a drive-through is an easy problem for AI, but it’s actually one of the hardest.”

The diversity of customers' orders is one challenge that Wendy’s and Google Cloud will have to overcome, as many customers might call menu items by a different name or have special requests. Additionally, the chatbot will also have to filter out any background noise.
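One common way to handle that variety is to normalize free-form order text against a canonical menu using an alias table, matching longer phrases first. The sketch below is purely illustrative — the menu items, aliases and function names are invented for this example and have no connection to Wendy's actual system:

```python
# Hypothetical alias table mapping casual phrasings to canonical menu items.
MENU_ALIASES = {
    "frosty": "Frosty",
    "chocolate shake": "Frosty",   # customers may use a generic name
    "burger": "Dave's Single",
    "single": "Dave's Single",
    "fries": "French Fries",
    "chips": "French Fries",
}

def normalize_order(utterance: str) -> list[str]:
    """Map a free-form order to canonical menu items, longest alias first."""
    text = utterance.lower()
    items = []
    for alias in sorted(MENU_ALIASES, key=len, reverse=True):
        if alias in text:
            items.append(MENU_ALIASES[alias])
            text = text.replace(alias, "")  # avoid re-matching substrings
    return items

print(normalize_order("Can I get a chocolate shake and some chips?"))
# → ['Frosty', 'French Fries']
```

A production system would pair something like this with speech recognition and a language model to handle phrasings no alias table anticipates, which is presumably where Google Cloud's AI stack comes in.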

To help refine the AI chatbot before it is rolled out to multiple stores, Wendy’s FreshAI will undergo a pilot launch at a Columbus, Ohio, restaurant in June. Customers will still have the option to speak to a human too.

Related: 5 ways AI is helping to improve customer service in e-commerce

Not everyone is impressed with the announcement, though, with some arguing it's the latest way companies are “systematically eradicating jobs” and highlighting that teenagers and other less skilled people looking to gain employment would be the most affected by the change.

While AI has enormous potential to improve efficiency, increase productivity, and reduce costs in various industries, its unprecedented growth has many worried that it will bring about massive falls in employment as AI is used for tasks previously assigned to humans.

Magazine: $3.4B of Bitcoin in a popcorn tin — The Silk Road hacker’s story


ChatGPT and other AIs could play a big role in driving more users to crypto

By doing things such as helping users find answers to their questions by looking at their wallets, AI could help introduce millions of new people to blockchain.

It’s no secret that bear markets are challenging. A quick scan of top projects of any past market cycle will unveil how many once-promising projects have faded into oblivion. While these cycles are often discouraging, many fail to realize that with every market downturn comes the opportunity for innovation toward a stronger future for Web3. Just look at Uniswap and OpenSea’s success to see how real the potential is for “building in a bear market” to spark new bullish cycles.

So, as we navigate the market climate, which is showing glimmers of hope for an upward trajectory, what innovations do we see on the horizon? At the top of the list sits artificial intelligence (AI) and its potential to take Web3 into its most valuable and exciting period of change, a new era that will unlock a goal we’ve been chasing since day dot: onboarding 1 billion users.

In what feels like an overnight period, AI has undoubtedly become the most exciting technological innovation since, well, blockchain technology. It feels natural that these two next-generation industries will unify to power the future of humanity. The proliferation of AI applications, the next iteration of Web2’s (controversial) success in improving and reimagining our digital existence through algorithmic curation and targeting, also presents an opportunity to step back and ask ourselves what Web3 really needs to hit mass adoption. Near the top of the list, many would agree, is to simplify and strengthen many blockchain use cases so that average users can easily participate.

Why are apps like Spotify, Amazon and Instagram so good at delivering the content we didn’t even know we needed? While it’s no secret these companies mine our data and invade our privacy to train algorithms to know us better than we often know ourselves, these products onboarded — and yes, changed — the world through simple and effective UX.

Related: Cryptocurrency miners are leading the next stage of AI

While many have rightfully identified vast risks in AI, from potential regulatory issues to its speed of adoption, the technology also has the potential (ideally free of bias) to personalize the onboarding process and the utility of applications, creating a more supportive and effective ecosystem of decentralized applications in Web3, whose UX is infamously clunky, intimidating and somewhat “cold” for average users. The opportunity to create smarter, more attractive applications built on frictionless UX is therefore paramount.

AI apps like ChatGPT are already making their way into classrooms, so why shouldn’t this technology help users get better at engaging with Web3 apps, from onboarding to becoming a power user?

Let’s take nonfungible tokens (NFTs), for example. AI can scan a user’s wallet history to understand their buying patterns and recommend digital assets they might want, much like Amazon does for its endless abyss of products. AI excels at pattern prediction and recommendation — in other words, personalization. Additionally, AI can examine on-chain patterns and market activity to find the best time to buy an NFT.
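To make the recommendation idea concrete, here is a minimal sketch of the simplest possible approach: ranking NFT categories by how often they appear in a wallet's purchase history. The function name, the category labels and the flat-list input format are all invented for illustration; a real system would pull structured history from an on-chain indexer and use a far richer model.

```python
from collections import Counter

def recommend_categories(purchase_history, top_n=2):
    """Rank NFT categories by how often they appear in a wallet's history.

    purchase_history is a hypothetical flat list of category labels, one per
    NFT the wallet has bought; in practice this would come from an indexer.
    """
    counts = Counter(purchase_history)
    return [category for category, _ in counts.most_common(top_n)]

# A wallet that mostly buys generative art gets generative art recommended first.
history = ["generative-art", "pfp", "generative-art", "photography", "generative-art"]
print(recommend_categories(history))
```

This is frequency counting, not machine learning, but it illustrates the shape of the personalization loop the article describes: observe on-chain behavior, then surface assets that match it.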

S-Curve analysis for Bitcoin adoption. Source: Off the Chain Capital

Regarding much-needed improvements to security and accountability in crypto, AI can also examine the on-chain data of wallets and score their reputation, helping distinguish safe from unsafe transactions; it can also require users to complete extra verification steps before completing transactions, which could reduce rampant phishing and hacks. Exchanges already do something like this, but AI can automate the process while preserving a user’s custody. Taken further, we could rate protocols based on the attack vectors they might be exposed to, helping developers catch potential issues before they happen.
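A toy version of such a reputation check might look like the following. Every field name, weight and threshold here is invented for the sketch; a production system would learn these from labeled fraud data rather than hard-code them, and would fetch the features from an on-chain indexer.

```python
def wallet_risk_score(features):
    """Toy heuristic risk score (0-100) for a counterparty wallet.

    features is a hypothetical dict of pre-fetched on-chain attributes.
    """
    score = 0
    if features.get("age_days", 0) < 7:
        score += 40  # brand-new wallets carry more risk
    if features.get("flagged_counterparties", 0) > 0:
        score += 35  # has transacted with known-bad addresses
    if features.get("tx_count", 0) < 3:
        score += 15  # too little history to judge by
    return min(score, 100)

def requires_extra_verification(features, threshold=50):
    """Gate a transaction behind extra checks when the score is high."""
    return wallet_risk_score(features) >= threshold
```

The point of the sketch is the flow, not the numbers: score the counterparty before signing, and only interrupt the user with extra verification when the score crosses a threshold, so custody and UX are preserved for ordinary transactions.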

For many everyday users, terms such as “smart contracts,” “seed phrases” and “wallets” are intimidating. Imagine an AI chatbot, similar to the customer service assistants many websites use, that draws on our Web3 knowledge and user history (via our wallets) to help us complete actions — and understand them. This has the potential to drive new user engagement and the re-engagement of existing Web3 users, leading to further adoption through improved education; it’s also easy to implement across wallets.

Related: Artists face a choice with AI: Adapt or become obsolete

Moving to decentralized applications, AI can also significantly increase user engagement and adoption rates by analyzing a user’s on-chain data and recommending the best features and how to leverage them. AI can help users become better traders, acting as a virtual investment adviser equipped with the most innovative tools.

And for developers, AI can even simplify workflows by pre-auditing contracts, one of the biggest pain points for Web3 developers given the planning, time and cost a full smart contract audit demands. Using AI as a pre-audit tool (and writing assistant) for smart contracts will streamline development, saving valuable time and money while rolling optimizations into the product.
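As a flavor of what a pre-audit pass might flag, here is a deliberately simple sketch that scans Solidity source text for a few patterns auditors commonly review. The pattern list and messages are illustrative, not exhaustive; a real tool (AI-assisted or otherwise) would analyze the parsed AST and control flow rather than grep source lines.

```python
import re

# Illustrative patterns only; real pre-audit tooling goes far beyond text matching.
RISKY_PATTERNS = {
    r"\btx\.origin\b": "tx.origin used for authorization can be phished",
    r"\bdelegatecall\b": "delegatecall can execute untrusted code in this context",
    r"\bselfdestruct\b": "selfdestruct can permanently disable the contract",
}

def pre_audit(source):
    """Return (line_number, message) pairs for each risky pattern found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings
```

Even a crude pass like this, run automatically before a human audit, narrows where reviewers spend their time, which is exactly the cost-and-time savings the paragraph above is pointing at.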

It all sounds pretty exciting. Whether you’re a developer looking to cut costs and time or a user looking to reduce what can be hours, days or years of learning necessary to understand the ins and outs of Web3, AI can remove the friction that remains the greatest barrier to innovation and usability for our industry. Like Web3 overall, AI can be an additional layer to the ever-powerful toolkit that is revolutionizing how we create, chat, trade and live. Let’s use it as best we can as we build the future.

Harsh Rajat is the founder and project lead of Push Protocol (formerly EPNS). He has more than 12 years of entrepreneurial experience across tech, including system architecture, development and design in fields such as mobile, web services, SaaS and blockchain. He previously founded 3 Magic Shots and Digital Poke.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.


Artists face a choice with AI: Adapt or become obsolete

Art generated by artificial intelligence is making waves because of its ramifications — for originality, authorship, and authenticity.

The artificial intelligence art gold rush began around 2018 when Christie’s auctioned “Portrait of Edmond Belamy” for an astounding $432,500. Since then, it has been an up-only journey for AI-generated images.

“Théâtre D’opéra Spatial,” Jason Allen’s AI artwork, even won a prize at the Colorado State Fair art competition. This shows how AI-generated images have recently become not only more popular but also far more sophisticated. But not everyone is happy about this development, and their concerns are valid.

The evolution of AI art has sparked feisty debates around originality, authorship and authenticity — themes fundamental to artistic expression. Generative AI, like any other technology, is intent-neutral. Whether the outcome is good or bad depends on how we use it.

So, it’s crucial to understand how and to what extent AI influences artistic communities and their creations.

The fears: Replacement, mimicry and homogenization

Some people fear that artificial intelligence will make artists obsolete — just like digital printing technology replaced analog typesetters, block makers, etc. This view, however, represents a limited, narrow perspective. Only the hopelessly conservative critic can peddle such short-sighted notions.

That’s not to say every traditional process involved in artistic creation will remain intact post-AI. Some of the menial and repetitive tasks will fade away. Yet there’s little or no threat to creativity and ideation, the core human aspects of art.

Related: Elizabeth Warren is pushing the Senate to ban your crypto wallet

Nevertheless, AI-generated images do present some ethical problems, such as copyright violations. They stem from the methods companies use to train text-to-art models and generative adversarial networks. Consequently, some artists recently sued Midjourney, Stability AI and DeviantArt for using their artworks in AI training without permission or compensation.

Such practices raise concerns about mimicry and homogenization, especially as text-to-image tools become more popular and accessible, and ultimately defeat the purpose of enhancing artistic creativity with AI. Abandoning them is necessary to ensure the long-term adoption of, and trust in, AI art.

Regardless of how artists and creators feel about AI-generated art, generative art tools have cemented their spot in the art ecosystem. Artist communities can now either adapt or complain. Many artists have smartly resorted to the former. 

AI-generated artwork. Source: BlueWillow

Recognize AI art as a distinct form

Boris Eldagsen, a German artist and photographer, refused to accept the prize he won at the 2023 Sony World Photography Awards. He “applied as a cheeky monkey” and submitted an AI-generated image for the creative open category.

Eldagsen wanted to make a point about whether we should consider AI-based imagery photography. He thinks AI-generated images and photographs are different entities, and can’t compete in the same category. The same argument can apply to AI art in general.

Like painting, sculpting, sketching, etc., AI art is a category in itself. We must approach it that way. It’s not a question of regulating how artists use AI or if they can participate in competitions, nor is it about purely philosophical considerations seeking the essence of art. Instead, it’s necessary to recognize AI art as a distinct form, setting the parameters for judgment accordingly.

Having said that, adding identifying marks to distinguish AI-generated images from photography or other kinds of digital imagery may prove helpful — particularly in the early days, to curb misunderstandings and the potential spread of misinformation.

The segregation works both ways, though: it can also stop people from mistaking digital illustrations for AI-generated artwork. This helps avoid situations where creators get banned from channels with “no AI art” policies, as happened when digital artist Ben Moran was banned from Reddit’s r/Art subreddit.

AI-generated artwork. Source: BlueWillow

Toward human-machine collaboration in art

The World Photography Organization said it awarded Eldagsen at the 2023 Sony World Photography Awards because the AI-generated work relied heavily upon the artist’s “wealth of photographic knowledge.” Moreover, the competition’s creative open category “welcomes various experimental approaches to image making from cyanotypes and rayographs to cutting-edge digital practices.”

This points to the fact that AI can enhance, not necessarily hamper, artistic creation. So, here’s the most critical aspect of AI’s influence on art and artistic communities: It widens their horizon significantly, unlocking new possibilities and ways to express ideas.

Related: The world could be facing a dark future thanks to CBDCs

Accessible generative AI tools — such as Midjourney, DALL-E and my own BlueWillow — also foster artistic inclusion. They allow anyone with a creative mind to make high-quality digital artwork. Imagine a physically challenged person, or someone unable to afford art supplies, exploring their artistic capabilities with these tools. This perhaps wouldn’t have been possible before, at least not easily.

AI-generated images also help professional artists work faster and more efficiently. The impact is similar to using assistive tools like Photoshop or Illustrator, but much greater. While working on a commissioned project, for example, they can easily create multiple mockups using text-to-image tools. This saves loads of time and effort, letting creatives focus on creativity and innovation.

AI-generated artwork. Source: BlueWillow

Leverage technology — don’t fight it

Technological progress is inevitable. It’s also desirable if done right, while misplaced opposition is both naive and destructive. We are currently at this juncture when it comes to AI art.

Portrait artists decried the rise of photography. They were afraid. But today, we can only be thankful for the wonders that innovations in photography — both technology and content — have helped us achieve. Something similar will happen as we leverage AI for art.

AI’s disruptive influence is already visible to progressive artistic communities worldwide. Coupled with other emerging technologies like nonfungible tokens (NFTs), it’s enabling them to produce work with previously unimaginable variations, scale and speed. And we are thus witnessing the unfolding of our artistic future — one that’s diverse, community-oriented and technologically supreme.

Hector Ferran is the vice president of marketing at BlueWillow AI, an image-generating AI company.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.
