
OpenAI needs a DAO to manage ChatGPT

A decentralized autonomous organization could help solve concerns over issues including ChatGPT’s political biases and its potential for abuse.

ChatGPT, a large language model that can converse with users, is one of OpenAI’s ground-breaking models. Although there are numerous advantages to this technology, some worry that it needs to be regulated in a way that ensures privacy, neutrality and decentralized knowledge. A decentralized autonomous organization (DAO) can be the solution to these issues.

Firstly, privacy is a major concern when it comes to the use of ChatGPT. In order to enhance its responses, the model gathers data from users — but this data may contain sensitive information that individuals may not want to divulge to a central authority. For instance, if a user discloses to ChatGPT their financial or medical history, this information may be kept and used in ways they did not expect or authorize. If the information is obtained by unauthorized parties, it may result in privacy violations or even identity theft.

Related: AI has a role to play in detecting fake NFTs

Furthermore, ChatGPT could be utilized for illicit activities such as phishing scams or social engineering attacks. By mimicking a human discussion, ChatGPT could deceive users into disclosing private information or taking actions they wouldn’t ordinarily do. It is critical that OpenAI institute clear policies and procedures for managing and storing user data to allay these privacy worries. A DAO can make sure that the data gathered by ChatGPT is stored in a decentralized manner, where users have more control over their data and where it can only be accessed by authorized entities.

Secondly, there is a growing concern about political bias in artificial intelligence models, and ChatGPT is no exception. Some fear that when these models develop further, they could unintentionally reinforce existing societal biases or perhaps introduce new ones. The AI chatbot can also be used to disseminate propaganda or false information. This may result in unfair or unjust outcomes that have a negative effect on both individuals and communities. Biased replies may result from the model, reflecting the developers’ or training data’s prejudices.

Related: Cryptocurrency miners are leading the next stage of AI

A DAO can guarantee that ChatGPT is trained on objective data and that the responses it produces are scrutinized by a wide range of people, such as representatives from various companies, academic institutions and social organizations who can spot and rectify any bias. This would minimize the possibility of bias by ensuring that decisions on ChatGPT are made with input from a diversity of perspectives.

The DAO may also put in place a system of checks and balances to make sure that ChatGPT doesn’t reinforce already-existing prejudices in society or introduce any new ones. The DAO may, for instance, put in place a procedure for auditing ChatGPT’s responses to ensure they are impartial and fair. This could entail having unbiased professionals examine ChatGPT's comments and point out any instances of prejudice.

Finally, another issue with ChatGPT is knowledge centralization. The model has access to a wealth of information, which is advantageous in many ways, but it also concentrates knowledge in the hands of a small number of people or organizations, potentially resulting in a monopoly on knowledge. Likewise, there is a risk that human-machine-only knowledge sharing will become the norm, leaving individuals entirely dependent on machines for collective knowledge.

For instance, a programmer facing a coding issue could have earlier resorted to Stack Overflow to seek assistance by posting their question and receiving replies from other human programmers who may have encountered similar problems and found solutions. Yet, as AI language models like ChatGPT proliferate, it’s becoming more common for programmers to ask a query and then receive a response without having to communicate with other people. This could result in users interacting less and sharing less knowledge online — for example, on websites such as Stack Overflow — and a consolidation of knowledge within AI language models. That could significantly undermine human agency and control over the production and distribution of knowledge — making it less accessible to us in the future.

There are no easy answers to the complicated problem of knowledge centralization. It does, however, emphasize the need for a more decentralized strategy for knowledge production and transfer. A DAO, which offers a framework for more democratic and open information sharing, may be able to help in this situation. By using blockchain technology and smart contracts, a DAO could make it possible for people and organizations to work together and contribute to a shared body of knowledge while having more control over how that knowledge is accessed.
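The governance loop sketched above (members propose, vote, and audit collectively) can be illustrated in a few lines. This is a hypothetical Python sketch, not OpenAI's or any existing DAO's implementation; in practice the `Proposal`, `cast_vote` and `passes` logic would typically live in an on-chain smart contract rather than off-chain code.

```python
from dataclasses import dataclass, field


@dataclass
class Proposal:
    """A governance proposal, e.g. 'approve this training-data source'."""
    description: str
    votes_for: int = 0
    votes_against: int = 0
    voters: set = field(default_factory=set)


def cast_vote(proposal: Proposal, voter: str, approve: bool) -> None:
    # Each member may vote once; membership verification is elided here.
    if voter in proposal.voters:
        raise ValueError(f"{voter} has already voted")
    proposal.voters.add(voter)
    if approve:
        proposal.votes_for += 1
    else:
        proposal.votes_against += 1


def passes(proposal: Proposal, total_members: int, quorum: float = 0.5) -> bool:
    """A proposal passes if a quorum participated and a majority approved."""
    turnout = len(proposal.voters) / total_members
    return turnout >= quorum and proposal.votes_for > proposal.votes_against
```

The quorum check is one simple form of the "checks and balances" idea: a small faction cannot push a decision through without broad participation.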

Ultimately, a DAO can offer a framework to oversee and manage ChatGPT’s operations, guaranteeing decentralized user data storage, responses that are scrutinized for bias, and more democratic and open information exchange. The use of a DAO may be a viable solution to these concerns, allowing for greater accountability, transparency and control over the use of ChatGPT and other AI language models. As AI technology continues to advance, it is crucial that we prioritize ethical considerations and take proactive steps to address potential issues before they become a problem.

Guneet Kaur joined Cointelegraph as an editor in 2021. She holds a Master of Science in financial technology from the University of Stirling and an MBA from India’s Guru Nanak Dev University.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.

Base Blasts Off 219% as NFTs Hit $155M This Week: Winners, Losers, and Big Spenders

‘Biased, deceptive’: Center for AI accuses ChatGPT creator of violating trade laws

The group believes GPT-4 violates Section 5 of the Federal Trade Commission Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.”

The Center for Artificial Intelligence and Digital Policy (CAIDP) has filed a complaint with the United States Federal Trade Commission (FTC) in an attempt to halt the release of powerful AI systems to consumers.

The complaint centered around OpenAI’s recently released large language model, GPT-4, which the CAIDP describes as “biased, deceptive, and a risk to privacy and public safety” in its March 30 complaint.

CAIDP, an independent non-profit research organization, argued that the commercial release of GPT-4 violates Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.”

To back its case, the AI ethics organization pointed to contents in the GPT-4 System Card, which state:

“We found that the model has the potential to reinforce and reproduce specific biases and worldviews, including harmful stereotypical and demeaning associations for certain marginalized groups.”

In the same document, it stated: “AI systems will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement.”

Complaint filed by the Center for AI and Digital Policy against OpenAI. Source: CAIDP

CAIDP added that OpenAI released GPT-4 to the public for commercial use with full knowledge of these risks and that no independent assessment of GPT-4 was undertaken prior to its release.

As a result, the CAIDP wants the FTC to conduct an investigation into the products of OpenAI and other operators of powerful AI systems:

“It is time for the FTC to act [...] CAIDP urges the FTC to open an investigation into OpenAI, enjoin further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.”

While ChatGPT was released in November, the latest model powering it, GPT-4, is considered to be 10 times more advanced. Upon its release on March 14, a study found that GPT-4 was able to pass some of the most rigorous U.S. high school and law exams with scores in the 90th percentile.

It can also detect smart contract vulnerabilities on Ethereum, among other things.

The complaint comes as Elon Musk, Apple’s Steve Wozniak, and a host of AI experts signed a petition to “pause” development on AI systems more powerful than GPT-4. 

CAIDP president Marc Rotenberg was among the more than 2,600 signatories of the petition, which was introduced by the Future of Life Institute on March 22.

Related: Here’s how ChatGPT-4 spends $100 in crypto trading

The authors argued that “Advanced AI could represent a profound change in the history of life on Earth,” for better or for worse.

The United Nations Educational, Scientific and Cultural Organization (UNESCO) has also called on states to implement the UN’s “Recommendation on the Ethics of AI” framework.

In other news, a former AI researcher for Google recently alleged that Google’s AI chatbot, "Bard," has been trained using ChatGPT’s responses.

While the researcher has resigned over the incident, Google executives have denied the allegations put forth by their former colleague.

Magazine: How to prevent AI from ‘annihilating humanity’ using blockchain


ChatGPT More Useful Than Crypto, Nvidia Tech Chief Says

Unlike AI applications such as ChatGPT, cryptocurrencies do not bring “anything useful,” a top executive of U.S. chip maker Nvidia is convinced. The comment comes despite his company making significant sales in the space, where its powerful processors are widely used to mint digital coins. Developing Chatbots More Worthwhile Than Crypto Mining, Nvidia Exec Claims […]


Elon Musk and tech execs call for pause on AI development

The authors of the letter say that advanced artificial intelligence could cause a profound change in the history of life on Earth, for better or worse.

More than 2,600 tech leaders and researchers have signed an open letter urging a temporary pause on further artificial intelligence (AI) development, fearing “profound risks to society and humanity.”

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and a host of AI CEOs, CTOs and researchers were among the signatories of the letter, which was published by the United States think tank Future of Life Institute (FOLI) on March 22.

The institute called on all AI companies to “immediately pause” training AI systems that are more powerful than GPT-4 for at least six months, sharing concerns that “human-competitive intelligence can pose profound risks to society and humanity,” among other things.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening,” the institute wrote.

GPT-4 is the latest iteration of OpenAI’s artificial intelligence-powered chatbot, which was released on March 14. To date, it has passed some of the most rigorous U.S. high school and law exams within the 90th percentile. It is understood to be 10 times more advanced than the original version of ChatGPT.

There is an “out-of-control race” between AI firms to develop more powerful AI, which “no one — not even their creators — can understand, predict, or reliably control,” FOLI claimed.

Among the top concerns were whether machines could flood information channels, potentially with “propaganda and untruth” and whether machines will “automate away” all employment opportunities.

FOLI took these concerns one step further, suggesting that the entrepreneurial efforts of these AI companies may lead to an existential threat:

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

“Such decisions must not be delegated to unelected tech leaders,” the letter added.

The institute also agreed with a recent statement from OpenAI founder Sam Altman that an independent review should be required before training future AI systems.

Altman in his Feb. 24 blog post highlighted the need to prepare for artificial general intelligence (AGI) and artificial superintelligence (ASI) robots.

Not all AI pundits have rushed to sign the petition, though. Ben Goertzel, the CEO of SingularityNET, explained in a March 29 Twitter response to Gary Marcus, the author of Rebooting.AI, that large language models (LLMs) won’t become AGIs, of which there have been few developments to date.

Instead, he said research and development should be slowed down for things like bioweapons and nukes.

In addition to language learning models like ChatGPT, AI-powered deep fake technology has been used to create convincing images, audio and video hoaxes. The technology has also been used to create AI-generated artwork, with some concerns raised about whether it could violate copyright laws in certain cases.

Related: ChatGPT can now access the internet with new OpenAI plugins

Galaxy Digital CEO Mike Novogratz recently told investors he was shocked by the amount of regulatory attention given to crypto while so little has been directed toward artificial intelligence.

“When I think about AI, it shocks me that we’re talking so much about crypto regulation and nothing about AI regulation. I mean, I think the government’s got it completely upside-down,” he opined during a shareholders call on March 28.

FOLI has argued that, should a pause on AI development not be enacted quickly, governments should step in with a moratorium.

“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it wrote.

Magazine: How to prevent AI from ‘annihilating humanity’ using blockchain


ChatGPT can now access the internet with new OpenAI plugins

OpenAI said it is initially rolling out the plugins to a small set of users to "study their real-world use and impact," before expanding to larger-scale access.

Artificial intelligence chatbot ChatGPT can now retrieve information from online sources and interact with third-party websites via a new plugin feature introduced by its creator, OpenAI.

The plugin feature is still in its “limited” alpha phase and will only be available to a small set of users initially before rolling out to larger-scale access. Users must add themselves to a waitlist in order to access the new feature on ChatGPT Plus, the firm said in a March 23 announcement.

Initially, there will only be 11 plugins available. These range from allowing users to check the scores of live sporting events to booking an international flight and purchasing food for home delivery. The firm added that it is “gradually rolling out plugins” so that it can assess their real-world use.

“Plugins are tools designed specifically for language models with safety as a core principle, and help ChatGPT access up-to-date information, run computations, or use third-party services,” said OpenAI.

Among the cohort of websites that are supported by the new plugin feature are e-commerce platforms Shopify, Klarna and Instacart, and travel search engines Expedia and KAYAK.

The plugins also include the computational knowledge engine Wolfram, for carrying out calculations, and the business messaging app Slack, according to the announcement.

A screenshot of the API on ChatGPT when connecting to third-party plugins. Source: OpenAI

Other apps include FiscalNote, Milo Family AI, OpenTable, Shop, Speak and Zapier.

How does it access the web?

ChatGPT utilizes the Bing API to search for information along with a text-based web browser to navigate results and interact with websites.

It is able to synthesize information across multiple sources to give a more grounded response. It also cites the sources it used so users can verify where ChatGPT derived its response from.
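The cite-as-you-go pattern described above can be sketched simply. The helper below is hypothetical (the article does not show the plugin's actual output format); it just illustrates appending numbered source citations to a synthesized answer so readers can verify where it came from.

```python
def answer_with_citations(answer: str, sources: list[tuple[str, str]]) -> str:
    """Format a response with numbered source citations.

    `sources` holds (title, url) pairs; the numbering scheme here is
    purely illustrative, not ChatGPT's actual citation format.
    """
    cited = [answer, ""]
    for i, (title, url) in enumerate(sources, start=1):
        cited.append(f"[{i}] {title}: {url}")
    return "\n".join(cited)
```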

OpenAI said the plugin capabilities came on the back of high demand from its user base since the firm launched ChatGPT on Nov. 30.

Mitchell Hashimoto, the founder of software firm HashiCorp and an early user of the ChatGPT plugin API, told his 94,300 Twitter followers on March 23 that it is one of the most “impressive” computer applications he has ever used:

Being able to use plugins that access the internet could remedy one of ChatGPT’s arguably biggest shortfalls: it was trained on data only up to September 2021 and otherwise has no access to the internet to retrieve more recent information.

The typical answer given when ChatGPT is asked a question requiring up-to-date data. Source: OpenAI

Related: How to solve coding problems using ChatGPT?

Earlier this month, OpenAI released GPT-4, the latest version of its artificial intelligence chatbot.

So far, the new version has already passed many of the toughest U.S. high school and law school exams with scores in the 90th percentile.

Using the same version, Cointelegraph recently launched an experiment using GPT-4 to invest in cryptocurrencies using information fed from Cointelegraph Markets and a selection of Cointelegraph’s daily news, with the aim of understanding how it interprets news to make trading decisions.

So far, the cryptocurrency portfolio is up 6.08% over seven days. It currently has an allocation consisting of 55% Bitcoin (BTC), 35% Ether (ETH), 5% Cardano (ADA) and 5% XRP (XRP).
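The reported allocation sums to 100%, and the portfolio's overall gain is the weighted average of each asset's return. A small sketch of that arithmetic follows; the per-asset returns used in it are hypothetical, chosen only to illustrate the calculation, not the actual figures behind the 6.08% result.

```python
def portfolio_return(weights: dict[str, float], asset_returns: dict[str, float]) -> float:
    """Weighted-average return of a portfolio; weights must sum to 1."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(w * asset_returns[asset] for asset, w in weights.items())


# The allocation reported in the article.
weights = {"BTC": 0.55, "ETH": 0.35, "ADA": 0.05, "XRP": 0.05}
```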

Magazine: All rise for the robot judge: AI and blockchain could transform the courtroom


OpenAI’s GPT-4 Launch Sparks Surge in AI-Centric Crypto Assets

Following OpenAI’s release of GPT-4, a deep learning and artificial intelligence product, crypto assets focused on AI have spiked in value. The AGIX token of the SingularityNET project has risen 25.63% in the last 24 hours. Over the last seven days, four out of the top five AI-centric digital currencies have seen double-digit gains against […]


ChatGPT v4 aces the bar, SATs and can identify exploits in ETH contracts

GPT-4 completed many of the tests within the top 10% of the cohort, while the original version of ChatGPT often finished up in the bottom 10%.

GPT-4, the latest version of the artificial intelligence chatbot ChatGPT, can pass high school tests and law school exams with scores ranking in the 90th percentile and has new processing capabilities that were not possible with the prior version.

The figures from GPT-4’s test scores were shared on March 14 by creator OpenAI, revealing it can also convert image, audio and video inputs to text in addition to handling “much more nuanced instructions” more creatively and reliably. 

“It passes a simulated bar exam with a score around the top 10% of test takers,” OpenAI added. “In contrast, GPT-3.5’s score was around the bottom 10%.”

The figures show that GPT-4 achieved a score of 163 in the 88th percentile on the LSAT exam — the test college students need to pass in the United States to be admitted into law school.

Exam results of GPT-4 and GPT-3.5 on a range of recent U.S. exams. Source: OpenAI

GPT-4’s score would put it in a good position to be admitted into a top-20 law school and is only a few marks short of the reported scores needed for acceptance to prestigious schools such as Harvard, Stanford, Princeton or Yale.

The prior version of ChatGPT only scored 149 on the LSAT, putting it in the bottom 40%.

GPT-4 also scored 298 out of 400 on the Uniform Bar Exam, a test taken by recently graduated law students that permits them to practice as a lawyer in any U.S. jurisdiction.

UBE scores needed to be admitted to practice law in each U.S. jurisdiction. Source: National Conference of Bar Examiners

The old version of ChatGPT struggled in this test, finishing in the bottom 10% with a score of 213 out of 400.

As for the SAT Evidence-Based Reading & Writing and SAT Math exams taken by U.S. high school students to measure their college readiness, GPT-4 scored in the 93rd and 89th percentile, respectively.

GPT-4 excelled in the “hard” sciences too, posting well above average percentile scores in AP Biology (85-100%), Chemistry (71-88%) and Physics 2 (66-84%).

Exam results of GPT-4 and GPT-3.5 on a range of recent U.S. exams. Source: OpenAI

However, its AP Calculus score was fairly average, ranking in the 43rd to 59th percentile.

Another area where GPT-4 was lacking was in English literature exams, posting scores in the 8th to 44th percentile across two separate tests.

OpenAI said GPT-4 and GPT-3.5 took these tests using the 2022-2023 practice exams, and that the language processing tools received “no specific training” for them:

“We did no specific training for these exams. A minority of the problems in the exams were seen by the model during training, but we believe the results to be representative.”

The results prompted fear in the Twitter community too.

Related: How will ChatGPT affect the Web3 space? Industry answers

Nick Almond, the founder of FactoryDAO, told his 14,300 Twitter followers on March 14 that GPT-4 is going to “scare people” and that it will “collapse” the global education system.

Former Coinbase director Conor Grogan said he inserted a live Ethereum smart contract into GPT-4, and the chatbot instantly pointed to several “security vulnerabilities” and outlined how the code might be exploited.

Earlier smart contract audits using ChatGPT found that its first version was also capable of spotting code bugs to a reasonable degree.
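An audit like the one Grogan described boils down to sending the contract source to a chat-style model with an auditor instruction. The sketch below assembles such a request payload; the `build_audit_request` helper, the prompt wording and the field layout are assumptions following the widely used role/content message convention, not OpenAI's exact API surface or Grogan's actual prompt.

```python
def build_audit_request(contract_source: str, model: str = "gpt-4") -> dict:
    """Assemble a chat-completion-style request asking a model to audit a
    smart contract. This is an illustrative sketch of the request shape."""
    return {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are a smart contract auditor. List any security "
                    "vulnerabilities and explain how each could be exploited."
                ),
            },
            # The raw Solidity source goes in as the user message.
            {"role": "user", "content": contract_source},
        ],
        "temperature": 0,  # favor deterministic output for audit-style tasks
    }
```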

Rowan Cheung, the founder of the AI newsletter The Rundown, shared a video of GPT transcribing a hand-drawn fake website on a piece of paper into code.


Binance Tests AI-Infused NFT Platform Bicasso in Limited 10K Mint Run

On Wednesday, Binance CEO Changpeng Zhao, also known as CZ, announced the launch of a new non-fungible token (NFT) platform infused with artificial intelligence (AI). The AI-centric product is named Bicasso, and CZ said the beta version dropped today and was limited to 10,000 mints. Combining AI and NFTs: Binance CEO Announces Bicasso Artificial intelligence […]


Google Backs AI Firm Anthropic With $300 Million, Following Series B Investment From Controversial FTX Co-Founder

As the artificial intelligence (AI) wars intensify, the AI firm Anthropic has raised $300 million from Google, and sources say that the tech giant will get roughly a 10% stake in the AI company. Interestingly, in April 2022, Anthropic raised approximately $500 million from sources including Sam Bankman-Fried (SBF), co-founder of FTX; Caroline Ellison, former […]


Artificial Intelligence and Cryptocurrency: The Rise of AI-Focused Projects in 2023

Trends show that artificial intelligence (AI) will be a major topic in 2023, as data indicates a surge in interest. Since interest peaked and Microsoft invested billions into ChatGPT, demand for AI-focused cryptocurrency projects has risen dramatically. For example, the crypto project Fetch.ai has seen its native token FET rise 212% in the past 30 […]
