OpenAI promises to fund legal costs for ChatGPT users sued over copyright

OpenAI joins Google, Microsoft and others in legally backing their users if they face legal action over copyright infringement.

OpenAI says it will cover the legal costs for business-tier ChatGPT users who find themselves in hot water over copyright infringement.

OpenAI is calling its pledge Copyright Shield, and it covers only users of the business-tier ChatGPT Enterprise and its developer platform. OpenAI isn’t covering users of the free and Plus ChatGPT tiers.

On Nov. 6, at the company’s first developer conference, DevDay, OpenAI CEO Sam Altman said: “We will step in and defend our customers and pay the costs incurred if you face legal claims around copyright infringement, and this applies both to ChatGPT Enterprise and the API.”

Altman at OpenAI’s DevDay introducing the Copyright Shield legal protection offer. Source: YouTube

OpenAI joins tech firms Microsoft, Amazon and Google in offering to legally back users accused of copyright infringement. Adobe and Shutterstock — stock image providers with generative AI offerings — also made the same promise.

OpenAI’s DevDay also saw the firm announce that users will soon be able to create custom ChatGPT models, with the option to sell them on an upcoming app store, along with a new and updated AI model dubbed GPT-4 Turbo.

Related: AI chatbots are illegally ripping off copyrighted news, says media group

OpenAI is facing a litany of suits alleging it used copyrighted material to train its AI models.

Comedian and author Sarah Silverman, along with two others, sued OpenAI in July claiming ChatGPT’s training data includes their copyrighted work accessed from illegal online libraries.

OpenAI was hit with at least two further suits in September. A class action accused OpenAI and Microsoft of using stolen private information to train models, while the Authors Guild sued OpenAI alleging “systematic theft” of copyrighted material.

Magazine: ‘AI has killed the industry’ — EasyTranslate boss on adapting to change


ChatGPT launches new feature that lets subscribers make their own GPTs

The ‘GPTs’ feature potentially reduces the need for paid subscribers to enter complex prompts, the OpenAI team claimed.

Artificial intelligence (AI) system ChatGPT now allows users to create their own generative pre-trained transformers (GPTs), according to a Nov. 6 blog post from developer OpenAI. This means that users can now create custom ChatGPT apps that handle a variety of tasks, instead of needing to enter long strings of commands into the chat window to perform them.

According to OpenAI’s post, the team found that many users were storing text files that they used to frame how ChatGPT responded to prompts. Each time these “power users” opened ChatGPT, they had to cut and paste these text files into the program’s chat box before performing any tasks. The team launched GPTs as a means of alleviating this problem, stating:

“Many power users maintain a list of carefully crafted prompts and instruction sets, manually copying them into ChatGPT. GPTs now do all of that for you.”
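In API terms, the workflow GPTs automate looks roughly like the sketch below, which supplies a saved instruction set as a system message on every request instead of pasting it into the chat window. This is a minimal illustration using OpenAI’s public chat completions endpoint via the openai Python SDK (v1.x); the instruction text and prompt are invented for the example.

```python
# A minimal sketch of the manual workflow GPTs replace: a saved
# instruction set is sent as a system message with every request,
# rather than pasted into the chat window by hand.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical "carefully crafted" instruction set a power user might keep.
INSTRUCTIONS = (
    "You are a marketing copywriter. Answer in three bullet points "
    "and keep each one under 20 words."
)

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # the GPT-4 Turbo preview announced at DevDay
    messages=[
        {"role": "system", "content": INSTRUCTIONS},
        {"role": "user", "content": "Draft copy for a new coffee blend."},
    ],
)
print(response.choices[0].message.content)
```

A custom GPT effectively bundles that system message (along with any reference files and tools) into a reusable app, so it no longer has to be re-sent by hand each session.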

The new feature is available to subscribers of the “ChatGPT Plus” and Enterprise subscription tiers. There is no free version available at this time.

Related: AI chatbots are illegally ripping off copyrighted news, says media group

OpenAI also stated that a new store for GPTs will open “later this month.” The store will allow developers to create GPTs and offer them for sale, similar to the way a mobile app store works. Only “verified builders” will be allowed to post GPTs in the store, and the team claims that it has created “new systems” to help protect the privacy and safety of users as the store rolls out.

Users can also share their GPTs publicly if they want others to be able to use them, the post stated. And enterprises can create “internal-only” GPTs that can only be used within specific departments or by employees specifically authorized to use them.

According to the post, biotech firm Amgen, management consulting company Bain, and payments processor Square have already begun using GPTs to create marketing materials, aid customer support staff, or help onboard new engineers.

ChatGPT is one of the most popular AI chatbots, with over 180 million users, according to SimilarWeb data cited by Reuters. But it faces increasing competition from Google’s Bard and Anthropic's Claude 2. On Nov. 5, Elon Musk announced his own AI chatbot, called “Grok.”


ChatGPT creator OpenAI builds new team to check AI risks

ChatGPT creator OpenAI is taking seriously the full spectrum of safety risks related to AI and launching its “Preparedness” team as planned.

OpenAI, the artificial intelligence (AI) research and deployment firm behind ChatGPT, is launching a new initiative to assess a broad range of AI-related risks.

OpenAI is building a new team dedicated to tracking, evaluating, forecasting and protecting against potentially catastrophic risks stemming from AI, the firm announced on Oct. 25.

Called “Preparedness,” OpenAI’s new division will specifically focus on potential AI threats related to chemical, biological, radiological and nuclear threats, as well as individualized persuasion, cybersecurity and autonomous replication and adaptation.

Led by Aleksander Madry, the Preparedness team will try to answer questions such as how dangerous frontier AI systems are when misused, as well as whether malicious actors would be able to deploy stolen AI model weights.

“We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity,” OpenAI wrote, admitting that AI models also pose “increasingly severe risks.” The firm added:

“We take seriously the full spectrum of safety risks related to AI, from the systems we have today to the furthest reaches of superintelligence. [...] To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness.”

According to the blog post, OpenAI is now seeking talent with different technical backgrounds for its new Preparedness team. Additionally, the firm is launching an AI Preparedness Challenge for catastrophic misuse prevention, offering $25,000 in API credits to its top 10 submissions.

OpenAI first said in July 2023 that it was planning to form a new team dedicated to addressing potential AI threats.

Related: CoinMarketCap launches ChatGPT plugin

The risks potentially associated with artificial intelligence have been frequently highlighted, along with fears that AI has the potential to become more intelligent than any human. Despite acknowledging these risks, companies like OpenAI have been actively developing new AI technologies in recent years, which has in turn sparked further concerns.

In May 2023, the Center for AI Safety nonprofit organization released an open letter on AI risk, urging the community to mitigate the risks of extinction from AI as a global priority alongside other societal-scale risks, such as pandemics and nuclear war.

Magazine: How to protect your crypto in a volatile market — Bitcoin OGs and experts weigh in


Meta chief AI scientist says AI won’t threaten humans

Yann LeCun, chief AI scientist at Meta, said labeling the existential risk of AI is “premature” and called the idea that AI might kill off humanity “preposterous.”

The chief artificial intelligence (AI) scientist at Meta has spoken out, reportedly saying that worries over the existential risks of the technology are still “premature,” according to a Financial Times interview.

On Oct. 19, the FT quoted Yann LeCun as saying that premature regulation of AI technology would reinforce the dominance of Big Tech companies and leave no room for competition.

“Regulating research and development in AI is incredibly counterproductive,” he said. LeCun believes regulators are using the guise of AI safety for what he called “regulatory capture.”

Since the AI boom took off with the release of OpenAI’s chatbot ChatGPT in November 2022, various thought leaders in the industry have come out proclaiming threats to humanity at the hands of AI.

Dr. Geoffrey Hinton, known as the “godfather of AI,” left his machine learning position at Google so that he could “talk about the dangers of AI.”

Dan Hendrycks, director of the Center for AI Safety, tweeted back in May that mitigating the risk of extinction from AI should become a global priority on par with “other societal-scale risks such as pandemics and nuclear war.”

Related: Forget Cambridge Analytica — Here’s how AI could threaten elections

However, on the same topic, LeCun said in his latest interview that the idea that AI will kill off humanity is “preposterous.”

“The debate on existential risk is very premature until we have a design for a system that can even rival a cat in terms of learning capabilities, which we don’t have at the moment.”

He also claimed that current AI models are not as capable as some claim, saying they don’t understand how the world works and are not able to “plan” or “reason.”

LeCun expects that AI will eventually help manage our everyday lives, saying that “everyone’s interaction with the digital world will be mediated by AI systems.”

Nonetheless, fears surrounding the power of the technology remain a concern among many. The AI task force advisor in the United Kingdom has warned that AI could threaten humanity within two years.

Magazine: ‘Moral responsibility’: Can blockchain really improve trust in AI?


OpenAI partners with G42 in Dubai eyeing Middle East expansion

The two companies said they plan to use OpenAI’s models in industries in which G42 has connections and experience, such as energy, finance, healthcare and public services.

OpenAI, the maker of popular artificial intelligence (AI) chatbot ChatGPT, and Dubai-based technology holding group G42 announced a new partnership on Oct. 18 to expand AI capabilities in the Middle East region. 

The two companies plan to leverage OpenAI’s generative AI models in sectors of G42’s expertise, including financial services, energy, healthcare and public services.

G42 said that organizations in the United Arab Emirates (UAE) and other regions using its business solutions should now have a simpler process for integrating advanced AI capabilities into existing businesses.

It said it plans to “prioritize its substantial AI infrastructure capacity to support OpenAI's local and regional inferencing on Microsoft Azure data centers.”

Sam Altman, co-founder and CEO of OpenAI, said that G42’s connections in the industry can help bring AI solutions that “resonate with the nuances of the region.” He said the collaboration will help advance generative AI across the globe.

Related: Middle East regulatory clarity drives crypto industry growth — Binance FZE head

This development follows another from neighboring Saudi Arabia, which recently announced a collaboration between a local university and universities in China to develop an Arabic-based AI system.

The large language model (LLM), called AceGPT, is built on Meta’s Llama 2. According to the project’s GitHub page, it is designed to be an AI assistant for Arabic speakers and answer queries in Arabic.

Both of these developments come as regulators in the United States grow increasingly wary over the destination of AI semiconductor chip exports, including to the Middle East.

In August, U.S. officials reportedly added “some Middle Eastern countries” to the list of areas where AI chipmaker Nvidia and its rival AMD must curb exports of their high-level semiconductor chips.

A few weeks later, U.S. regulators denied blocking said exports to the Middle East. However, in their most recent expansion of export controls on AI semiconductor chips, one new rule expands licensing requirements for the export of advanced chips to “all 22 countries to which the United States maintains an arms embargo.” Aside from the main target, China, this includes the Middle Eastern countries of Iraq, Iran and Lebanon.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change


Baidu unveils Ernie 4.0 AI system, says overall performance ‘on par with ChatGPT’

The Chinese megacorporation claims its newest model rivals OpenAI’s popular model in generating text, images, and video.

Baidu, one of China’s largest technology companies, released version 4.0 of its popular “Ernie” artificial intelligence (AI) large language model (LLM) chatbot.

According to an English-language translation provided by Baidu and embedded in an X post alongside Google Translate subtitles, CEO Robin Li claimed the updated model “stands on par with GPT-4 in terms of overall performance.”

Li described the updated capabilities of the new Ernie model across four distinct verticals: understanding, generation, reasoning and memory.

Under the category of “understanding,” Baidu claims overall improvement in human-computer interaction. “ERNIE Bot can now accurately interpret 'out-of-order statements, vague expressions, and implied meanings' in text,” reads one post in the announcement thread.

This is particularly noteworthy, as Ernie has reportedly been trained in both Chinese and English — teaching models to understand colloquial or conversational prompts has traditionally been a challenging task for LLM engineers.

Related: Biden considers tightening AI chip controls to China via third parties

In contrast to previous versions, Ernie 4.0 also appears to have significantly improved capabilities in both quality and speed when it comes to generating images, video, and coherent copy. “With just one image and a few prompts,” reads another post in the thread, “we created 1 video ad, 5 ad copies, and 1 poster in just 3 minutes. ERNIE Bot transforms a single person into a marketing team.”

In the area of “reasoning,” Li demonstrated the model’s advanced analytical problem-solving capabilities by posing a complex question. Similar LLMs of the past have struggled with problems that require any form of reasoning. In the demonstration, Ernie provided both text and imagery as well as a succinct and demonstrably correct answer.

The final leg of the 4.0 update involved expanding Ernie’s so-called “memory.” To the best of our knowledge, there’s no scientific evidence that LLMs or any AI system can “reason” or have any form of actual “memory”; the closest analog is a model’s ability to process problems and recall prompts and outputs from earlier in a conversation.

According to Robin Li, Ernie’s “memory” is about as good as ChatGPT’s. “Even after five rounds of conversation and writing thousands of words, ERNIE Bot can remember previously generated content.”
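As an illustration of what that “memory” usually amounts to in practice, here is a minimal, generic sketch (not Baidu’s implementation, whose internals are not public): the model itself is stateless, and the application replays the accumulated conversation history as context on every turn. The call_model function is a hypothetical stand-in for a chat-completion API request.

```python
# Illustrative sketch only: chat "memory" is typically the running
# conversation history, re-sent to the stateless model on each turn.

def call_model(messages):
    """Hypothetical stand-in for a chat-completion API request; a real
    system would POST `messages` to an LLM endpoint and return its reply."""
    return f"(reply generated from {len(messages)} messages of context)"

conversation = []  # accumulated {"role", "content"} messages

def chat(user_message):
    conversation.append({"role": "user", "content": user_message})
    reply = call_model(conversation)  # the full history goes to the model
    conversation.append({"role": "assistant", "content": reply})
    return reply

chat("Write a poem about the Great Wall.")
print(chat("Now translate it to English."))  # the model "remembers" the poem
```

The practical limit on this kind of recall is the model’s context window: once the history exceeds it, older turns must be truncated or summarized.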


Saudi Arabia and China collaborate on Arabic-based AI system

A university in Saudi Arabia has collaborated with two Chinese universities to create an Arabic-focused AI system called AceGPT.

The King Abdullah University of Science and Technology (KAUST) in Saudi Arabia has collaborated with two Chinese universities to create an Arabic-focused artificial intelligence (AI) system. 

The large language model (LLM), called AceGPT, is built on Meta’s Llama 2 and was launched by a Chinese-American professor at KAUST in collaboration with the School of Data Science at the Chinese University of Hong Kong, Shenzhen (CUHKSZ) and the Shenzhen Research Institute of Big Data (SRIBD).

According to the project’s GitHub page, the model is designed to function as an AI assistant for Arabic speakers and answer queries in Arabic. The disclaimer said it may not produce “satisfactory results” in other languages, however.

Additionally, the developers said the model has been enhanced to recognize possible types of misuse, including mishandling sensitive information, producing harmful content, perpetuating misinformation or failing safety checks.

However, the project has also cautioned users to use the model responsibly, as it has not undergone exhaustive safety checks.

“We have not conducted an exhaustive safety check on the model, so users should exercise caution. We cannot overemphasize the need for responsible and judicious use of our model.”

AceGPT is said to have been created from open-source data and data crafted by the researchers.
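For readers who want to try such a model, a minimal sketch using the Hugging Face transformers library follows. It assumes the weights of a Llama-2-derived chat model are published on the Hugging Face Hub; the checkpoint identifier below is illustrative, so consult the project’s GitHub page for the actual release.

```python
# Hedged sketch: loading a Llama-2-based chat model such as AceGPT with
# Hugging Face transformers and asking it a question in Arabic.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/AceGPT-7B-chat"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# "What is artificial intelligence?" in Arabic.
prompt = "ما هو الذكاء الاصطناعي؟"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Per the project’s disclaimer quoted above, output quality outside Arabic may be unsatisfactory, and responses should be treated with the caution the developers advise.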

Related: Saudi Arabia looks to blockchain gaming and Web3 to diversify economy

This development comes as Saudi Arabia continues to make efforts to become a regional leader in emerging technologies such as AI. In July, the central bank of Saudi Arabia collaborated with the Hong Kong Monetary Authority on tokens and payments.

Prior to that, in February, the Saudi government partnered with The Sandbox metaverse platform to accelerate future metaverse plans.

In August, U.S. regulators told AI chipmaker Nvidia and its rival AMD to curb exports of the high-level semiconductor chips used to develop AI to what was only vaguely described as “some” Middle Eastern countries.

However, U.S. regulators have since denied explicitly blocking AI chip exports to the Middle East region.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change


EU mulls more restrictive regulations for large AI models: Report

Negotiators in the EU are reportedly considering additional restrictions for large AI models, such as OpenAI’s GPT-4, as a component of the forthcoming AI Act.

Representatives in the European Union are reportedly negotiating a plan for additional regulations on the largest artificial intelligence (AI) systems, according to a report from Bloomberg. 

The European Commission, European Parliament and the various EU member states are said to be in discussions regarding the potential effects of large language models (LLMs), including Meta’s Llama 2 and OpenAI’s GPT-4, and possible additional restrictions to be imposed on them as a part of the forthcoming AI Act.

Bloomberg reports that sources close to the matter said the goal is not to overburden new startups with too many regulations while keeping larger models in check.

According to the sources, the agreement reached by negotiators on the topic is still in the preliminary stages.

The AI Act’s newly proposed regulations for LLMs would take an approach similar to the one adopted in the EU’s Digital Services Act (DSA).

The DSA, recently implemented by EU lawmakers, requires platforms and websites to meet standards for protecting user data and scanning for illegal activities. However, the web’s largest platforms are subject to stricter controls.

Companies in this category, such as Alphabet and Meta, had until Aug. 28 to update their service practices to comply with the new EU standards.

Related: UNESCO and Netherlands design AI supervision project for the EU

The EU’s AI Act is poised to be one of the first sets of mandatory rules for AI put in place by a Western government. China has already enacted its own set of AI regulations, which came into effect in August 2023.

Under the EU’s AI regulations, companies developing and deploying AI systems would need to perform risk assessments and label AI-generated content, and would be banned outright from using biometric surveillance, among other requirements.

However, the legislation has not been enacted yet and member states still have the ability to disagree with any of the proposals set forth by parliament.

In China, more than 70 new AI models have reportedly been released since the implementation of its AI laws.

Magazine: The Truth Behind Cuba’s Bitcoin Revolution: An on-the-ground report


Bitcoin is a ‘super logical’ step on the tech tree: OpenAI CEO

During an episode of The Joe Rogan Experience, Altman expressed his excitement for Bitcoin and also said he was “super against” CBDCs.

OpenAI CEO Sam Altman has called Bitcoin (BTC) a "super logical" step on the tech tree: a currency that is free of government control and could help fight corruption.

"I’m excited about Bitcoin,” Altman told Joe Rogan during an Oct. 6 episode of The Joe Rogan Experience podcast.

“I think this idea that we have a global currency that is outside of the control of any government is a super logical and important step on the tech tree."

The OpenAI boss' wide-ranging interview with Rogan covered his thoughts on Bitcoin as a world reserve currency and his concerns about central bank digital currencies (CBDCs).

Altman, who also serves as founder of Worldcoin, said the shift to a “technologically enabled world,” including Bitcoin, could help reduce corruption.

“One of the things that I've observed, obviously many other people too, is corruption is such an incredible hindrance to getting anything done in a society to make it forward progress,” said Altman.

“But in a world where payments, for example, are no longer like bags of cash but done somehow digitally and somebody, even if you're using Bitcoin, can like watch those flows," he said, adding:

“I think that's like a corruption reducing thing."

Meanwhile, Rogan expressed his own optimism for Bitcoin despite skepticism of the wider cryptocurrency industry, saying he believes it can become a “universal viable currency.”

“The real fascinating crypto is Bitcoin. To me, that's the one that I think has the most likely possibility of becoming a universal viable currency. It's limited in the amount that there can be [and] people mine it with their own [computer].”

“That to me is very fascinating. I love the fact that it's been implemented,” Rogan added.

Altman’s support for Bitcoin, however, long predates the podcast. In a blog post written a decade earlier, Altman argued that a world transacting in Bitcoin would be more transparent.

“A world where we all transact in Bitcoin would be much more transparent, and financial transparency is great. It’s perhaps the thing that would most reduce corruption,” Altman said.

Rogan, Altman ‘very worried’ about CBDCs, slam U.S. war on crypto

Meanwhile, both Altman and Rogan said they were “super against” CBDCs and expressed worry about the United States becoming a surveillance state.

Rogan argued that CBDCs could give governments even more control over how people spend their money:

“I'm very worried about central bank digital currency and that being tied to a social credit score. That scares the shit out of me. The push to that is not for the overall good of society, that's for control.”

Related: CBDC frameworks must guard user privacy, monetary freedom — BIS chief

Altman added he hasn’t been impressed with how the U.S. government has treated the cryptocurrency industry recently:

“There's many things that I’m disappointed that the U.S. government has done recently, but the war on crypto, which I think is a like, we can’t give this up, like we’re going to control this and all that. That's the thing that makes me quite sad about the country,” he said.

Magazine: Asia Express: China expands CBDC’s tentacles, Malaysia is HK’s new crypto rival


Microsoft-owned LinkedIn releases AI-powered assistant for job recruiters

LinkedIn plans to pilot a new AI-powered assistant aimed at recruiters searching for job candidates and will also release an AI educational assistant in its learning section.

The Microsoft-owned and business-focused social platform LinkedIn announced the rollout of new artificial intelligence (AI) features to assist job recruiters when sourcing candidates. 

On Oct. 3, LinkedIn said it is launching a pilot of “Recruiter 2024,” an AI-assisted tool for recruiters.

According to the announcement, recruiters using the tool can now ask questions using “natural language” to find candidates on the platform. In addition, the tool can be used to create ad campaigns for jobs.

At the Talent Connect Summit in New York, LinkedIn CEO Ryan Roslansky said the industry needs new playbooks, and AI can help create those.

“The good news is that AI is not just accelerating the need for new playbooks, it’s also going to be a great tool in helping you all build them...”

Along with the AI-assisted recruiting tool, LinkedIn also launched AI-powered coaching in its LinkedIn Learning section. It said the AI aspect will be able to tailor content and offer real-time advice based on the user’s career aspirations. 

Over the last year, LinkedIn reported a 65% increase in interest in its AI course offerings. The new AI-powered recruiting and learning tools will initially be available to a “small handful” of users and will be made more widely available in the future.

Related: Samsung to develop AI chips with Canadian startup Tenstorrent

LinkedIn is reportedly developing its AI features using technology from OpenAI, the Microsoft-backed creator of the popular AI chatbot ChatGPT.

In May, LinkedIn released AI-assisted messages for recruiters and has since reported that 74% of users say the feature saves them time.

LinkedIn is one of many companies beginning to integrate AI-powered applications into their operations. On Sept. 27, Meta CEO Mark Zuckerberg unveiled his answer to ChatGPT with a new AI chat assistant known as Meta AI.

The Meta AI assistant will be integrated into Meta-owned platforms, including popular social media and messaging applications Instagram, Facebook and WhatsApp.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change
