
Large Language Models

AI chatbots are getting worse over time — academic paper

Dwindling consumer interest in chatbots caused a drop in AI-sector revenues in the second quarter of 2024.

A recent study titled "Larger and more instructable language models become less reliable," published in the journal Nature, revealed that AI chatbots are making more mistakes over time as newer models are released.

Lexin Zhou, one of the study's authors, theorized that because AI models are optimized to always provide believable answers, seemingly correct responses are prioritized and pushed to the end user regardless of accuracy.

These AI hallucinations are self-reinforcing and tend to compound over time, a phenomenon exacerbated by using older large language models to train newer ones, which can result in "model collapse."
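
Model collapse can be demonstrated in miniature. The sketch below is an illustration of the general mechanism, not the paper's methodology: it repeatedly fits a simple Gaussian "model" to samples drawn from the previous generation's fit, and the fitted distribution drifts while its spread tends to shrink, mirroring how models trained on model-generated data can lose the tails of the original distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data drawn from a wide distribution.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

for generation in range(1, 31):
    # "Train" a model: fit mean and standard deviation to the current data.
    mu, sigma = data.mean(), data.std()
    # The next generation trains only on a finite sample drawn from the
    # previous model, so estimation error accumulates generation over
    # generation and the fitted spread tends to drift downward.
    data = rng.normal(loc=mu, scale=sigma, size=200)
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
```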

OpenAI supports California bill on marking AI content — Report

After previously opposing another AI-related bill, SB 1047, OpenAI has expressed support for AB 3211, which would require watermarks on AI-generated content.

The artificial intelligence startup OpenAI, which is behind the ChatGPT chatbot, reportedly supports a new bill that proposes labeling content generated with AI.

OpenAI chief strategy officer Jason Kwon has expressed support for the bill AB 3211, which would require watermarks in the metadata of AI-generated photos, videos and audio clips, Reuters reported on Aug. 26.

According to the source, Kwon believes that marking AI-made material will help users differentiate such content from human-made content. The report noted that enforcement of the bill would be particularly helpful amid growing misinformation about political candidates.
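
AB 3211's requirement concerns provenance metadata rather than visible marks. As a rough sketch only (the bill does not prescribe a specific format, and production systems follow provenance standards such as C2PA rather than the ad-hoc keys used here), this is how an AI-generated PNG could carry a machine-readable label via Pillow's text-chunk API:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_provenance(img: Image.Image, path: str) -> None:
    # Hypothetical keys for illustration; a real system would follow
    # an interoperable standard such as C2PA.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", "example-model-v1")
    img.save(path, pnginfo=meta)

def read_provenance(path: str) -> dict:
    with Image.open(path) as img:
        return dict(img.text)  # PNG text chunks, if present

img = Image.new("RGB", (64, 64), color="white")
save_with_provenance(img, "out.png")
print(read_provenance("out.png"))  # {'ai_generated': 'true', ...}
```

A metadata-only label like this is trivially stripped by re-encoding the file, which is one reason watermarking proposals are debated alongside more robust in-content techniques.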

Robinhood users are getting AI tools to help them trade

Robinhood has acquired Pluto Capital, an AI-powered investment research firm.

Robinhood users will soon have access to AI tools to make more informed trades following the firm’s acquisition of AI-powered investment research firm Pluto Capital.

Pluto’s AI will provide Robinhood traders with personalized investment strategies, data analytics tools and real-time insights to make “informed decisions swiftly and confidently,” Robinhood said in its July 1 statement.

The acquisition will also see Pluto’s founder and CEO, Jacob Sansbury, join Robinhood to assist with its product roadmap and AI integrations.

Elon Musk launches AI chatbot ‘Grok’ — says it can outperform ChatGPT

Grok costs $16 per month on X Premium Plus, but for now it is only offered to a limited number of users in the United States.

Elon Musk and his artificial intelligence startup xAI have released “Grok” — an AI chatbot which can supposedly outperform OpenAI’s first iteration of ChatGPT in several academic tests.

The motivation behind building Grok is to create AI tools equipped to assist humanity by empowering research and innovation, Musk and xAI explained in a Nov. 5 X (formerly Twitter) post.

Musk and the xAI team said a “unique and fundamental advantage” possessed by Grok is that it has real-time knowledge of the world via the X platform.

“It will also answer spicy questions that are rejected by most other AI systems,” Musk and xAI said. “Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!”

The engine powering Grok — Grok-1 — was evaluated in several academic tests in mathematics and coding, performing better than ChatGPT-3.5 in all tests, according to data shared by xAI.

However, it didn’t outperform OpenAI’s most advanced version, GPT-4, in any of the tests.

“It is only surpassed by models that were trained with a significantly larger amount of training data and compute resources, like GPT-4,” Musk and xAI said. “This showcases the rapid progress we are making at xAI in training LLMs with exceptional efficiency.”
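
xAI's exact evaluation harness isn't described here, but academic math benchmarks of this kind (e.g., GSM8K) are commonly scored by exact match on the final numeric answer. A minimal, hypothetical scorer, assuming the model states its answer as the last number in the completion:

```python
import re

def extract_final_answer(completion: str) -> str | None:
    """Pull the last number out of a model completion (a common
    GSM8K-style convention; real harnesses are more careful)."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion.replace(",", ""))
    return numbers[-1] if numbers else None

def exact_match_accuracy(completions: list[str], golds: list[str]) -> float:
    hits = sum(
        extract_final_answer(c) == g for c, g in zip(completions, golds)
    )
    return hits / len(golds)

# Toy example with made-up model outputs and gold answers.
completions = ["... so the answer is 42.", "The total is 17 apples."]
golds = ["42", "18"]
print(exact_match_accuracy(completions, golds))  # 0.5
```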

The AI startup noted that Grok will be accessible on X Premium Plus at $16 per month. But for now, it is only offered to a limited number of users in the United States.

Grok remains a “very early beta product” that should improve rapidly by the week, xAI noted.

Related: Twitter is now worth half of the $44B Elon Musk paid for it: Report

The xAI team said they will also implement more safety measures over time to ensure Grok isn’t used maliciously.

“We believe that AI holds immense potential for contributing significant scientific and economic value to society, so we will work towards developing reliable safeguards against catastrophic forms of malicious use.”

“We believe in doing our utmost to ensure that AI remains a force for good,” xAI added.

The AI startup's launch of Grok comes eight months after Musk founded the firm in March.

Magazine: Hall of Flame: Peter McCormack’s Twitter regrets — ‘I can feel myself being a dick’

AI chatbots are illegally ripping off copyrighted news, says media group

AI developers are taking revenue, data and users away from news publications by building competing products, the News Media Alliance claims.

Artificial intelligence developers heavily rely on illegally scraping copyrighted material from news publications and journalists to train their models, a news industry group has claimed.

On Oct. 30, the News Media Alliance (NMA) published a 77-page white paper and accompanying submission to the United States Copyright Office that claims the data sets that train AI models use significantly more news publisher content compared to other sources.

As a result, generative AI systems “copy and use publisher content in their outputs,” which infringes on publishers’ copyright and puts news outlets in competition with AI models.

“Many generative AI developers have chosen to scrape publisher content without permission and use it for model training and in real-time to create competing products,” NMA stressed in an Oct. 31 statement.
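
Permission signals for crawlers are typically expressed through a site's robots.txt file, which publishers can use to disallow AI training crawlers such as OpenAI's published "GPTBot" user agent. A minimal compliance check, as a sketch:

```python
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def may_crawl(url: str, user_agent: str = "GPTBot") -> bool:
    # GPTBot is OpenAI's published crawler user agent; substitute
    # whichever agent string applies to your crawler.
    parts = urlparse(url)
    rp = RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # fetches and parses the site's robots.txt
    return rp.can_fetch(user_agent, url)

print(may_crawl("https://example.com/some-article"))
```

robots.txt is advisory rather than enforceable, which is part of why the NMA is pressing for legal and licensing mechanisms on top of it.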

The group argues while news publishers make investments and take on risks, AI developers are the ones rewarded “in terms of users, data, brand creation, and advertising dollars.”

Reduced revenues, fewer employment opportunities and tarnished relationships with their audiences are other setbacks publishers face, the NMA noted in its submission to the Copyright Office.

To combat the issues, the NMA recommended the Copyright Office declare that using a publication’s content to monetize AI systems harms publishers. The group also called for various licensing models and transparency measures to restrict the ingestion of copyrighted materials.

The NMA also recommended the Copyright Office adopt measures to stop the scraping of protected content from third-party websites.

The NMA acknowledged the benefits of generative AI and noted that publications and journalists can use AI for proofreading, idea generation and search engine optimization.

OpenAI’s ChatGPT, Google’s Bard and Anthropic’s Claude are three AI chatbots that have seen increased use over the last 12 months. However, the methods to train these AI models have been criticized, with all facing copyright infringement claims in court.

Related: How Google’s AI legal protections can change art and copyright protections

Comedian Sarah Silverman sued OpenAI and Meta in July, claiming the two firms used her copyrighted work to train their AI systems without permission.

OpenAI and Google were hit with separate class-action suits over claims they scraped private user information from the internet.

Google has said it will assume legal responsibility if its customers are alleged to have infringed copyright for using its generative AI products on Google Cloud and Workspace.

“If you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.”

However, Google’s Bard search tool isn't covered by its legal protection promise.

OpenAI and Google did not immediately respond to a request for comment.

Magazine: AI Eye: Real uses for AI in crypto, Google’s GPT-4 rival, AI edge for bad employees

US Space Force pauses use of ChatGPT-like tools due to security fears: Report

At least 500 Space Force staff members have been affected, according to the department’s former chief software officer.

The United States Space Force has temporarily banned its staff from using generative artificial intelligence tools while on duty to protect government data, according to reports.

Space Force members were informed that they “are not authorized” to use web-based generative AI tools, which create text, images and other media, unless specifically approved, according to an Oct. 12 report by Bloomberg citing a Sept. 29 memorandum addressed to the Guardian Workforce (Space Force members).

Generative AI “will undoubtedly revolutionize our workforce and enhance Guardian’s ability to operate at speed,” Lisa Costa, Space Force’s deputy chief of space operations for technology and innovation, reportedly said in the memorandum.

However, Costa cited concerns over current cybersecurity and data handling standards, explaining that AI and large language model (LLM) adoption needs to be more “responsible.”

The United States Space Force is a space service branch of the U.S. Armed Forces tasked with protecting the U.S. and allied interests in space.

The Space Force’s decision has already impacted at least 500 individuals using a generative AI platform called “Ask Sage,” according to Bloomberg, citing comments from Nick Chaillan, former chief software officer for the United States Air Force and Space Force.

Chaillan reportedly criticized the Space Force’s decision. “Clearly, this is going to put us years behind China,” he wrote in a September email complaining to Costa and other senior defense officials.

“It’s a very short-sighted decision,” Chaillan added.

Chaillan noted that the U.S. Central Intelligence Agency and its departments have developed generative AI tools of their own that meet data security standards.

Related: Data protection in AI chatting: Does ChatGPT comply with GDPR standards?

Concerns that LLMs could leak private information to the public have grown among some governments in recent months.

Italy temporarily blocked AI chatbot ChatGPT in March, citing suspected breaches of data privacy rules before reversing its decision about a month later.

Tech giants such as Apple, Amazon, and Samsung are among the firms that have also banned or restricted employees from using ChatGPT-like AI tools at work.

Magazine: Musk’s alleged price manipulation, the Satoshi AI chatbot and more

AI can be used in ‘every single process’ of JPMorgan’s operations, says CEO

JPMorgan’s CEO Jamie Dimon pointed to trading, hedging, research and error detection as just some of the processes that can be streamlined by AI.

JPMorgan CEO Jamie Dimon says artificial intelligence could be applied to “every single process” of his firm’s operations and may replace humans in certain roles.

In an Oct. 2 interview with Bloomberg, Dimon said he expects to see “all different types of models” and tools and technology for AI in the future. “It’s a living, breathing thing,” he said, adding:

“But the way to think about for us is every single process, so errors, trading, hedging, research, every app, every database, you can be applying AI.”

“So it might be as a co-pilot, it might be to replace humans … AI is doing all the equity hedging for us for the most part. It’s idea generation, it’s large language models,” he said, adding more generally, it could also impact customer service. 

“We already have thousands of people doing it,” said the JPMorgan CEO about AI research, including some of the “top scientists around the world.”

Asked whether he expects AI will replace some jobs, Dimon said “of course” — but stressed that technology has always done so.

“People need to take a deep breath. Technology has always replaced jobs,” he explained.

“Your children will live to 100 and not have cancer because of technology and literally they'll probably be working three days a week. So technology’s done unbelievable things for mankind.”

However, Dimon acknowledged there are also “negatives” to emerging technologies.

When it comes to AI, Dimon says he’s particularly concerned about “AI being used by bad people to do bad things” — particularly in cyberspace — but is hopeful that legal guardrails will curtail such conduct over time.

Related: AI tech boom: Is the artificial intelligence market already saturated?

Dimon concluded that AI will add “huge value” to the workforce, and said that if the firm replaces employees with AI, he hopes it will be able to redeploy displaced workers into more suitable roles.

“We expect to be able to get them a job somewhere local in a different branch or a different function, if we can do that, and we’ll be doing that with any dislocation that takes place as a result of AI.”

Magazine: AI Eye: Real uses for AI in crypto, Google’s GPT-4 rival, AI edge for bad employees

Meta and Microsoft launch open-source AI model Llama 2

Llama 2 is trained on 40% more public data and can process twice as much context as Llama 1, according to Meta.

Big Tech firms Meta and Microsoft have teamed up to launch Llama 2, an open-source large language model from Meta that will feature on Microsoft’s Windows and cloud computing platform Azure.

The pair announced the collaboration on July 18, saying Llama 2 was made free for research and commercial use while also being optimized to run on Windows.

The announcement confirmed rumors from last week that said Llama 2 would be built for businesses and researchers to create applications on Meta’s AI tech stack.

Meta claimed Llama 2 was trained on 40% more publicly available online data sources and can process twice as much context compared to Llama 1.

The firm said Llama 2 outperforms many competing open-source LLMs when it comes to coding, proficiency, reasoning and performance on knowledge tests. However, Meta conceded it isn’t quite on par with closed-source competitors such as OpenAI’s GPT-4, according to one of its research papers.
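
Because Llama 2 is free for research and commercial use, the weights can be run through common open-source toolchains. A sketch using Hugging Face's transformers library; the "meta-llama/Llama-2-7b-hf" checkpoint is gated, so access must first be granted under Meta's license:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Gated checkpoint: requires accepting Meta's license on Hugging Face
# and authenticating (e.g., via `huggingface-cli login`).
model_id = "meta-llama/Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places weights on available hardware
# (requires the accelerate package).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Open-source language models are useful because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```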

In a July 18 Instagram post, Meta CEO Mark Zuckerberg said Llama 2 “gives researchers and businesses access to build with our next generation large language model as the foundation of their work.”

Mark Zuckerberg with Microsoft CEO Satya Nadella. Source: Instagram

Meta said it was “blown away” by the demand for Llama 1 following the release of its limited version in February, which received over 100,000 requests for access. The model was soon leaked online by a user of the imageboard website 4chan.

Related: AI has potential to send Bitcoin price over $750K — Arthur Hayes

Llama 1's figures, however, fell far short of ChatGPT’s, which saw an estimated 100 million or more users sign up to use the model in its first three months, according to a February Reuters report.

With the partnership, Microsoft now backs two big players in the AI space, having invested a cumulative $13 billion in OpenAI, according to a January report by Fortune.

Meta’s decision to open-source Llama was criticized by two United States senators in June, who claimed that the “seemingly minimal” protections in the first version of Llama potentially opened the door for malicious users to engage in “criminal tasks.”

Magazine: AI Eye: AI travel booking hilariously bad, 3 weird uses for ChatGPT, crypto plugins
