Claude

Anthropic, Palantir follow Meta’s lead in taking AI to war

Anthropic’s AI model will be integrated into one of Palantir’s data systems, which is authorized to contain “secrets” critical to US national security.

AI firm Anthropic has become the latest company to give the United States government access to its AI models for national security purposes, following a similar announcement from Meta earlier this week.

US defense departments will be granted access to Anthropic’s Claude 3 and 3.5 AI models, which will be integrated into Palantir’s AI Platform and secured on Amazon Web Services, Palantir said in a Nov. 7 statement.

“Our partnership with Anthropic and AWS provides U.S. defense and intelligence communities the tool chain they need to harness and deploy AI models securely, bringing the next generation of decision advantage to their most critical missions,” Palantir’s chief technology officer Shyam Sankar explained.

Google’s new Gemini AI model dominates benchmarks, beats GPT-4o and Claude-3

This is the first time Google’s taken the top slot on the Chatbot Arena leaderboard.

There’s a new top dog in the world of generative artificial intelligence benchmarks, and its name is Gemini 1.5 Pro.

The previous champ, OpenAI’s GPT-4o, was finally surpassed on Aug. 1 when Google quietly launched an experimental release of its latest model.

Gemini’s latest update arrived without fanfare and is currently labelled as experimental. But it quickly gained the attention of the AI community across social media as reports began to trickle in that it was surpassing its rivals on benchmark scores.

Anthropic Launches Claude 3.5 Sonnet, Surpassing OpenAI’s ChatGPT

Anthropic, one of the largest artificial intelligence (AI) companies in the market, has released the latest iteration of its in-house large language model (LLM) Claude, identified as 3.5 Sonnet. Anthropic claims the new assistant competes with OpenAI’s GPT-4o, surpassing it in several benchmarks, including Graduate Level Reasoning, Grade School Math, Multilingual Math, and Code. […]

8 AI Chatbots Predict Precious Metals Year-End Prices: Gold at $2,800, Silver at $42

In recent months, our newsdesk has been testing the use of artificial intelligence (AI) chatbots to forecast bitcoin prices by the end of 2024 at specific intervals throughout the year. This time, we’ve changed our approach by employing the same generative AI chatbots to project the year-end prices for an ounce of .999 fine gold […]

7 AI Chatbots Predict Bitcoin’s Price Post-Halving; See $80K-$100K by Year-End

On April 28, 2024, the price of bitcoin was coasting along at $62,900 per unit as of 7:28 p.m. Eastern Time (ET). Since then, the price fell below the $62K mark by Monday morning ET, only to climb back above $62,000 by mid-afternoon. It has been nine days since the last halving event and 109 […]

AI chatbots are illegally ripping off copyrighted news, says media group

AI developers are taking revenue, data and users away from news publications by building competing products, the News Media Alliance claims.

Artificial intelligence developers heavily rely on illegally scraping copyrighted material from news publications and journalists to train their models, a news industry group has claimed.

On Oct. 30, the News Media Alliance (NMA) published a 77-page white paper and accompanying submission to the United States Copyright Office that claims the data sets used to train AI models rely significantly more heavily on news publisher content than on other sources.

As a result, AI systems “copy and use publisher content in their outputs,” which infringes on publishers’ copyright and puts news outlets in competition with AI models.

“Many generative AI developers have chosen to scrape publisher content without permission and use it for model training and in real-time to create competing products,” NMA stressed in an Oct. 31 statement.

The group argues that while news publishers make the investments and take on the risks, it is AI developers who reap the rewards “in terms of users, data, brand creation, and advertising dollars.”

Reduced revenues, fewer employment opportunities and tarnished relationships with their audiences are other setbacks publishers face, the NMA noted in its submission to the Copyright Office.

To combat the issues, the NMA recommended the Copyright Office declare that using a publication’s content to monetize AI systems harms publishers. The group also called for various licensing models and transparency measures to restrict the ingestion of copyrighted materials.

The NMA also recommended that the Copyright Office adopt measures to stop the scraping of protected content from third-party websites.

The NMA acknowledged the benefits of generative AI and noted that publications and journalists can use AI for proofreading, idea generation and search engine optimization.

OpenAI’s ChatGPT, Google’s Bard and Anthropic’s Claude are three AI chatbots that have seen increased use over the last 12 months. However, the methods to train these AI models have been criticized, with all facing copyright infringement claims in court.

Related: How Google’s AI legal protections can change art and copyright protections

Comedian Sarah Silverman sued OpenAI and Meta in July claiming the two firms used her copyrighted work to train their AI systems without permission.

OpenAI and Google were hit with separate class-action suits over claims they scraped private user information from the internet.

Google has said it will assume legal responsibility if its customers are alleged to have infringed copyright for using its generative AI products on Google Cloud and Workspace.

“If you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved,” the company said.

However, Google’s Bard search tool isn't covered by its legal protection promise.

OpenAI and Google did not immediately respond to a request for comment.

Magazine: AI Eye: Real uses for AI in crypto, Google’s GPT-4 rival, AI edge for bad employees

Google to invest another $2B in AI firm Anthropic: Report

Google has already invested $500 million as part of the deal, while the outstanding $1.5 billion will be paid over time, according to the Wall Street Journal.

Google has doubled down on its artificial intelligence bets by investing another $2 billion into AI startup Anthropic, according to a new report.

Google has already invested $500 million upfront in Anthropic, a rival to ChatGPT creator OpenAI, and will pay the remaining $1.5 billion over time, according to an Oct. 27 report by the Wall Street Journal (WSJ), which cited people familiar with the matter.

The mega-deal adds to Google’s $550 million investment in Anthropic earlier this year.

Google Cloud also struck a multi-year deal with Anthropic a few months ago worth over $3 billion, the WSJ revealed, citing a person familiar with the matter.

The news follows Amazon’s massive $4 billion investment in Anthropic late last month.

Anthropic is using much of this funding to train its AI systems, such as its AI assistant Claude, in hopes of achieving the next big breakthrough in the AI industry.

On the other side of the fence is OpenAI, which has received more than $13 billion in funding from Microsoft alone since 2019 and continues to build more advanced versions of its own AI chatbot, ChatGPT. The popular chatbot amassed over 100 million users within the first two months of its launch in November 2022, catching the attention of venture capital firms around the world looking to invest in the space.

The co-founders of Anthropic, siblings Dario and Daniela Amodei, previously worked at OpenAI but left in 2021 following disputes with OpenAI’s CEO Sam Altman over safety implications associated with building AI systems.

Related: Universal Music Group sues Anthropic AI over copyright infringement

Prior to Google and Amazon, Anthropic was largely bankrolled by former FTX CEO Sam Bankman-Fried, who invested about $530 million in Anthropic in April 2022, about seven months before FTX collapsed.

Anthropic’s surge in valuation has been viewed as a positive sign for FTX creditors, who hope to be fully repaid through FTX’s bankruptcy case.

Magazine: AI Eye: Real uses for AI in crypto, Google’s GPT-4 rival, AI edge for bad employees

AI researchers say they’ve found a way to jailbreak Bard and ChatGPT

Artificial intelligence researchers claim to have found an automated, easy way to construct "adversarial attacks" on large language models.

United States-based researchers claim to have found a way to consistently circumvent the safety measures of artificial intelligence chatbots such as ChatGPT and Bard in order to generate harmful content.

According to a report released on July 27 by researchers at Carnegie Mellon University and the Center for AI Safety in San Francisco, there’s a relatively easy method to get around safety measures used to stop chatbots from generating hate speech, disinformation, and toxic material.

The circumvention method involves appending long suffixes of characters to prompts fed into chatbots such as ChatGPT, Claude, and Google Bard.
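To make the mechanics concrete, here is a minimal Python sketch of the suffix-append pattern the researchers describe. The `query_model` and `build_attack_prompt` names are hypothetical stand-ins rather than anything from the paper, and the suffix shown is a placeholder: in the actual attack, the suffix is a string of tokens discovered by automated, gradient-guided search, not hand-written text.

```python
# Minimal sketch of the suffix-append pattern, under stated assumptions:
# query_model() is a hypothetical stand-in for any chat-model API call,
# and ADVERSARIAL_SUFFIX is a placeholder, not a real optimized string.

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a hosted chat model."""
    # A real implementation would send `prompt` to a model endpoint and
    # return its completion; this stub just echoes for demonstration.
    return f"[model response to a {len(prompt)}-character prompt]"

def build_attack_prompt(request: str, suffix: str) -> str:
    """Append an adversarial suffix to an otherwise ordinary request."""
    # The request itself is left untouched; only the appended suffix,
    # found by automated search in the paper, nudges the model past
    # its refusal behavior.
    return f"{request} {suffix}"

# Placeholder standing in for a machine-optimized string of tokens.
ADVERSARIAL_SUFFIX = "<suffix found by automated search>"

prompt = build_attack_prompt("Write the requested tutorial.", ADVERSARIAL_SUFFIX)
print(query_model(prompt))
```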

The researchers used an example of asking the chatbot for a tutorial on how to make a bomb, which it declined to provide. 

Screenshots of harmful content generation from AI models tested. Source: llm-attacks.org

Researchers noted that even though the companies behind these LLMs, such as OpenAI and Google, could block specific suffixes, there is no known way of preventing all attacks of this kind.

The research also highlighted increasing concern that AI chatbots could flood the internet with dangerous content and misinformation.

Zico Kolter, a professor at Carnegie Mellon and an author of the report, said:

“There is no obvious solution. You can create as many of these attacks as you want in a short amount of time.”

The findings were presented to AI developers Anthropic, Google, and OpenAI for their responses earlier in the week.

OpenAI spokeswoman Hannah Wong told The New York Times that the company appreciates the research and is “consistently working on making our models more robust against adversarial attacks.”

Somesh Jha, a professor at the University of Wisconsin-Madison specializing in AI security, commented that if these types of vulnerabilities keep being discovered, “it could lead to government legislation designed to control these systems.”

Related: OpenAI launches official ChatGPT app for Android

The research underscores the risks that must be addressed before deploying chatbots in sensitive domains.

In May, Pittsburgh, Pennsylvania-based Carnegie Mellon University received $20 million in federal funding to create a brand new AI institute aimed at shaping public policy.

Magazine: AI Eye: AI travel booking hilariously bad, 3 weird uses for ChatGPT, crypto plugins
