
Generative Artificial Intelligence

Google’s Gemini demo is now getting accused of being ‘fake’

Onlookers praised Google’s Gemini tech demo upon its release last week, but the tech firm has since admitted parts of it were edited and shortened for “brevity.”

A "hands-on" tech demo of Google’s new artificial intelligence model Gemini has gone from being the talk of the town to being accused by critics of being “basically entirely fake.”

The six-minute video, which has garnered 2.1 million views on YouTube since its release on Dec. 7, shows the model seamlessly interacting with a human operator in seemingly real time: analyzing a drawing of a duck, responding to hand gestures and inventing a game called “Guess the Country” from just an image prompt of a world map.

However, Oriol Vinyals, a Google DeepMind executive, has since clarified that while the user prompts and outputs in the video are real, it was “shortened for brevity.” In reality, Gemini’s interactions were text-based rather than voiced and took much longer than the video suggests.

California governor calls for statewide generative AI training

In a recent report, California Governor Gavin Newsom emphasized the significance of preparing for the next generation of skills essential to thrive in the GenAI economy.

California Governor Gavin Newsom has stressed the importance of people staying ahead of the curve in generative artificial intelligence (GenAI) by acquiring new skills and becoming acquainted with the emerging technology.

The report suggests that California residents should have access to education and training opportunities in GenAI, noting:

"To support California’s state government workforce and prepare for the next generation of skills needed to thrive in the GenAI economy, agencies will provide trainings for state government workers to use state-approved GenAI to achieve equitable outcomes.”

It said this is essential given the significant employment impact indicated by recent research on GenAI.

The report cited a Goldman Sachs forecast that GenAI is expected to affect 300 million jobs worldwide, despite the productivity gains it is also expected to deliver.

“As such, the State must lead in training and supporting workers, allowing them to participate in the AI economy and creating the demand for businesses to locate and hire here in California,” it noted.

It further stated that GenAI education initiatives should commence at higher education institutions and vocational schools.

Related: IBM launches $500M fund to develop generative AI for enterprise

Several recent reports have examined AI’s potential impact on jobs across the global economy.

On July 12, the Organisation for Economic Co-operation and Development (OECD) released a report outlining the jobs most at risk from AI.

The research labels “high-skill, white collar jobs” as the most exposed to AI, with the OECD noting that these occupations typically require significant training or tertiary education.

Furthermore, measures of AI exposure indicate that available tools have made the most progress in areas requiring “non-routine, cognitive tasks such as information ordering, memorization and perceptual speed.”

Magazine: Train AI models to sell as NFTs, LLMs are Large Lying Machines: AI Eye

AI chatbots are illegally ripping off copyrighted news, says media group

AI developers are taking revenue, data and users away from news publications by building competing products, the News Media Alliance claims.

Artificial intelligence developers heavily rely on illegally scraping copyrighted material from news publications and journalists to train their models, a news industry group has claimed.

On Oct. 30, the News Media Alliance (NMA) published a 77-page white paper and accompanying submission to the United States Copyright Office claiming that the data sets used to train AI models draw significantly more on news publisher content than on other sources.

As a result, AI systems “copy and use publisher content in their outputs,” which infringes on publishers’ copyright and puts news outlets in competition with AI models.

“Many generative AI developers have chosen to scrape publisher content without permission and use it for model training and in real-time to create competing products,” NMA stressed in an Oct. 31 statement.

The group argues that while news publishers make the investments and take on the risks, AI developers reap the rewards “in terms of users, data, brand creation, and advertising dollars.”

Reduced revenues, diminished employment opportunities and tarnished relationships with their audiences are other setbacks publishers face, the NMA noted in its submission to the Copyright Office.

To combat the issues, the NMA recommended the Copyright Office declare that using a publication’s content to monetize AI systems harms publishers. The group also called for various licensing models and transparency measures to restrict the ingestion of copyrighted materials.

The NMA also recommended the Copyright Office adopt measures to stop the scraping of protected content from third-party websites.

The NMA acknowledged the benefits of generative AI and noted that publications and journalists can use AI for proofreading, idea generation and search engine optimization.

OpenAI’s ChatGPT, Google’s Bard and Anthropic’s Claude are three AI chatbots that have seen increased use over the last 12 months. However, the methods used to train these models have been criticized, and all three firms are facing copyright infringement claims in court.

Related: How Google’s AI legal protections can change art and copyright protections

Comedian Sarah Silverman sued OpenAI and Meta in July claiming the two firms used her copyrighted work to train their AI systems without permission.

OpenAI and Google were hit with separate class-action suits over claims they scraped private user information from the internet.

Google has said it will assume legal responsibility if its customers are accused of copyright infringement for using its generative AI products on Google Cloud and Workspace.

“If you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved,” the company said.

However, Google’s Bard search tool isn't covered by its legal protection promise.

OpenAI and Google did not immediately respond to a request for comment.

Magazine: AI Eye: Real uses for AI in crypto, Google’s GPT-4 rival, AI edge for bad employees
