
Mark Zuckerberg says Meta wearables that read brain signals are coming soon

The new neural technology that Meta is developing will be “pretty wild,” said Zuckerberg, adding that its first application will be for AR glasses.

Meta CEO Mark Zuckerberg has hinted that his firm is making progress on its first “consumer neural interfaces,” non-invasive wearable devices that can interpret brain signals to control computers.

However, unlike Elon Musk’s Neuralink brain chip, Zuckerberg explained that these devices wouldn’t be something that “jacks into your brain” but something wearable on the wrist that can “read neural signals that your brain sends through your nerves to your hand to basically move it in different subtle ways.”
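Meta has not published how the wristband’s decoding works, but the general shape of surface electromyography (sEMG) gesture recognition is well established: sample the electrical activity at the wrist, cut it into short windows, extract features and classify. The Python sketch below is purely illustrative; the sampling rate, channel count, features and linear classifier are assumptions for demonstration, not Meta’s design.

```python
import numpy as np

# Illustrative sketch of sEMG gesture decoding; all constants are assumed.
SAMPLE_RATE_HZ = 2000   # assumed sensor sampling rate
WINDOW_MS = 200         # length of one classification window
N_CHANNELS = 8          # assumed number of electrodes around the wrist

def extract_features(window: np.ndarray) -> np.ndarray:
    """Classic time-domain sEMG features for one multi-channel window."""
    mav = np.mean(np.abs(window), axis=0)           # mean absolute value
    rms = np.sqrt(np.mean(window ** 2, axis=0))     # root mean square
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)  # zero crossings
    return np.concatenate([mav, rms, zc])

def classify_gesture(features: np.ndarray, weights: np.ndarray) -> int:
    """Toy linear classifier mapping a feature vector to a gesture id."""
    return int(np.argmax(weights @ features))

# Random stand-ins for a real signal window and trained weights.
n_samples = SAMPLE_RATE_HZ * WINDOW_MS // 1000
window = np.random.randn(n_samples, N_CHANNELS)
weights = np.random.randn(4, 3 * N_CHANNELS)        # 4 gestures, 24 features
print("predicted gesture:", classify_gesture(extract_features(window), weights))
```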

Meta first began discussing the development of “wrist-based interaction” in March 2021 as part of Facebook Reality Labs Research.


Meta launches ‘most capable openly available LLM to date’ rivalling GPT and Claude

Llama-3 may be the company’s most ambitious artificial intelligence project yet.

Meta bellied up to the artificial intelligence (AI) bar today to announce that its newest large language model, Llama-3, is the “most capable” and “best open source model” currently available.

The company’s statements surrounding the general availability of Llama-3, as well as a new standalone “Meta AI” portal, have the tech world abuzz with declarations that the current space leaders, Microsoft, OpenAI, Google and Anthropic, finally have some stiff competition from the company formerly called Facebook.

A Meta blog post left no doubt about the company’s position on where its Llama suite of LLMs now lies in the global AI model hierarchy.
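Because the weights are openly available, trying Llama-3 takes only a few lines. The sketch below uses the Hugging Face transformers library and assumes you have accepted Meta’s license for the gated meta-llama/Meta-Llama-3-8B-Instruct repository and are logged in to the Hub; it is an illustration, not Meta’s reference code.

```python
# Minimal sketch: load the openly released Llama-3 8B Instruct weights and
# generate text. Assumes license acceptance on the Hugging Face Hub and a
# GPU with enough memory (device_map="auto" spreads layers as available).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "In one sentence, why do open-weight LLMs matter?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```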


Google slashes price of Gemini AI model, opens up to developers

The Google parent company Alphabet said it is slashing prices for the Pro version of its AI model Gemini and plans to make its tools more accessible to developers so they can create their own versions.

Alphabet, the parent company of Google, announced on Dec. 13 that it plans to slash the cost of a version of its most advanced artificial intelligence (AI) model Gemini and make it more accessible to developers. 

According to reports, the company said the price for the Pro model of Gemini has been cut to 25-50% of what it was in June.

Gemini was introduced in three variations on Dec. 6, with its most sophisticated version able to reason about and understand information at a higher level than other Google technology, as well as process video and audio.
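For developers, access to Gemini Pro runs through Google’s generative AI SDK. A minimal sketch is below; it assumes the google-generativeai Python package and an API key from Google AI Studio, and the per-request pricing discussed above is applied server-side rather than in code.

```python
# Minimal sketch of calling Gemini Pro via Google's generative AI SDK
# (pip install google-generativeai). Requires an API key from AI Studio.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Explain multimodal reasoning in two sentences.")
print(response.text)
```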


Meta releases ‘Purple Llama’ AI security suite to meet White House commitments

Meta believes that this is “the first industry-wide set of cyber security safety evaluations for Large Language Models (LLMs).”

Meta released a suite of tools for securing and benchmarking generative artificial intelligence (AI) models on Dec. 7.

Dubbed “Purple Llama,” the toolkit is designed to help developers build safely and securely with generative AI tools, such as Meta’s open-source model, Llama-2.

The release, which Meta claims is the “first industry-wide set of cyber security safety evaluations for Large Language Models (LLMs),” includes CyberSec Eval, a set of cybersecurity safety benchmarks for LLMs, and Llama Guard, a safety classifier for filtering model inputs and outputs.
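As a concrete example, the sketch below screens a user prompt with Llama Guard through Hugging Face transformers. The model ID matches the public Hub release, but gated access and the exact prompt template can vary by version, so treat this as an illustration rather than canonical usage.

```python
# Sketch: classify a user prompt as safe/unsafe with Llama Guard.
# Assumes access to the gated meta-llama/LlamaGuard-7b Hub repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/LlamaGuard-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

chat = [{"role": "user", "content": "How do I pick a lock?"}]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=30)
# The model replies "safe" or "unsafe" plus any violated category codes.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```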


Meta to fight AI-generated fake news with ‘invisible watermarks’

Meta will make use of a deep-learning model to apply watermarks, invisible to the human eye, to images generated with its AI tool.

Social media giant Meta (formerly Facebook) will include an invisible watermark in all images it creates using artificial intelligence (AI) as it steps up measures to prevent misuse of the technology.

In a Dec. 6 report detailing updates for Meta AI, the company’s virtual assistant, Meta revealed it will soon add invisible watermarking to all AI-generated images created with the “imagine with Meta AI experience.” Like numerous other AI image generators, Meta AI produces images and content based on user prompts, and Meta aims to prevent bad actors from treating the service as yet another tool for duping the public.

The latest watermark feature is designed to be difficult for a creator to remove.
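Meta has not disclosed how its learned watermark is embedded. For contrast, the toy sketch below shows the classic least-significant-bit technique for invisible watermarking: it is imperceptible to the eye, but unlike a deep-learning watermark it is trivially destroyed by cropping or re-encoding, which is exactly the weakness Meta’s approach targets.

```python
import numpy as np

# Toy least-significant-bit watermark; NOT Meta's deep-learning method.
def embed_bits(image: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide one bit in the LSB of each of the first len(bits) pixels."""
    flat = image.flatten()  # flatten() returns a copy, safe to mutate
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit   # clear LSB, then set it to the bit
    return flat.reshape(image.shape)

def extract_bits(image: np.ndarray, n: int) -> list[int]:
    """Read the hidden bits back out of the first n pixels."""
    return [int(v & 1) for v in image.flatten()[:n]]

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
mark = [1, 0, 1, 1, 0, 0, 1, 0]
watermarked = embed_bits(image, mark)
assert extract_bits(watermarked, len(mark)) == mark
print("watermark recovered:", extract_bits(watermarked, len(mark)))
```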


IBM, Meta and others form ‘AI Alliance’ to advance AI development

In a joint statement, IBM and Meta outlined the AI Alliance’s objectives, emphasizing a commitment to safety, collaboration, diversity, economic opportunity, and universal benefits.

In the race for market supremacy among artificial intelligence (AI) firms, a coalition of technology leaders spearheaded by IBM and Meta established the AI Alliance.

The two companies outlined the alliance’s objectives in their statement, emphasizing a commitment to safety, collaboration, diversity, economic opportunity and universal benefits.

While numerous members endorse open-source development, it’s important to note that adherence to this model is not obligatory for membership.

“The progress we continue to witness in AI is a testament to open innovation and collaboration across communities of creators, scientists, academics, and business leaders.”

According to IBM and Meta, the AI Alliance will create a governing board and technical oversight committee focused on advancing AI projects and setting standards and guidelines.

“The AI Alliance brings together researchers, developers, and companies to share tools and knowledge that can help us all make progress whether models are shared openly or not.”

Looking to engage the academic community, the AI Alliance also includes several educational and research institutions, including CERN, NASA, Cleveland Clinic, Cornell University, Dartmouth, Imperial College London, University of California Berkeley, University of Illinois, University of Notre Dame, The University of Tokyo, and Yale University.

While Meta has advocated for open-source AI models and responsible development, the company opted to decentralize and streamline AI development by disbanding its responsible AI team in November.

Related: Meta’s AI boss says there’s an ‘AI war’ underway, and Nvidia is ‘supplying the weapons’


Meta’s AI boss says there’s an ‘AI war’ underway and Nvidia is ‘supplying the weapons’

The outspoken executive also said that Meta isn’t pursuing quantum computing because it isn’t currently useful.

Meta AI boss Yann LeCun sounded off on the industry-wide state of artificial intelligence and quantum computing during a recent event celebrating the 10th anniversary of Meta’s Fundamental Artificial Intelligence Research (FAIR) team.

LeCun weighed in on Nvidia’s current stranglehold on the AI hardware industry, the likelihood that human-level AI will emerge in the near future, and why Meta isn’t currently pursuing quantum computing alongside its competitors.

The artificial intelligence war

LeCun’s views on the imminence of so-called human-level AI are well documented: he has long argued that it remains a distant prospect.

By comparison, Elon Musk recently made the bold prediction that a “Digital God” would arrive within the next 3 to 5 years.

In the middle, perhaps, lies Nvidia CEO Jensen Huang. He recently stated that AI would be able to complete tests in a manner “fairly competitive” with humans in the next five years.


Meta dissolves responsible AI division amid restructuring

The RAI restructuring comes as the Facebook parent nears the end of its “year of efficiency,” as CEO Mark Zuckerberg called it during a February earnings call.

Social media giant Meta has reportedly disbanded its Responsible AI (RAI) division, the team dedicated to regulating the safety of its artificial intelligence ventures as they are developed and deployed.

According to a report, many RAI team members have transitioned to roles within the Generative AI product division at the company, with some joining the AI Infrastructure team.

Meta’s Generative AI team, which was established in February, focuses on developing products that generate language and images to mimic their human-made equivalents. It came as companies across the tech industry poured money into machine learning development to avoid being left behind in the AI race. Meta is among the Big Tech companies that have been playing catch-up since the AI boom took hold.

The RAI restructuring comes as the Facebook parent nears the end of its “year of efficiency,” as CEO Mark Zuckerberg called it during a February earnings call. So far, that has played out as a flurry of layoffs, team mergers and redistributions at the company.

Ensuring AI’s safety has become a priority of top players in the space, especially as regulators and other officials pay closer attention to the nascent technology’s potential harms. In July, Anthropic, Google, Microsoft and OpenAI formed an industry group focused specifically on setting safety standards as AI advances.

Related: Report: Google sues scammers over creation of fake Bard AI chatbot

According to the report, RAI team members have been redistributed within the company but remain committed to supporting responsible AI development and use, with Meta emphasizing ongoing investment in the area.

The company recently introduced two AI-powered generative models. The first, Emu Video, leverages Meta’s previous Emu model and can generate video clips based on text and image inputs. The second model, Emu Edit, is focused on image manipulation, promising more precision in image editing.

Cointelegraph reached out to Meta for more information but had not received a response at the time of publication.

Magazine: Train AI models to sell as NFTs, LLMs are Large Lying Machines: AI Eye


Meta bans usage of generative AI ad creation tools for political advertisers

Meta updated its help center with a note explaining that political advertisers are prohibited from using its new generative AI ad campaign creation tools.

Meta, the parent company of Facebook and Instagram, is not allowing political campaigns and advertisers to use its generative artificial intelligence (AI) advertising tools, a company spokesperson said in a Reuters exclusive report.

On Nov. 6, Meta updated its help center to reflect the decision. In a note explaining how the tools work, the company said that as it tests new generative AI ads creation tools in its Ads Manager, “advertisers running campaigns that qualify as ads for Housing, Employment or Credit or Social Issues, Elections, or Politics, or related to Health, Pharmaceuticals or Financial Services aren’t currently permitted to use these Generative AI features.”

"We believe this approach will allow us to better understand potential risks and build the right safeguards for the use of Generative AI in ads that relate to potentially sensitive topics in regulated industries.”

Meta’s general advertising standards, however, contain no rules specific to AI, though the company does prohibit ads containing content that has been debunked by its fact-checking partners.

Related: Consumer surveys show a growing distrust of AI and firms that use it

In September, Google updated its political content policy to mandate that all verified election advertisers disclose any use of AI in their campaign content.

Google’s standards call out “synthetic content that inauthentically depicts real or realistic-looking people or events” and say the notices must be “clear and conspicuous” in places where users will notice them.

However, on Google’s platforms, “Ads that contain synthetic content altered or generated in such a way that is inconsequential to the claims made in the ad will be exempt from these disclosure requirements.”

Regulators in the United States are also considering rules around AI-generated political deepfakes ahead of the upcoming 2024 election cycle.

There are already concerns that AI usage on social media could sway voter sentiment, given how easily the technology can produce fake news, deepfakes and other misleading content.

Additionally, there have been claims that ChatGPT, one of the most popular AI chatbots, has a left-leaning political bias, though these claims are widely disputed in the AI community and academia.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change


TikTok, Snapchat, OnlyFans and others to combat AI-generated child abuse content

Major social platforms, AI companies, governments and NGOs issued a joint statement pledging to combat AI-generated abusive content, such as explicit images of children.

A coalition of major social media platforms, artificial intelligence (AI) developers, governments and non-governmental organizations (NGOs) has issued a joint statement pledging to combat abusive content generated by AI.

On Oct. 30, the United Kingdom issued the policy statement, which carries 27 signatories, including the governments of the United States, Australia, Korea, Germany and Italy, along with social media platforms Snapchat, TikTok and OnlyFans.

It was also undersigned by the AI platforms Stability AI and Ontocord.AI and a number of NGOs working toward internet safety and children’s rights, among others.

The statement says that while AI offers “enormous opportunities” in tackling threats of online child sexual abuse, it can also be utilized by predators to generate such types of material.

It cited data from the Internet Watch Foundation showing that, of 11,108 AI-generated images shared in a dark web forum over a one-month period, 2,978 (roughly 27%) depicted content related to child sexual abuse.

Related: US President Joe Biden urges tech firms to address risks of AI

The U.K. government said the statement stands as a pledge to “seek to understand and, as appropriate, act on the risks arising from AI to tackling child sexual abuse through existing fora.”

“All actors have a role to play in ensuring the safety of children from the risks of frontier AI.”

It encouraged transparency around plans for measuring, monitoring and managing the ways AI can be exploited by child sexual offenders, and called on countries to build policies on the topic.

Additionally, it aims to maintain a dialogue around combating child sexual abuse in the AI age. This statement was released in the run-up to the U.K. hosting its global summit on AI safety this week.

Concerns over child safety in relation to AI have been a major topic of discussion in the face of the rapid emergence and widespread use of the technology.

On Oct. 26, 34 states in the U.S. filed a lawsuit against Meta, the Facebook and Instagram parent company, over child safety concerns.

Magazine: AI Eye: Get better results being nice to ChatGPT, AI fake child porn debate, Amazon’s AI reviews
