
Meta to fight AI-generated fake news with ‘invisible watermarks’

Meta will make use of a deep-learning model to apply watermarks to images generated with its AI tool, which would be invisible to the human eye.

Social media giant Meta (formerly Facebook) will include an invisible watermark in all images it creates using artificial intelligence (AI) as it steps up measures to prevent misuse of the technology.

In a Dec. 6 report detailing updates for Meta AI — Meta’s virtual assistant — the company revealed it will soon add invisible watermarking to all AI-generated images created with the "imagine with Meta AI experience." Like numerous other AI chatbots, Meta AI generates images and content based on user prompts. However, Meta aims to prevent bad actors from viewing the service as another tool for duping the public.

The new watermark feature would also make it more difficult for a creator to remove the mark from a generated image.
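Meta has not published how its deep-learning watermark works. Purely for intuition, the Python sketch below illustrates the classical (non-learned) idea behind invisible watermarks: hiding payload bits in the parity of quantized mid-frequency DCT coefficients, which survive casual edits better than raw pixel tweaks. Every name and constant here is an illustrative assumption, not Meta's method; production systems replace this fixed rule with trained encoder/decoder networks, which is what makes a mark genuinely hard to strip.

```python
# Toy frequency-domain watermark (parity quantization of a DCT coefficient).
# Illustrative only -- Meta has not published its method, and real systems
# use learned encoder/decoder networks instead of a fixed rule like this.
import numpy as np
from scipy.fft import dctn, idctn

BLOCK = 8          # work on 8x8 pixel blocks
COEF = (3, 2)      # a mid-frequency coefficient: survives mild compression
STEP = 24.0        # quantization step: larger = more robust, more visible

def embed(gray: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide one bit per 8x8 block of a grayscale float image."""
    out = gray.astype(np.float64).copy()
    h, w = out.shape
    idx = 0
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            if idx >= len(bits):
                return out
            block = dctn(out[y:y+BLOCK, x:x+BLOCK], norm="ortho")
            c = block[COEF]
            # Snap the coefficient to a multiple of STEP whose parity
            # encodes the current bit, moving toward c to limit distortion.
            q = np.round(c / STEP)
            if int(q) % 2 != bits[idx]:
                q += 1 if c >= q * STEP else -1
            block[COEF] = q * STEP
            out[y:y+BLOCK, x:x+BLOCK] = idctn(block, norm="ortho")
            idx += 1
    return out

def extract(gray: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the hidden bits by re-reading coefficient parity."""
    bits = []
    h, w = gray.shape
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            if len(bits) >= n_bits:
                return np.array(bits)
            block = dctn(gray[y:y+BLOCK, x:x+BLOCK].astype(np.float64),
                         norm="ortho")
            bits.append(int(np.round(block[COEF] / STEP)) % 2)
    return np.array(bits)

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(64, 64))   # stand-in for a generated image
payload = rng.integers(0, 2, size=16)      # 16-bit watermark payload
marked = embed(img, payload)
assert np.array_equal(extract(marked, 16), payload)
print("max pixel change:", np.abs(marked - img).max())  # small per-pixel shift
```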


IBM, Meta and others form ‘AI Alliance’ to advance AI development

In a joint statement, IBM and Meta outlined the AI Alliance’s objectives, emphasizing a commitment to safety, collaboration, diversity, economic opportunity, and universal benefits.

In the race for market supremacy among artificial intelligence (AI) firms, a coalition of technology leaders spearheaded by IBM and Meta established the AI Alliance.

In their joint statement, the two companies emphasized a commitment to safety, collaboration, diversity, economic opportunity and universal benefits.

While numerous members endorse open-source development, adherence to that model is not obligatory for membership.

“The progress we continue to witness in AI is a testament to open innovation and collaboration across communities of creators, scientists, academics, and business leaders.”

According to IBM and Meta, the AI Alliance will create a governing board and technical oversight committee focused on advancing AI projects and setting standards and guidelines.

“The AI Alliance brings together researchers, developers, and companies to share tools and knowledge that can help us all make progress whether models are shared openly or not.”

Looking to engage the academic community, the AI Alliance also includes several educational and research institutions, including CERN, NASA, Cleveland Clinic, Cornell University, Dartmouth, Imperial College London, University of California Berkeley, University of Illinois, University of Notre Dame, The University of Tokyo, and Yale University.

While Meta has advocated for open-source AI models and responsible development, the company opted to decentralize and streamline AI development by disbanding its responsible AI team in November.

Related: Meta’s AI boss says there’s an ‘AI war’ underway, and Nvidia is ‘supplying the weapons’


Meta’s AI boss says there’s an ‘AI war’ underway and Nvidia is ‘supplying the weapons’

The outspoken executive also said that Meta isn’t pursuing quantum computing because it isn’t currently useful.

Meta AI boss Yann LeCun sounded off on the industry-wide state of artificial intelligence and quantum computing during a recent event celebrating the 10th anniversary of Meta’s Fundamental Artificial Intelligence Research (FAIR) team.

LeCun commented on Nvidia’s current stranglehold on the AI hardware industry, the likelihood that human-level AI will emerge in the near future, and why Meta isn’t currently pursuing quantum computing alongside its competitors.

The artificial intelligence war

LeCun’s skepticism about the imminence of so-called human-level AI is well-documented.

By comparison, Elon Musk recently made the bold prediction that a “Digital God” would arrive within the next three to five years.

In the middle, perhaps, lies Nvidia CEO Jensen Huang. He recently stated that AI would be able to complete tests in a manner “fairly competitive” with humans in the next five years.


Meta dissolves responsible AI division amid restructuring

The RAI restructuring comes as the Facebook parent nears the end of its “year of efficiency,” as CEO Mark Zuckerberg called it during a February earnings call.

Social media giant Meta has reportedly disbanded its Responsible AI (RAI) division, the team dedicated to keeping its artificial intelligence ventures safe as they are developed and deployed.

According to a report, many RAI team members have transitioned to roles within the Generative AI product division at the company, with some joining the AI Infrastructure team.

Meta’s Generative AI team, established in February, focuses on developing products that generate language and images mimicking human-made content. It was created as companies across the tech industry poured money into machine learning development to avoid being left behind in the AI race; Meta is among the Big Tech companies that have been playing catch-up since the AI boom took hold.

The RAI restructuring comes as the Facebook parent nears the end of its “year of efficiency,” as CEO Mark Zuckerberg called it during a February earnings call. So far, that has played out as a flurry of layoffs, team mergers and redistributions at the company.

Ensuring AI’s safety has become a priority of top players in the space, especially as regulators and other officials pay closer attention to the nascent technology’s potential harms. In July, Anthropic, Google, Microsoft and OpenAI formed an industry group focused specifically on setting safety standards as AI advances.

Report: Google sues scammers over creation of fake Bard AI chatbot

According to the report, RAI team members have been redistributed within the company, which says they remain committed to supporting responsible AI development and use, and that investment in the area will continue.

The company recently introduced two AI-powered generative models. The first, Emu Video, leverages Meta’s previous Emu model and can generate video clips based on text and image inputs. The second model, Emu Edit, is focused on image manipulation, promising more precision in image editing.

Cointelegraph reached out to Meta for more information but had not received a response at the time of publication.

Magazine: Train AI models to sell as NFTs, LLMs are Large Lying Machines: AI Eye


Meta bans usage of generative AI ad creation tools for political advertisers

Meta updated its help center with a note explaining that political advertisers are prohibited from using its new generative AI ad campaign creation tools.

Meta, the parent company of Facebook and Instagram, is not allowing political campaigns and advertisers to use its generative artificial intelligence (AI) advertising tools, a company spokesperson said in an exclusive Reuters report.

On Nov. 6, Meta updated its help center to reflect the decision. In a note explaining how the tools work, the company said that as it tests new generative AI ad creation tools in its Ads Manager, “advertisers running campaigns that qualify as ads for Housing, Employment or Credit or Social Issues, Elections, or Politics, or related to Health, Pharmaceuticals or Financial Services aren’t currently permitted to use these Generative AI features.”

"We believe this approach will allow us to better understand potential risks and build the right safeguards for the use of Generative AI in ads that relate to potentially sensitive topics in regulated industries.”

Meta’s general advertising standards, however, don’t include any rules specific to AI, though the company does prohibit ads containing content that has been debunked by its fact-checking partners.

Related: Consumer surveys show a growing distrust of AI and firms that use it

In September, Google updated its political content policy, mandating that all verified election advertisers disclose the use of AI in their campaign content.

Google’s standards call out “synthetic content that inauthentically depicts real or realistic-looking people or events” and say the notices must be “clear and conspicuous” in places where users will notice them.

However, on Google’s platforms, “Ads that contain synthetic content altered or generated in such a way that is inconsequential to the claims made in the ad will be exempt from these disclosure requirements.”

Regulators in the United States are also considering rules around political AI deepfakes ahead of the upcoming 2024 election cycle.

Already, there are concerns that AI use on social media could sway voter sentiment through fake news, which the technology makes far easier to produce, as well as through deepfakes and more.

Additionally, there have been claims that ChatGPT, one of the most popular AI chatbots, has a left-leaning political bias, though these claims are widely disputed in the AI community and academia.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change


TikTok, Snapchat, OnlyFans and others to combat AI-generated child abuse content

Major social platforms, AI companies, governments and NGOs issued a joint statement pledging to combat AI-generated abusive content, such as explicit images of children.

A coalition of major social media platforms, artificial intelligence (AI) developers, governments and non-governmental organizations (NGOs) has issued a joint statement pledging to combat abusive content generated by AI.

On Oct. 30, the United Kingdom issued the policy statement, which has 27 signatories, including the governments of the United States, Australia, Korea, Germany and Italy, along with social media platforms Snapchat, TikTok and OnlyFans.

It was also signed by the AI platforms Stability AI and Ontocord.AI, as well as a number of NGOs working on internet safety and children’s rights, among others.

The statement says that while AI offers “enormous opportunities” in tackling threats of online child sexual abuse, it can also be utilized by predators to generate such types of material.

It cited data from the Internet Watch Foundation showing that, of 11,108 AI-generated images shared in a dark web forum over a one-month period, 2,978 depicted content related to child sexual abuse.

Related: US President Joe Biden urges tech firms to address risks of AI

The U.K. government said the statement stands as a pledge to “seek to understand and, as appropriate, act on the risks arising from AI to tackling child sexual abuse through existing fora.”

“All actors have a role to play in ensuring the safety of children from the risks of frontier AI.”

It encouraged transparency around plans for measuring, monitoring and managing the ways AI can be exploited by child sexual offenders, and called on countries to build policies on the topic.

Additionally, it aims to maintain a dialogue around combating child sexual abuse in the AI age. This statement was released in the run-up to the U.K. hosting its global summit on AI safety this week.

Concerns over child safety in relation to AI have been a major topic of discussion in the face of the rapid emergence and widespread use of the technology.

On Oct. 26, 34 states in the U.S. filed a lawsuit against Meta, the Facebook and Instagram parent company, over child safety concerns.

Magazine: AI Eye: Get better results being nice to ChatGPT, AI fake child porn debate, Amazon’s AI reviews


Meta chief AI scientist says AI won’t threaten humans

Yann LeCun, chief AI scientist at Meta, said labeling AI an existential risk is “premature” and called the idea that AI might kill off humanity “preposterous.”

The chief artificial intelligence (AI) scientist at Meta has spoken out, reportedly saying that worries over the existential risks of the technology are still “premature,” according to a Financial Times interview.

On Oct. 19, the FT quoted Yann LeCun as saying that premature regulation of AI technology would reinforce the dominance of Big Tech companies and leave no room for competition.

“Regulating research and development in AI is incredibly counterproductive,” he said. LeCun believes regulators are using the guise of AI safety for what he called “regulatory capture.”

Since the AI boom took off following the release of OpenAI’s chatbot ChatGPT in November 2022, various thought leaders in the industry have proclaimed threats to humanity at the hands of AI.

Dr. Geoffrey Hinton, known as the “godfather of AI,” left his machine learning position at Google so that he could “talk about the dangers of AI.”

Dan Hendrycks, director of the Center for AI Safety, tweeted back in May that mitigating the risk of extinction from AI should become a global priority on par with “other societal-scale risks such as pandemics and nuclear war.”

Related: Forget Cambridge Analytica — Here’s how AI could threaten elections

However, on the same topic, LeCun said in his latest interview that the idea is “preposterous” that AI will kill off humanity.

“The debate on existential risk is very premature until we have a design for a system that can even rival a cat in terms of learning capabilities, which we don’t have at the moment.”

He also claimed that current AI models are not as capable as some claim, saying they don’t understand how the world works and are not able to “plan” or “reason.”

LeCun expects that AI will eventually help manage people’s everyday lives, saying that “everyone’s interaction with the digital world will be mediated by AI systems.”

Nonetheless, fears surrounding the power of the technology remain a concern among many. The AI task force advisor in the United Kingdom has warned that AI could threaten humanity within two years.

Magazine: ‘Moral responsibility’: Can blockchain really improve trust in AI?


MultiversX eyes metaverse scalability as CEO sheds light on spatial computing

The MultiversX CEO discussed the recent interest of Meta and Apple in the metaverse domain and analyzed their approaches.

The metaverse attracted a lot of interest from the crypto community and venture capital firms during the peak of the previous bull market. The likes of Meta and Apple joining the metaverse bandwagon only gave more legitimacy to the concept, but the two multibillion-dollar tech firms take quite different approaches toward it.

On one hand, Meta shifted its focus to virtual reality (VR) and recently released new smart glasses in partnership with Ray-Ban. Apple, on the other hand, took a spatial computing approach, focusing more on augmented reality (AR), and unveiled its own AR headset earlier this year.

Beniamin Mincu, CEO of blockchain-based metaverse platform MultiversX, believes Apple’s spatial computing approach is better suited to the metaverse goal than Meta’s VR quest. In an exclusive interview with Cointelegraph editor Zhiyan Sun, Mincu said Meta’s focus on virtual reality could be a mistake, as it isn’t as intuitive, while Apple’s spatial computing approach makes its headset a more intuitive experience.

He explained that Meta’s glasses are fixated on a particular virtual world, while the concept of the metaverse is more about an interactive experience within that virtual world. The glasses focus on only one use case, rather than multiple ones:

“I think the most fundamental one that changes the conversation is viewing a lens or an interface as a spatial computing device. I think this is a very underrated paradigm shift that Apple has introduced. So this is why spatial computing, it seems like it's the same thing, which is a different world.”

Spatial computing refers to the processes and tools used to capture, process and interact with three-dimensional data, and can encompass the Internet of Things, digital twins, ambient computing, augmented reality, virtual reality, AI and physical controls. It is broadly defined as human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces.

Related: The Sandbox co-founder explains how the metaverse has evolved for brands: Web Summit 2022

Mincu added that MultiversX’s (formerly Elrond) technical upgrade, which went live on Oct. 19, will align the platform well with the spatial computing approach and make it more scalable. The upgrade brings key features to the platform, including early block proposals, parallel node processing, consensus signature checks and dynamic gas cost improvements.

These technical upgrades promise to increase transactional throughput by 7X with faster confirmation times and shorter finality. Among other notable changes, the new upgrade will bring on-chain governance, a new and enhanced virtual machine, and an improved relayed transaction model which would allow tokens operating on the network to cover gas costs.


Meta makes progress towards AI system that decodes images from brain activity

The system is entirely non-invasive and could have near-term applications for some.

In an Oct. 18 blog post, Meta AI unveiled a new artificial intelligence (AI) system designed to decode imagery from human brain waves. The system combines a non-invasive brain scanning method called magnetoencephalography (MEG) with an AI model.

[Image: A typical MEG imaging machine. Public domain, source: NIMH/Wikipedia]

The work builds on the company’s previous research decoding letters, words and audio spectrograms from intracranial recordings.

According to a Meta blog post,

“This AI system can be deployed in real time to reconstruct, from brain activity, the images perceived and processed by the brain at each instant.”

A post from the AI at Meta account on X, formerly Twitter, showcased the real-time capabilities of the model through a demonstration depicting what an individual was looking at and how the AI decoded their MEG-generated brain scans.

It’s worth noting that, despite the progress shown, this experimental AI system requires pre-training on an individual’s brainwaves. In essence, rather than training an AI system to read minds, the developers train the system to interpret specific brain waves as specific images. There’s no indication that this system could produce imagery for thoughts unrelated to pictures the model was trained on.
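Meta’s post describes training per-participant models that map brain activity to image representations. As a hypothetical sketch of that pre-training step, and not Meta’s published architecture, the PyTorch snippet below aligns an assumed MEG encoder with frozen embeddings of the viewed images using a CLIP-style contrastive loss; all module names, shapes and hyperparameters are made up for illustration.

```python
# Hypothetical sketch: align an MEG encoder's output with embeddings of the
# images a participant was viewing. Shapes, names and hyperparameters are
# illustrative assumptions, not Meta's published code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MEGEncoder(nn.Module):
    """Maps a window of MEG sensor data (sensors x timesteps) to an embedding."""
    def __init__(self, n_sensors=272, n_times=180, dim=768):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_sensors, 256, kernel_size=5, padding=2),
            nn.GELU(),
            nn.Conv1d(256, 256, kernel_size=5, padding=2),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the time axis
            nn.Flatten(),
            nn.Linear(256, dim),
        )

    def forward(self, meg):            # meg: (batch, sensors, timesteps)
        return F.normalize(self.net(meg), dim=-1)

def contrastive_loss(meg_emb, img_emb, temperature=0.07):
    """InfoNCE: each MEG window should match its own image embedding."""
    logits = meg_emb @ img_emb.t() / temperature
    targets = torch.arange(len(logits), device=logits.device)
    return F.cross_entropy(logits, targets)

encoder = MEGEncoder()
opt = torch.optim.AdamW(encoder.parameters(), lr=3e-4)

# Stand-ins for a real dataset: a participant's MEG recordings paired with
# embeddings of the viewed images from a frozen, pretrained vision model.
meg = torch.randn(32, 272, 180)
img_emb = F.normalize(torch.randn(32, 768), dim=-1)

loss = contrastive_loss(encoder(meg), img_emb)
loss.backward()
opt.step()
print(f"loss: {loss.item():.3f}")
```

In a full pipeline, the decoded embedding would then condition an image generator to reconstruct what the participant saw; keeping the image-side embeddings frozen gives the brain-side encoder a stable target.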

Meta AI notes that this is early work and that further progress is expected. The team has framed the research as part of the company’s ongoing initiative to unravel the mysteries of the brain.

Related: Neuralink gets FDA approval for ‘in-human’ trials of its brain-computer interface

And while there is no current reason to believe that such a system could invade someone’s privacy, given today’s technological limitations, there is reason to believe it could provide a quality-of-life upgrade for some individuals.

“We're excited about this research,” read a post by the Meta AI team on X, adding that they “hope that one day it may provide a stepping stone toward non-invasive brain-computer interfaces in a clinical setting that could help people who have lost their ability to speak.”



Saudi Arabia and China collaborate on Arabic-based AI system

A university in Saudi Arabia has collaborated with two Chinese universities to create an Arabic-focused AI system called AceGPT.

The King Abdullah University of Science and Technology (KAUST) in Saudi Arabia has collaborated with two Chinese universities to create an Arabic-focused artificial intelligence (AI) system. 

The large language model (LLM), called AceGPT, is built on Meta’s Llama 2. It was launched by a Chinese-American professor at KAUST in collaboration with the School of Data Science at the Chinese University of Hong Kong, Shenzhen (CUHKSZ) and the Shenzhen Research Institute of Big Data (SRIBD).

According to the project’s GitHub page, the model is designed to function as an AI assistant for Arabic speakers and answer queries in Arabic. However, the disclaimer notes it may not produce “satisfactory results” in other languages.
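Checkpoints of this kind are typically loaded through Hugging Face Transformers. The snippet below is a hypothetical usage sketch: the model ID is an assumption inferred from the project’s GitHub organization, so consult the repository for the actual published checkpoint names.

```python
# Hypothetical usage sketch via Hugging Face Transformers. The model ID
# "FreedomIntelligence/AceGPT-7B" is an assumption based on the project's
# GitHub organization; check the repo for the actual published checkpoints.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/AceGPT-7B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Ask a question in Arabic ("What is artificial intelligence?")
prompt = "ما هو الذكاء الاصطناعي؟"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```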

Additionally, the developers said the model has been enhanced to recognize possible types of misuse, including mishandling sensitive information, producing harmful content, perpetuating misinformation or failing safety checks.

However, the project also cautioned users to be responsible in their use of the model, as it has not undergone exhaustive safety checks.

“We have not conducted an exhaustive safety check on the model, so users should exercise caution. We cannot overemphasize the need for responsible and judicious use of our model.”

AceGPT is said to have been created from open-source data and data crafted by the researchers.

Related: Saudi Arabia looks to blockchain gaming and Web3 to diversify economy

This development comes as Saudi Arabia continues to make efforts to become a regional leader in emerging technologies such as AI. In July, the central bank of Saudi Arabia collaborated with the Hong Kong Monetary Authority on tokens and payments.

Prior to that, in February the Saudi government partnered with the Sandbox metaverse platform to accelerate future metaverse plans.

In August, U.S. regulators told AI chipmaker Nvidia and its rival AMD to curb exports of the high-level semiconductor chips used to develop AI to “some” Middle Eastern countries, without specifying which.

However, U.S. regulators have since denied explicitly blocking AI chip exports to the Middle East region.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change
