
EU mulls more restrictive regulations for large AI models: Report

Negotiators in the EU are reportedly considering additional restrictions for large AI models, like OpenAI’s GPT-4, as a component of the forthcoming AI Act.

Representatives in the European Union are reportedly negotiating a plan for additional regulations on the largest artificial intelligence (AI) systems, according to a report from Bloomberg. 

The European Commission, European Parliament and the various EU member states are said to be in discussions regarding the potential effects of large language models (LLMs), including Meta’s Llama 2 and OpenAI’s GPT-4, and possible additional restrictions to be imposed on them as a part of the forthcoming AI Act.

Bloomberg reports that sources close to the matter said the goal is not to overburden new startups with too many regulations while keeping larger models in check.

According to the sources, the agreement reached by negotiators on the topic is still in the preliminary stages.

The AI Act and the new proposed regulations for LLMs would be a similar approach to the matter as the EU’s Digital Services Act (DSA).

The DSA, recently implemented by EU lawmakers, requires platforms and websites to meet standards for protecting user data and to scan for illegal activities. However, the web’s largest platforms are subject to stricter controls.

Companies in this category, such as Alphabet Inc. and Meta Platforms Inc., had until Aug. 28 to update their service practices to comply with the new EU standards.

Related: UNESCO and Netherlands design AI supervision project for the EU

The EU’s AI Act is poised to be one of the first sets of mandatory rules for AI put in place by a Western government. China has already enacted its own set of AI regulations, which came into effect in August 2023.

Under the EU’s AI regulations, companies developing and deploying AI systems would need to perform risk assessments and label AI-generated content, and would be banned outright from using biometric surveillance, among other requirements.

However, the legislation has not been enacted yet and member states still have the ability to disagree with any of the proposals set forth by parliament.

Since China implemented its AI laws, more than 70 new AI models have reportedly been released in the country.

Magazine: The Truth Behind Cuba’s Bitcoin Revolution: An on-the-ground report

$113B Asset Manager Files to Launch XRP ETF in US Amid Shifting Crypto Policies

Meta refutes claims of copyright infringement in AI training

In a lawsuit brought by Sarah Silverman and other authors, Meta claims its AI system does not create copyright-infringing material.

Meta has refuted claims that its artificial intelligence (AI) model Llama was trained using copyrighted material from popular books.

In court on Sept. 18, Meta asked a San Francisco federal judge to dismiss claims made by author Sarah Silverman and a host of other authors, who say it violated the copyrights of their books to train its AI system.

The Facebook and Instagram parent company called the use of the materials to train its systems “transformative” and “fair use.”

“Use of texts to train LLaMA to statistically model language and generate original expression is transformative by nature and quintessential fair use..."

It continued by pointing out a conclusion in another related court battle, “much like Google’s wholesale copying of books to create an internet search tool was found to be fair use in Authors Guild v. Google, Inc., 804 F.3d 202 (2d Cir. 2015).” 

Meta said the “core issue” of copyright fair use should be taken up again “another day, on a more fulsome record.” The company said the plaintiffs couldn’t explain what “information” they were referring to, nor could they point to specific outputs related to their material.

The authors’ attorneys said in a separate statement on Sept. 19 that they are “confident” their claims will be upheld and will continue to proceed through “discovery and trial.”

OpenAI also attempted to dismiss parts of the claims back in August on similar grounds to those Meta is currently proposing.

Related: What is fair use? US Supreme Court weighs in on AI’s copyright dilemma

The original lawsuit against Meta and OpenAI was filed in July and was one of many lawsuits popping up against Big Tech giants over copyright and data infringement amid the rise of AI.

On Sept. 5, a pair of unnamed engineers filed a class-action lawsuit against OpenAI and Microsoft over the methods they allegedly used to scrape private data while training their respective AI models.

In July, Google was sued on similar grounds after it updated its privacy policy. The lawsuit accused the company of misusing large amounts of data, including copyrighted material, in its own AI training.

Magazine: AI Eye: Real uses for AI in crypto, Google’s GPT-4 rival, AI edge for bad employees


Crypto Biz: PayPal rolls out crypto ramps, Franklin Templeton joins BTC ETF race, and more

This week’s Crypto Biz looks at PayPal’s crypto gateway, Franklin Templeton’s BTC ETF filing, Coinbase’s Lightning Network integration, and Meta’s plans for a new AI model.

Without aggressive marketing tactics, fintech giant PayPal is quietly and consistently venturing deeper into the crypto space, rolling out features and building key partnerships to advance its digital assets strategy.

This week, PayPal unveiled new on-ramps and off-ramps for cryptocurrencies for its clients in the United States — a noteworthy step for the country, particularly as many crypto firms struggle with supporting fiat-crypto conversions since the United States Securities and Exchange Commission began its controversial crackdown on the industry.

Also deepening ties with the crypto ecosystem is the traditional asset manager firm Franklin Templeton. The company filed for a spot Bitcoin (BTC) exchange-traded fund (ETF) in the U.S., joining a long list of major investment firms seeking approval for a Bitcoin investment product, including names such as BlackRock, Fidelity and WisdomTree, among many others.

With new participants joining the digital assets world daily, it’s evident that crypto is becoming more mainstream, even in the face of a long bearish market.


PayPal enables US users to sell cryptocurrency via MetaMask wallet

PayPal continues expanding its digital asset services, integrating new methods to sell cryptocurrencies like Bitcoin. The payments giant introduced on Sept. 11 new on- and off-ramps for Web3 payments, allowing users in the U.S. to convert their crypto to U.S. dollars directly from their wallets into their PayPal balance. The off-ramp feature is available to wallets, decentralized applications and nonfungible token marketplaces and is live on MetaMask. The latest rollout came shortly after PayPal partnered with hardware wallet firm Ledger to provide a new on-ramp integration in August 2023, allowing verified users in the U.S. to buy Bitcoin, Ether (ETH), Bitcoin Cash (BCH) and Litecoin (LTC) directly on a Ledger hardware wallet.

Promotional video for PayPal's on- and off-ramps. Source: YouTube

Franklin Templeton files for spot Bitcoin ETF

Asset manager Franklin Templeton applied with the U.S. SEC on Sept. 12 to launch a spot Bitcoin ETF. The S-1 registration statement comes after the SEC delayed decisions on spot ETF applications from WisdomTree, Valkyrie, Fidelity, VanEck, Bitwise and Invesco on Aug. 31 and a court ruling on Aug. 29 that the SEC must consider Grayscale’s application to convert its BTC futures ETF into a spot ETF. According to the application, the fund would be structured as a trust. Coinbase would custody the BTC, and Bank of New York Mellon would be the cash custodian and administrator. Franklin Templeton has $1.5 trillion in assets under management.

Meta is building an AI model to rival OpenAI’s most powerful system

Social media giant Meta is developing a new artificial intelligence (AI) model that will rival OpenAI’s most advanced version. According to sources from The Wall Street Journal, the new model will be “several times” more powerful than its Llama 2 model, which Meta released earlier this year. Llama 2’s largest version has 70 billion parameters, and while OpenAI hasn’t disclosed GPT-4’s parameter count, it’s estimated to be around 1.5 trillion. Meta’s new system will be open-source, allowing other companies to build AI tools to produce high-level text, analysis and other types of output with it. The company has also been building the data centers necessary to create such a high-level system while acquiring more of Nvidia’s H100 semiconductor chips.

Coinbase to integrate Bitcoin Lightning Network: CEO Brian Armstrong

Crypto exchange Coinbase has confirmed its decision to integrate layer-2 payment protocol Lightning Network (LN) as users seek faster and cheaper Bitcoin transactions. Until recently, major crypto exchanges, including Coinbase and Binance, had no intent to adopt the layer-2 solution, as many community members argued that LN integration offered fewer incentives for exchanges’ income. Brian Armstrong, CEO of Coinbase, asked the crypto community to be patient during the integration process. LN was created to help solve Bitcoin’s scalability problem and to compete against projects promising faster and cheaper transactions. The decision comes a month after Viktor Bunin, a protocol specialist at Coinbase, started investigating the feasibility of LN integration.

Crypto Biz is your weekly pulse on the business behind blockchain and crypto, delivered directly to your inbox every Thursday.


Meta ‘ruined’ the term metaverse, but now it’s evolving: Yuga Labs CEO

While Meta’s Horizon Worlds is suffering from a low user base, metaverse platforms have focused on building, says Yuga Labs CEO Daniel Alegre.

Big Tech player Meta gave the metaverse a bad name when it pushed its janky vision to the masses. Luckily, open online virtual worlds have continued to evolve, says Yuga Labs CEO Daniel Alegre.

Speaking to Cointelegraph at Token 2049 in Singapore, Alegre said the problem with the metaverse is that Meta “ruined the term because it said: ‘This is something brand new’” — despite other metaverse platforms already existing.

“I was at Activision Blizzard, we had World of Warcraft. World of Warcraft is a metaverse, Fortnite is a metaverse — so the metaverse is evolving, I think, in very, very positive ways.”

Alegre said the low user base is a core issue for Meta’s Horizon Worlds — but it’s otherwise only useful “if there was a reason to be there.”

“[Users] go in and say ‘Hey, Mark, so cool to see you…So now what?’ It just flopped, there's a huge echo in the room.”

He added that, unlike Horizon Worlds, Yuga’s upcoming Otherside metaverse (in development since at least March 2022, with no official launch date) grew out of its community of nonfungible token holders’ need for a digital space to connect.

“The digital connection is what they’ve asked us to do,” Alegre said. “At its core, [Otherside is] a way for our community to connect digitally in one location.”

So far, Otherside has only been glimpsed through a handful of early access demos and a “vibe check” by a focus group in July. Alegre said Yuga recently conducted another limited experience of Otherside with “core members.”

Otherside’s up-and-running peer The Sandbox has also sought to bring culture online, with its co-founder Sebastien Borget telling Cointelegraph that it’s creating neighborhoods on its platform that mirror countries such as Singapore and Türkiye.

NFTs diverging down “two avenues”

Alegre said he’s also seeing a divergence in how NFTs are being viewed. On one hand, NFTs are being valued purely for their art and history. On the other, they're valued for their community and intellectual property rights.

“Those are two avenues that this is all going down,” he opined.

He compared the use cases between the NFT projects CryptoPunks and Bored Ape Yacht Club (BAYC) — both Yuga-owned properties where holders own the commercial IP — to highlight how holders use them.

CryptoPunks — an early NFT collection — are being exposed to “top museums and collectors” who are starting to see the value of owning the original, according to Alegre.

Related: Shrapnel Web3 shooter won't let US users cash out, thanks to Gensler

Meanwhile, BAYC holders have created a community and Alegre claims “more than 900 holders of Apes are building businesses on top of the Apes.”

Alegre shows a coffee pack emblazoned with a Bored Ape given to him by the owner of the BAYC #9472 NFT. Source: Andrew Fenton/Cointelegraph

He said Yuga was in a similar position to YouTube where its user-generated content (UGC) model allowed businesses to be built around sharing videos on the platform.

“You have media companies based on UGC and creative agencies and advertising. You’re starting to see the same thing evolve with the Bored Ape community.”

“It shows you that NFTs, and NFT ownership if you give it to the community they take it in ways that you can never imagine,” Alegre said. “Both in the offline space as well as the online space.”

Magazine: NFT Collector: Creative AI art, Tomorrowland sells tomorrow’s future


Meta is building AI model to rival OpenAI’s most powerful system

Meta is reportedly in the process of building a new, more powerful and open-source AI model to rival the most powerful systems of its rival OpenAI.

Meta, the parent company of social media platforms Facebook and Instagram, says it’s developing a new artificial intelligence (AI) model that will rival the most advanced model from OpenAI, according to a Wall Street Journal exclusive.

WSJ reported that individuals familiar with the matter said Meta aims for the new AI model to be “several times” more powerful than its Llama 2 model, which it released earlier this year.

For the moment, the WSJ sources say Meta’s plans for the new system are for it to be open-source, and therefore allow other companies to build AI tools to produce high-level text, analysis and other types of output.

The company has also been building the data centers necessary to create such a high-level system while acquiring more of Nvidia’s H100 semiconductor chips, the most powerful and coveted chips currently on the market.

Llama 2’s largest version has 70 billion parameters, and while OpenAI hasn’t disclosed GPT-4’s parameter count, it’s estimated to be around 1.5 trillion.
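For a rough sense of what those parameter counts imply, one can estimate raw weight storage at 16-bit precision (2 bytes per parameter). These are back-of-envelope figures for scale only, not official hardware requirements:

```python
# Back-of-envelope memory footprint of model weights at 16-bit precision.
# Illustrative arithmetic only; real deployments vary with quantization,
# activations and optimizer state.
BYTES_PER_PARAM = 2  # fp16

def weights_gb(num_params):
    """Approximate size of the raw weights in gigabytes."""
    return num_params * BYTES_PER_PARAM / 1e9

llama2_70b = weights_gb(70e9)    # 140.0 GB for 70 billion parameters
gpt4_est = weights_gb(1.5e12)    # 3000.0 GB using the ~1.5 trillion estimate
```

At this precision, the estimated GPT-4 weights would be roughly 20 times larger than Llama 2’s, which illustrates why training and serving such systems demands dedicated data centers.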

Related: Nvidia drops new AI chip expected to cut development costs

The sources said Meta expects training of the large language model (LLM) to begin in early 2024, with the model ready for release sometime next year, likely after Google’s forthcoming LLM, Gemini.

Microsoft is a primary backer of OpenAI and also collaborated with Meta to help make Llama 2 available on Azure, its cloud-computing platform. However, the sources said Meta plans to train its upcoming model on its own infrastructure. 

This development comes as major tech companies and governments are racing to create, deploy and control high-level AI systems. 

Recently, the United Kingdom government announced that it plans to spend $130 million on high-powered chips to create AI systems.

In China, the country’s new AI legislation recently went into effect. Since then, the CEO of Baidu, a major China-based tech company, said more than 70 AI models have been released in the country.

Magazine: AI Eye: Get better results being nice to ChatGPT, AI fake child porn debate, Amazon’s AI reviews


Scientists created ‘OpinionGPT’ to explore explicit human bias — and you can test it for yourself

Due to the nature of the model's tuning data, it's unclear whether this system is actually capable of generating outputs showing real-world bias.

A team of researchers from Humboldt-Universität zu Berlin has developed a large language model (LLM) with the distinction of having been intentionally tuned to generate outputs with expressed bias.

Called OpinionGPT, the team’s model is a tuned variant of Meta’s Llama 2, an AI system similar in capability to OpenAI’s ChatGPT or Anthropic’s Claude 2.

Using a process called instruction-based fine-tuning, OpinionGPT can purportedly respond to prompts as if it were a representative of one of 11 bias groups: American, German, Latin American, Middle Eastern, a teenager, someone over 30, an older person, a man, a woman, a liberal, or a conservative.

OpinionGPT was refined on a corpus of data derived from “AskX” communities, called subreddits, on Reddit. Examples of these subreddits would include “Ask a Woman” and “Ask an American.”

The team started by finding subreddits related to the 11 specific biases and pulling the 25,000 most popular posts from each one. They then retained only those posts that met a minimum threshold for upvotes, did not contain an embedded quote and were under 80 words.
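The filtering step described above can be sketched in a few lines. This is an illustrative reconstruction, not the researchers’ actual code; the field names and the upvote threshold are assumptions.

```python
# Illustrative sketch of the post-filtering step: keep posts with enough
# upvotes, no embedded quote, and fewer than 80 words. The threshold and
# dict keys are assumptions for the example.
MIN_UPVOTES = 100  # assumed cutoff; the paper defines its own value

def filter_posts(posts):
    """Apply the three retention criteria described in the article."""
    kept = []
    for post in posts:
        if post["upvotes"] < MIN_UPVOTES:
            continue  # below the popularity threshold
        if ">" in post["text"]:
            continue  # Reddit marks quoted text with '>'
        if len(post["text"].split()) >= 80:
            continue  # too long
        kept.append(post)
    return kept
```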

With what was left, it appears they used an approach similar to Anthropic’s Constitutional AI. Rather than spin up entirely new models to represent each bias label, they essentially fine-tuned the single 7-billion-parameter Llama 2 model with separate instruction sets for each expected bias.
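Under that approach, the fine-tuning data for each bias group amounts to the same base model paired with group-specific instruction records. A minimal sketch of how such records might be assembled follows; the prompt template is an assumption, not the paper’s exact format.

```python
# Hypothetical construction of per-bias instruction-tuning records for a
# single shared base model. The template wording is invented for illustration.
BIAS_GROUPS = [
    "American", "German", "Latin American", "Middle Eastern",
    "teenager", "person over 30", "older person",
    "man", "woman", "liberal", "conservative",
]

def make_record(group, question, answer):
    """Pair a group-conditioned instruction with a target response."""
    prompt = f"Answer the following question as a {group} would.\n\nQ: {question}"
    return {"instruction": prompt, "response": answer}
```

Each group’s subreddit-derived posts would supply the answers, so the single model learns eleven response styles distinguished only by the instruction prefix.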

Related: AI usage on social media has potential to impact voter sentiment

The result, based on the methodology, architecture and data described in the German team’s research paper, appears to be an AI system that functions more as a stereotype generator than as a tool for studying real-world bias.

Due to the nature of the data the model has been refined on, and that data’s dubious relation to the labels defining it, OpinionGPT doesn’t necessarily output text that aligns with any measurable real-world bias. It simply outputs text reflecting the bias of its data.

The researchers themselves recognize some of the limitations this places on their study, writing:

“For instance, the responses by "Americans" should be better understood as 'Americans that post on Reddit,' or even 'Americans that post on this particular subreddit.' Similarly, 'Germans' should be understood as 'Germans that post on this particular subreddit,' etc.”

These caveats could further be refined to say the posts come from, for example, “people claiming to be Americans who post on this particular subreddit,” as there’s no mention in the paper of vetting whether the posters behind a given post are in fact representative of the demographic or bias group they claim to be.

The authors go on to state that they intend to explore models that further delineate demographics (i.e., liberal Germans, conservative Germans).

The outputs given by OpinionGPT appear to vary between representing demonstrable bias and wildly differing from the established norm, making it difficult to discern its viability as a tool for measuring or discovering actual bias.

Source: Screenshot, Table 2: Haller et al., 2023

According to OpinionGPT, as shown in the above image, for example, Latin Americans are biased towards basketball being their favorite sport.

Empirical research, however, clearly indicates that football (also called soccer in some countries) and baseball are the most popular sports by viewership and participation throughout Latin America.

The same table also shows that OpinionGPT outputs “water polo” as its favorite sport when instructed to give the “response of a teenager,” an answer that seems statistically unlikely to represent most 13- to 19-year-olds around the world.

The same goes for the idea that an average American’s favorite food is “cheese.” We found dozens of surveys online claiming that pizza and hamburgers were America’s favorite foods, but couldn’t find a single survey or study that claimed Americans' number one dish was simply cheese.

While OpinionGPT might not be well-suited for studying actual human bias, it could be useful as a tool for exploring the stereotypes inherent in large document repositories such as individual subreddits or AI training sets.

For those who are curious, the researchers have made OpinionGPT available online for public testing. However, according to the website, would-be users should be aware that “generated content can be false, inaccurate, or even obscene.”


Meta’s assault on privacy should serve as a warning against AI

Facebook was the worst thing to happen to user privacy over the last two decades. Artificial intelligence could be the worst thing to happen in the days ahead.

In an increasingly AI-driven world, blockchain could play a critical role in preventing the sins committed by apps like Facebook from becoming widespread and normalized.

Artificial intelligence platforms such as ChatGPT and Google’s Bard have entered the mainstream and have already been accused of inflaming the political divide with their biases. As foretold in popular films such as The Terminator, The Matrix and most recently, Mission: Impossible — Dead Reckoning Part One, it’s already become evident that AI is a wild animal we’ll likely struggle to tame.

From democracy-killing disinformation campaigns and killer drones to the total destruction of individual privacy, AI can potentially transform the global economy and likely civilization itself. In May 2023, global tech leaders penned an open letter that made headlines, warning that the dangers of AI technology may be on par with nuclear weapons.

Related: Girlfriends, murdered kids, assassin androids — Is AI cursed?

One of the most significant fears about AI is the lack of transparency surrounding its training and programming, particularly in deep learning models, which can be difficult to interpret. Because sensitive data is used to train AI models, the models can be manipulated if that data becomes compromised.

In the years ahead, blockchain will be widely utilized alongside AI to enhance the transparency, accountability and auditability of its decision-making processes.

For instance, when training an AI model using data stored on a blockchain, the data’s provenance and integrity can be ensured, preventing unauthorized modifications. Stakeholders can track and verify the decision-making process by recording the model’s training parameters, updates and validation results on the blockchain.
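A minimal sketch of that provenance idea follows, assuming an append-only chain of SHA-256 digests stands in for an actual blockchain; the function names are invented for illustration.

```python
# Minimal provenance sketch: hash each training artifact (data snapshot,
# parameter update, validation result) and chain the digests so any later
# modification is detectable. A real deployment would anchor these digests
# on an actual blockchain; here the chain lives in memory.
import hashlib

def digest(data: bytes, prev_hash: str) -> str:
    """SHA-256 of the previous digest concatenated with the new artifact."""
    return hashlib.sha256(prev_hash.encode() + data).hexdigest()

def build_provenance(artifacts):
    """Return the chained digests for a sequence of training artifacts."""
    chain, prev = [], "genesis"
    for artifact in artifacts:
        prev = digest(artifact, prev)
        chain.append(prev)
    return chain
```

Because each digest folds in the one before it, tampering with any recorded artifact changes every subsequent hash, which is what lets stakeholders verify that no unauthorized modification occurred.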

With this use case, blockchain will play a leading role in preventing the unintentional misuse of AI. But what about the intentional? That’s a much more dangerous scenario, which, unfortunately, we’ll likely face in the coming years.

Even without AI, centralized Big Tech has historically aided and abetted behavior that profits by manipulating both individuals and democratic values to the highest bidder, as made famous in Facebook’s Cambridge Analytica scandal. In 2014, the “Thisisyourdigitallife” app offered to pay users for personality tests, which required permission to access their Facebook profiles and those of their friends. Essentially, Facebook allowed Cambridge Analytica to spy on users without permission.

The result? Two historic mass-targeted psychological public relations campaigns that had a relatively strong influence on both the outcomes of the United States presidential election and the United Kingdom’s European Union membership referendum in 2016. Has Meta (previously Facebook) learned from its mistakes? It doesn’t look like it.

In July, Meta unveiled its latest app, Threads. Touted as a rival to Elon Musk’s Twitter, it harvests the usual data Facebook and Instagram collect. But — similar to TikTok — when Threads users signed up, they unwittingly gave Meta access to GPS location, camera, photos, IP information, device type and device signals. It’s standard Web2 practice to justify such data collection by claiming that “users agreed to the terms and conditions.” In reality, it would take an average of 76 working days to read every privacy policy for each app used by a typical internet user. The point? Meta now has access to almost everything on the phones of over 150 million users.

In comes AI. If the after-effects of the Cambridge Analytica scandal warranted concerns, can we even begin to comprehend the impacts of a marriage between this invasive surveillance and the godlike intelligence of AI?

The unsurprising remedy here is blockchain, but the solution isn’t as straightforward.

Related: The absurd AI mania is coming to an end

One of the main dangers of AI rests in the data it can collect and then weaponize. Regarding social media, blockchain technology can potentially enhance data privacy and control, which could help mitigate Big Tech’s data harvesting practices. However, it’s unlikely to “stop” Big Tech from taking sensitive data.

To truly safeguard against the intentional dangers of AI and ward off future Cambridge Analytica-like scenarios, decentralized, preferably blockchain-based, social media platforms are required. By design, they reduce the concentration of user data in one central entity, minimizing the potential for mass surveillance and AI disinformation campaigns.

Put simply, through blockchain technology, we already have the tools needed to safeguard our independence from AI at both the individual and national levels.

Shortly after signing the open letter to governments on the dangers of AI in May, OpenAI CEO Sam Altman published a blog post proposing several strategies for responsible management of powerful AI systems. They involved collaboration among the major AI developers, greater technical study of large language models and establishing a global organization for AI safety.

While these measures make a good start, they fail to address the systems that make us vulnerable to AI — namely, the centralized Web2 entities such as Meta. To truly safeguard against AI, more development is urgently required toward the rollout of blockchain-based technologies, namely in cybersecurity, and for a genuinely competitive ecosystem of decentralized social media apps.

Callum Kennard is the content manager at Storm Partners, a Web3 solutions provider based in Switzerland. He’s a graduate of the University of Brighton in England.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.


Meta launches community-licensed AI coding tool to the public

Code Llama is available for both personal and business use under the Llama2 community license agreement.

Meta AI announced the launch of ‘Code Llama,’ a community-licensed artificial intelligence (AI) coding tool built on the Llama2 large language model (LLM), on Aug. 24.

The new tool is a fine-tuned version of Llama 2 that’s been trained specifically to generate and discuss computer code.

According to a blog post from Meta, Code Llama is split into several variants, with one model fine-tuned for general coding in a number of languages (including Python, C++, Java, PHP, TypeScript, C#, Bash and more).

Other models include Code Llama Python and Code Llama Instruct. The former is fine-tuned for Python applications. As Meta puts it, this is to further support the AI community:

“Because Python is the most benchmarked language for code generation — and because Python and PyTorch play an important role in the AI community — we believe a specialized model provides additional utility.”

According to Meta, the emphasis in these first two model variants is on understanding, explaining, and discussing code.

Code Llama Instruct, however, is the fine-tuned version of Code Llama that Meta recommends for actually generating code. According to the blog post, it's been engineered specifically to generate “helpful and safe answers in natural language."

The models are also available in different parameter sizes in order to operate in different environments. Code Llama comes in 7-billion, 13-billion and 34-billion parameter sizes, each with different capabilities.

Meta says the 7B model, for example, can run on a single GPU. While the 13B and 34B models require more substantial hardware and can handle more complex tasks, the smaller models are faster and better suited to tasks requiring low-latency feedback, such as real-time code completion.
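As a rough illustration of matching variants to hardware, a helper might pick the largest model whose 16-bit weights fit in available GPU memory. The sizes and memory figures below are coarse assumptions for the example, not Meta’s official requirements.

```python
# Illustrative helper for choosing a Code Llama variant by GPU memory.
# Weight sizes assume ~2 bytes/parameter (fp16) and ignore activation and
# KV-cache overhead; treat them as rough assumptions.
VARIANTS = [  # (name, parameters in billions, approx. fp16 weights in GB)
    ("7B", 7, 14),
    ("13B", 13, 26),
    ("34B", 34, 68),
]

def pick_variant(gpu_memory_gb):
    """Return the largest variant whose weights fit, or None if none do."""
    best = None
    for name, _params, weights_gb in VARIANTS:
        if weights_gb <= gpu_memory_gb:
            best = name  # list is ordered smallest to largest
    return best
```

On a single 16 GB consumer GPU this sketch would select the 7B model, consistent with Meta’s single-GPU claim, while an 80 GB data center card could host the 34B variant.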

Code Llama is generally available under the same community license agreement as Llama2, meaning it can be used for personal or business use with proper attribution.

This could be a massive boon for businesses and individuals with a high-need use case for coding-focused LLMs, such as fintech institutions that are traditionally underserved by the AI and Big Tech communities.

Web3 innovators, trading bot developers and cryptocurrency exchanges all operate in a constantly shifting environment that, to date, has seen relatively little in the way of dedicated B2B or B2C solutions from Big Tech for day-to-day crypto and blockchain coding problems.

Related: Naver Corp unveils South Korea’s answer to ChatGPT and generative AI

Dedicated coding tools such as GitHub Copilot (built on OpenAI technology) can go a long way toward aiding developers in these underserved areas, but the cost of use can be prohibitive for some users, and the lack of open-source options can pose problems for proprietary software developers.

The existence of a free-to-use, community-licensed alternative based on Meta’s highly touted Llama 2 LLM could help level the playing field for blockchain and crypto projects with small development teams.


Thailand threatens Facebook over crypto scams and other fraudulent ads

Thailand’s digital minister said he would seek a court order to shut Facebook in the country unless it takes action on the alleged scams.

Thailand is planning to seek a court-issued shutdown order against Facebook unless it takes steps to deal with alleged investment and crypto scam ads on its platform.

On Aug. 21, the Ministry of Digital Economy and Society (MDES) stated that over 200,000 people had been duped by Facebook ads touting crypto scams, fake business investments and impersonations of government agencies such as the Securities and Exchange Commission.

Popular tactics used by the scammers included crypto investment and trading scams, MDES claimed. Some ads also allegedly used images of celebrities and well-known financial figures along with promises of up to 30% daily returns to lure people into the schemes.

MDES Minister Chaiwut Thanakamanusorn said the ministry had been in talks with and sent a letter to the Meta-owned platform over the issue, but claimed it has failed to screen advertisers.

Chaiwut Thanakamanusorn at an Aug. 21 press conference regarding the ministry’s planned court action against Facebook. Source: MDES

The ministry is currently gathering evidence of the scam ads, which it said number over 5,300. At the end of the month, it plans to ask a court to shut down Facebook within seven days.

Related: Hong Kong’s crypto stance: Execs weigh in on Web3 in the region

The ministry warned about how such scams typically operate, saying consumers should be wary of promises of high, guaranteed returns and of ads using images of well-known figures.

Investment offers that pressure people to invest quickly or dangle limited-time incentives should also be approached with caution, as should businesses or platforms with no verifiable information.

Cointelegraph contacted Meta but did not immediately receive a response.

Magazine: Web3 Gamer: GTA 6 crypto rumors, Dr Who/Sandbox, Thai tourist NFTs review


India House passes bill to ease Big Tech data compliance

The lower house of India’s parliament approved updates to a bill that would ease data storage, processing and transfer standards for Big Tech companies.

The lower house of India’s parliament voted to approve a bill that will ease data compliance regulations for Big Tech companies, according to a report from Bloomberg.

The legislation, approved by the house on Aug. 7, will ease storage, processing and transfer standards for major global tech companies like Google, Meta and Microsoft, as well as for local firms seeking international expansion.

The Digital Personal Data Protection Bill 2023 targets exports of data sourced from India, allowing companies to do so except to countries prohibited by the government.
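The export rule described above is effectively a default-allow policy with a government blocklist. A hypothetical sketch follows; the country names are invented for illustration and the bill defines no such code.

```python
# Hypothetical illustration of the bill's data-export rule: transfers are
# permitted by default and denied only to government-prohibited destinations.
# The blocklist entries are placeholders, not real designations.
PROHIBITED = {"CountryA", "CountryB"}  # placeholder government blocklist

def export_allowed(destination: str) -> bool:
    """Return True unless the destination is on the prohibited list."""
    return destination not in PROHIBITED
```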

As it currently stands, the bill requires government consent before Big Tech companies collect personal data. It also prevents them from selling that data for purposes not listed in the contract, meaning, for example, no anonymization of personal data for use in artificial intelligence (AI) training.

These updates to the bill would reduce compliance requirements for companies, though the bill must still pass the upper parliamentary house before it is finalized.

India is the world’s most populous country, with hundreds of millions of internet users, which makes it a key market for growth.

Related: Indian Supreme Court raps Union government on crypto rules delay: Report

Concerns over data misuse in the emerging tech industry, particularly from Big Tech companies, have been a growing priority for regulators across the globe.

The rapid emergence of AI as an accessible tool for the general public has caused major concerns among regulators over the way these products collect and utilize user data.

India has also been named as one of the countries collaborating with the Biden administration in the United States to create an international framework for AI.

One recent major development in the emerging tech scene that has raised data collection concerns is the launch of the decentralized digital identity verification protocol Worldcoin.

So far, the project has launched 1,500 of its iris-scanning orbs in countries around the world. India is home to two orbs, in the northern city of Delhi and the southern city of Bangalore, according to the Worldcoin website.

Magazine: Deposit risk: What do crypto exchanges really do with your money?
