
Here’s what the latest Bitcoin price correction reveals

The latest episode of The Market Report analyzes the recent Bitcoin price correction to $26,000 and what it reveals about the current market structure.

In the latest episode of Cointelegraph’s The Market Report, analyst Marcel Pechman delves into Bitcoin’s recent drop to $26,000. Derivatives market analysis shows Bitcoin (BTC) options and futures metrics lack signs of professional traders going bearish, and while that doesn’t guarantee a quick return to $29,000 support, it reduces the chances of an extended correction.

Pechman presents a Kaiko data chart on BTC liquidity and volatility, both of which have significantly decreased since the FTX collapse in November 2022. With no liquidity issues or heightened volatility indicated, the question is whether the 11.4% mid-August price drop, which triggered the largest futures liquidations since November 2022, worsened those conditions.

Bitcoin futures premium settled at a neutral 6% after the recent crash to $26,000, signaling balanced demand between leveraged longs and shorts. This aligns with a neutral -7% to 7% BTC options skew, suggesting reasonably priced downside protection.
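For readers unfamiliar with these derivatives metrics, here is a minimal Python sketch of how an annualized futures premium and an options delta skew are typically computed. The function names and example prices are illustrative assumptions, not figures from Kaiko or Pechman's analysis.

```python
from datetime import date

def annualized_futures_premium(spot: float, futures: float,
                               expiry: date, today: date) -> float:
    """Annualized basis between a futures price and spot, in percent.

    A roughly 5%-10% annualized premium is usually read as neutral;
    readings near zero or negative suggest bearish positioning.
    """
    days = (expiry - today).days
    return ((futures / spot) - 1) * (365 / days) * 100

def delta_skew(put_iv: float, call_iv: float) -> float:
    """Options skew in percentage points: implied volatility of
    protective puts minus equivalent calls. Between -7% and +7%
    is considered neutral; higher means expensive downside protection."""
    return (put_iv - call_iv) * 100

# Illustrative numbers: BTC spot at $26,000, a 3-month future at $26,390
premium = annualized_futures_premium(26_000, 26_390,
                                     date(2023, 11, 24), date(2023, 8, 24))
print(f"{premium:.1f}% annualized premium")  # about 6%, i.e. neutral
```

A 1.5% premium over three months annualizes to roughly 6%, which is why a seemingly small basis can still register as neutral rather than bearish.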

Reviewing another article, Pechman discusses macroeconomic analyst Lyn Alden’s take on a common currency proposal among BRICS nations (Brazil, Russia, India, China and South Africa). Alden doesn’t see it succeeding — a view shared by Pechman. However, Alden notes a weakened United States dollar if BRICS use their own currencies for foreign trade, giving unconventional advice to crypto investors.

Listen to the full episode of The Market Report on the new Cointelegraph Markets & Research YouTube channel, and don’t forget to click “Like” and “Subscribe” to keep up-to-date with all our latest content.

Russia Using Bitcoin to Bypass Sanctions – Is the Global Financial System Cracking?

Hollywood studios offer new proposal for AI and data transparency to curb strike

The Alliance of Motion Picture and Television Producers released a memo of its offer to striking writers and actors, with proposed guidelines for AI usage and data transparency.

The Alliance of Motion Picture and Television Producers (AMPTP) released details of its proposal for striking actors and writers on Aug. 22, which included standards for the entertainment industry surrounding the usage of artificial intelligence (AI) and data transparency.

Under the proposed conditions, generative AI cannot be considered a writer. Therefore, any AI-created material cannot be regarded as literary material or receive intellectual property protection.

The proposal also ensures that material produced by generative AI cannot affect credit, rights and compensation. While companies can use generative AI-created scripts as source material, any writer who reworks the script will be compensated as if they are the original author.

Additionally, any studio or production company seeking a writer’s help in the development of an AI-produced script must disclose the origin of the script.

An initial version of the proposal was released on Aug. 11 but lacked significant information about the key issues raised by the striking parties.

Related: US judge rules in favor of human ingenuity, denies copyright for AI art

Along with updates to AI-related matters, the proposal touches on data transparency issues. Before the proposal, writers rarely had access to metrics produced by their work.

The updated offer would allow viewership data to be made available to writers and presented in quarterly confidential reports. However, for the time being, it would only include subscription video on demand (SVOD) metrics — not advertising or transactional videos.

The AMPTP proposal suggested that:

“This increased transparency will enable the WGA to develop proposals to restructure the current SVOD residual regime in the future.”

The latest developments came on the 114th day of the strike and marked the latest attempt by Hollywood studios to address the incorporation of AI.

On May 3, they rejected requests from the Writers Guild of America to ban AI completely from the writing room.

There were also proposals that suggested background performers should undergo scanning, receive compensation for the initial day of work and then grant companies ownership of the scan, image and likeness. This sparked a wave of backlash from entertainers in the industry.

Nonetheless, the signals of big production companies looking to incorporate AI are apparent. On July 27, Netflix posted AI job listings with salaries reaching up to $900,000.

Magazine: Experts want to give AI human ‘souls’ so they don’t kill us all


Google responds to accusations of ads tracking data of children

After an in-depth report surfaced over YouTube advertisers potentially harvesting data from children, Google responded by saying it has “strict policies” over children’s content.

Google, the parent company of YouTube, responded to a report that suggested YouTube advertisers are sourcing data from children viewing videos on the platform. 

On Aug. 18, a day after the report surfaced, Google published a blog post reiterating its “strict privacy standards around made for kids content,” which is content marked on YouTube as created to be viewed by children.

The Big Tech giant said it has focused on creating kid-specific products like YouTube Kids and supervised accounts.

“We’ve invested a great deal of time and resources to protect kids on our platforms, especially when it comes to the ads they see…”

It also said it launched a worldwide restriction on personalized ads and age-sensitive ad categories for users under 18. Additionally, the post clarified that it does not allow third-party trackers on ads that appear on kids' content.

Nonetheless, Adalytics, a data analysis and transparency platform, on Aug. 17 published the 206-page report alleging that advertisers on YouTube could be “inadvertently harvesting data from millions of children.”

Some of the claims made by the report include the presence of cookies indicating a “breakdown” of privacy, and YouTube creating an “undisclosed persistent, immutable unique identifier” that gets transmitted to servers even on made-for-kids videos, with no clarity on why it is being collected.

Related: Universal Music and Google in talks over deal to combat AI deep fakes: Report

An article from The New York Times also reported on the research from Adalytics, specifically highlighting an instance where an adult-targeted ad from a Canadian bank was shown to a viewer on a video labeled for kids.

Adalytics reported that after that viewer clicked on the ad, tracking software from Google, Meta, Microsoft and other companies was tagged on the user’s browser.

Concerns around Google’s privacy and data collection standards have been raised in recent months, as the company has been releasing more products with artificial intelligence (AI) incorporated.

On July 11, Google was hit with a lawsuit over its new AI data-scraping privacy policy updates, with the plaintiffs saying they represent millions of users whose privacy and property rights were violated by the changes.

Less than a month later, a report was published that analyzed AI-powered extensions for Google’s internet browser Chrome, which said two-thirds could endanger user security.

Most recently, on Aug. 15, Google introduced a series of enhancements for its search engine that incorporate advanced generative AI features.

Magazine: Should we ban ransomware payments? It’s an attractive but dangerous idea


AI is helping expand accessibility for people with disabilities

A holistic approach to empowering lives: how AI redefines the accessibility landscape for people with disabilities.

According to data from the World Health Organization (WHO), more than one billion people are living with some significant disability today. Moreover, with the market for AI-related technologies set to grow to a cumulative valuation of over $2 trillion in the next seven years, it is reasonable to suggest that the marriage of these spaces can help introduce a new era of accessibility.

Transforming the lives of people with speech impediments

A key area where AI is making its presence felt is in supporting people with non-standard speech. Voiceitt is an accessible speech recognition technology company that uses AI and machine learning to assist people with speech impairments.

The tech is designed to recognize and adapt to non-standard speech patterns, thereby enabling clearer communication. The technology is particularly beneficial for individuals with cerebral palsy, Parkinson’s disease or Down syndrome, for whom producing clear speech can be challenging.

As the realm of artificial intelligence has grown, this still-emerging technology has exhibited an ability to improve the quality of life for people living with many kinds of disabilities.

Dr. Rachel Levy, speech-language pathologist and customer success manager at Voiceitt, told Cointelegraph, “The way our technology works is that people input their speech data into our system, and we have a huge database of non-standard speech. So we have held all of this speech data plus the individuals’ speech data that affects their own model.” 

“This means that the technology learns from the individual’s unique speech patterns and uses this information to translate their speech into a form that is easily understood by others,” she added.


Levy further explained how the technology adapts to changing speech patterns, particularly for individuals with degenerative disorders. As these individuals use the tool, Voiceitt continues to record their speech while human annotators transcribe the data to increase recognition accuracy. If there is deterioration in speech intelligibility, the platform can adapt accordingly and retrain its data models to incorporate the new speech patterns.

Voiceitt also has a live captioning capability. This feature allows for real-time speech transcription during video conference calls or live interactions, making conversations more accessible for individuals with speech impediments. Levy demonstrated this feature to Cointelegraph, showing how the technology can transcribe speech into text and even share it on social media or via email.

Enhancing vision

According to a 2023 study by the WHO, more than 2.2 billion people have some sort of vision impairment, and at least one billion of these cases are easily treatable. 

AI-powered imaging tools now have the potential to assist by converting visual data into various kinds of interpretable formats. For instance, tools like OCR.best and Image2TxT are designed to automatically decipher visual cues and convert them into text and audio-based responses.

Similarly, advanced AI models like ChatGPT-4 and Claude 2 have introduced plugins capable of decoding extremely complex information (such as scientific data) contained in images and interpreting it with optical character recognition tools.

Lastly, AI-based image tools can increase and decrease contrast and optimize the resolution quality of images in real time. As a result, individuals with conditions like myopia and hyperopia can alter the resolution of images to suit their visual abilities.
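The contrast adjustment these tools perform can be shown with a minimal sketch. Real accessibility software works per color channel and in real time, but the core arithmetic, scaling each pixel's distance from mid-gray and clamping to the valid range, is the same; this toy function is a simplification, not any vendor's actual code.

```python
def adjust_contrast(pixels, factor):
    """Scale each grayscale pixel's distance from mid-gray (128).

    factor > 1 increases contrast, factor < 1 decreases it; results
    are clamped to the valid 0-255 range.
    """
    return [max(0, min(255, round((p - 128) * factor + 128))) for p in pixels]

row = [100, 128, 180]
print(adjust_contrast(row, 1.5))  # [86, 128, 206] -- higher contrast
print(adjust_contrast(row, 0.5))  # [114, 128, 154] -- lower contrast
```

Pixels already at mid-gray are unchanged, while darker and brighter pixels are pushed further apart or pulled closer together, which is what makes text and edges easier or gentler on the eye.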

Redefining hearing 

As of Q1 2023, the WHO estimates that approximately 430 million people currently have “severe disabling hearing loss,” which accounts for nearly 5% of the global population. Moreover, the research body has indicated that by 2050, over 700 million people — or one in every 10 people — will have disabling hearing loss.

Recent AI-assisted hearing tools have allowed individuals with compromised hearing to obtain live captions and transcripts of audio and video content. For example, Ava is a transcription app that provides the text of any conversation happening around the device. Google’s Live Transcribe offers a comparable service, making everyday conversations more accessible for people with hearing impairments.

Another platform called Whisper harnesses sound separation technology to enhance the quality of incoming speech while reducing background noise to deliver sharper audio signals. The platform also uses algorithms to learn and adapt to a user’s listening preferences over time.

AI-enabled mobility 

The Centers for Disease Control and Prevention notes that a little over 12% of Americans experience mobility issues. 

Recent innovations in AI-enabled mobility assistants have aimed to build upon already existing mobility aids like wheelchairs.

For example, there are now AI-powered wheelchairs that can take audio cues from the user, thus opening up a new dimension of freedom and mobility. Firms like UPnRIDE and WHILL have created products that offer autonomous navigation and movement capabilities.

AI also appears in mobility-focused exoskeletons and prosthetic limbs, improving the autonomy of finer movements in prosthetic arms and boosting the power of electromyography-controlled nerve interfaces for electronic prosthetics.

AI-based systems can actuate and read different nerve inputs simultaneously, improving the overall function and dexterity of the devices.

Stanford University has also developed an exoskeleton prototype that uses AI to improve energy expenditure and provide a more natural gait for users.

Challenges for AI-enabled devices

AI requires the processing of massive data sets to be able to deliver high-quality results. However, in the context of disability, this involves collecting and storing sensitive personal information regarding an individual’s health and physical or cognitive abilities, raising significant privacy concerns. 

In this regard, Voiceitt’s Levy stressed that the platform complies with various data privacy regulatory regimes, like the United States Health Insurance Portability and Accountability Act and the European Union’s General Data Protection Regulation.

She also said it is standard practice to “de-identify all of the speech data separating personal data from audio recordings. Everything is locked in a secure database. We don’t share or sell any of our voice data with anyone unless expressly given permission by the user.”

Second, because AI technology is expensive to develop, building personalized tools for people with specific conditions can be costly and time-consuming. The cost of maintaining and updating these systems is also significant.

Recent: PayPal’s stablecoin opens door for crypto adoption in traditional finance

To this point, Jagdeep Sidhu, CEO of Syslabs — the firm behind SuperDapp, an AI-enhanced platform supporting multilingual voice translation and recognition — told Cointelegraph:

“When it comes to people with visual, auditory, or mobility-related impairments, there is no denying that AI-driven technologies hold incredible potential. That said, one of the most significant hurdles in integrating AI for accessibility lies in the realm of cost. It’s an unfortunate reality that people with disabilities often face steeper costs and challenges to perform everyday tasks compared to those without disabilities.”

As AI and its associated technologies see increased adoption, there is reason to believe that people with disabilities will increasingly explore this space to enhance their lives. 

Even recent legislation across Europe and North America is being tailored to improve accessibility and inclusivity, suggesting that AI will play a crucial role within this realm.


Tether CTO Paolo Ardoino says Bitcoin mining needs better analytical tools

Stablecoin operator Tether is building specialized Bitcoin mining software aimed at using data analytics to optimize mining operations and boost production, CTO Paolo Ardoino says.

Stablecoin issuer Tether (USDT) is building specialized software to optimize Bitcoin mining and renewable energy operations using data analytics, following its recent investments in both areas.

In conversation with Cointelegraph, Tether CTO Paolo Ardoino expanded upon details of its in-development mining software, which aims to deliver improved analytics and performance of mining sites.

Related: Tether’s game plan in El Salvador: Why invest in Volcano Energy?

Moria, named after the dwarven mining kingdom from The Lord of the Rings trilogy, is being built by Ardoino and a team of developers. Tether's CTO had previously shared details of the software in a recent social media post.

Ardoino says that while the ecosystem has a number of cloud-based Bitcoin mining trackers, these lack a high degree of customizability and “deep-level orchestration capabilities,” which has left a gap in the market for a solution that analyzes real-time data to optimize mining and energy outputs.

“So far most software that mining companies use are basic cloud solutions that have a simplified interface that provides an overview of the current status of the bitcoin mining site.”

Ardoino said that having access to deep data sources of an energy production site or a mining site requires complex and efficient analytical tools in order to understand the performance of a site and its surrounding environment.

“If energy used by the mining site is wind or solar, there are optimization parameters, like predicted speed of wind for a specific day or a specific hour of the day, that could be used to overclock some of the miners and boost the production.”
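As a rough illustration of the kind of optimization parameter Ardoino describes, the hypothetical policy below maps a forecast wind speed to a miner overclock multiplier. The thresholds, cap and scaling function are invented for the example; Moria's actual logic has not been published.

```python
def overclock_level(forecast_wind_ms: float, rated_wind_ms: float = 12.0) -> float:
    """Map a forecast wind speed (m/s) to a miner overclock multiplier.

    Hypothetical policy: run at stock clocks (1.0) below the turbine's
    rated wind speed, then scale up toward a 1.2x cap as forecast wind,
    and therefore surplus power, exceeds it.
    """
    if forecast_wind_ms <= rated_wind_ms:
        return 1.0
    surplus = min((forecast_wind_ms - rated_wind_ms) / rated_wind_ms, 0.5)
    return round(1.0 + 0.4 * surplus, 2)

hourly_forecast = [8.0, 11.5, 13.0, 15.0, 18.0]  # m/s, next five hours
print([overclock_level(w) for w in hourly_forecast])  # [1.0, 1.0, 1.03, 1.1, 1.2]
```

The point of such a policy is that overclocking only happens when forecast generation exceeds what the site can otherwise use, turning surplus renewable output into extra hashrate instead of curtailed energy.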

Tether has been actively investing in energy production and Bitcoin mining using a portion of excess reserves of USDT. Ardoino said that ensuring data produced by a variety of devices including miners, containers and electric transformers is recorded, monitored and analyzed in real time is imperative to streamlining operations.

His recent X post expanded on the value to be derived from a Bitcoin mining site made up of thousands of physical mining units stored in multiple containers connected to thousands more devices. The Tether CTO likened a mining site to an IoT project that produces millions of data points.

Ardoino added that the development focus of Moria is currently on its Bitcoin analytical tools, before the software is extended to cover energy production.

“There as well you have solar panels, wind mills etc that provide an incredible amount of information.”

Ardoino describes Moria’s software as a Holepunch-based scalable and modular architecture that is able to collect, aggregate and analyze data from a variety of devices to optimize Bitcoin mining.

Tether recently announced that it would invest $1 billion into El Salvador's Volcano Energy project, directing shareholder profits into energy infrastructure and Bitcoin mining operations. In another extensive interview with Cointelegraph, Ardoino outlined Tether’s reasoning behind the move.

Magazine: ‘Elegant and ass-backward’: Jameson Lopp’s first impression of Bitcoin


Two-thirds of AI Chrome extensions could endanger user security: Data

Data from an Incogni report revealed that 69% of AI extensions for Google Chrome would have a high-risk impact on users’ cybersecurity if they were breached.

Over two-thirds of artificial intelligence (AI)-powered extensions for the Google Chrome browser have a high-risk impact and could be “highly damaging” to user cybersecurity if breached, according to data from a new Incogni report.

The August report analyzed 70 AI Chrome extensions across seven categories, including 10 writing extensions, all of which fell into the high-risk category. In total, 48 of the 70 would have a high-risk impact if breached, yet 60% of the extensions were at low risk of suffering a security breach in the first place.

Source: Incogni report “AI Chrome extensions: convenience vs privacy and security”

Darius Belejevas, head of Incogni, said that while these extensions offer “undeniable convenience,” users should have privacy and security safeguarding as their top priority.

“Understanding the data [users] share with extensions and their reliability in keeping it safe is crucial.”

The report found that 59% of the analyzed extensions collect user data, with 44% of these collecting personally identifiable information (PII). PII includes data such as the user’s name, address and identification number.
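The report's headline figures can be reproduced with simple arithmetic; note that the extension counts below are derived from the stated percentages, not listed in the report itself.

```python
total_extensions = 70
high_risk_impact = 48

# 48 of 70 rounds to 69%, the report's "over two-thirds" headline figure
print(round(high_risk_impact / total_extensions * 100))  # 69

# Counts implied by the report's percentages (derived, not quoted):
collecting = round(total_extensions * 0.59)  # about 41 extensions collect user data
print(collecting)                            # 41
print(round(collecting * 0.44))              # about 18 of those collect PII
```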

“By being cautious in choosing AI Chrome extensions and staying informed about their potential risks,” he said, “users can embrace the benefits of AI while safeguarding their personal information.”

Related: Zoom updates terms after backlash, won’t train AI without consent

The topic of privacy and user data collection and mishandling has become a major concern alongside the rapid rise of accessible AI applications. 

Back in June, Google changed its privacy policy to allow data scraping in order to train its AI systems, though it was quickly met with a class-action lawsuit claiming privacy and property rights were violated by the new changes.

Worldcoin, a decentralized digital identity verification protocol, has been one of the recent major industry developments sparking worries over the management of user data, prompting global regulators to open probes into the protocol’s operations.

Meanwhile, on Aug. 7, the Indian government passed a bill through the lower house of parliament that would ease data compliance regulations for Big Tech companies, such as Google and Meta. 

Magazine: Deposit risk: What do crypto exchanges really do with your money?


Zoom updates terms after backlash, won’t train AI without consent

Many online said they were halting the use of Zoom over terms that seemingly allowed the platform to scrape user data to train AI models.

Video-conferencing platform Zoom has updated its terms of service after widespread backlash over a section concerning AI data scraping, clarifying that it won’t use user content to train AI without consent.

In an Aug. 7 post, Zoom said its terms of service were updated to further confirm it would not use chat, audio, or video content from its customers to train AI without their express approval.

Over the weekend, a number of Zoom users threatened to stop using the platform after discovering terms that purportedly meant the firm would use a wide array of customer content to train AI models.

In the most recent post, Zoom said the AI-related terms were added in March, and reiterated it will not use any customer data for AI training without consent. The terms have now been updated to include a similar clarification:

“Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.”

Zoom’s post explains its AI offerings — a meeting summary tool and a message composer — are opt-in, with account owners or administrators able to control the enablement of the tools.

Before Zoom added clarification to its terms, X (Twitter) users posted their concerns about the AI terms, with many calling for a boycott of Zoom until the terms were updated.

Concern arose over terms where users consented to Zoom’s use, collection, distribution and storage of “Service Generated Data” for any purpose including training AI and machine learning models.

Further terms allowed for Zoom’s right to use customer-generated content for — among other uses — machine learning and AI training and testing.

Related: The absurd AI mania is coming to an end

Other tech companies have also recently updated privacy policies to make room for data scraping to train AI. Google’s policies were updated in July, allowing it to take public data for use in AI training.

Meanwhile, there is growing concern over tech firms’ use of AI and possible privacy implications. In June, European Union consumer protection groups urged regulators to investigate AI models used in chatbots such as OpenAI’s ChatGPT or Google’s Bard.

The groups were concerned over disinformation, data harvesting and manipulation generated by the bots. The EU passed the AI Act on June 14; it is set to take effect within the next two to three years and provides a framework for AI development and deployment.

AI Eye: AIs trained on AI content go MAD, is Threads a loss leader for AI data?


India House passes bill to ease Big Tech data compliance

The lower house of India’s parliament approved updates to a bill that would ease data storage, processing and transfer standards for Big Tech companies.

The lower house of India’s parliament voted to approve a bill that will ease data compliance regulations for Big Tech companies, according to a report from Bloomberg.

The legislation, approved by the house on Aug. 7, will ease storage, processing and transfer standards for major global tech companies like Google, Meta and Microsoft, as well as local firms seeking international expansion.

The Digital Personal Data Protection Bill 2023 targets exports of data sourced from India, allowing companies to do so except to countries prohibited by the government.

As it currently stands, the bill requires government consent before Big Tech companies collect personal data. It also prevents them from selling it for reasons not listed in the contract, meaning no anonymization of personal data for use in artificial intelligence (AI) training, for example.

These updates would reduce compliance requirements for companies, though the bill must still pass the upper house of parliament before it is finalized.

India is the world’s most populous country, with hundreds of millions of internet users, which makes it a key market for growth.

Related: Indian Supreme Court raps Union government on crypto rules delay: Report

Concerns over data misuse in the emerging tech industry, particularly by Big Tech companies, have been a growing priority for regulators across the globe.

The rapid emergence of AI as an accessible tool for the general public has caused major concerns among regulators over the way these products collect and utilize user data.

India has also been named as one of the countries collaborating with the Biden administration in the United States to create an international framework for AI.

One recent major development in the emerging tech scene that has caused concerns over data collection is the launch of the decentralized digital identity verification protocol Worldcoin.

So far, the project has launched 1,500 of its iris-scanning orbs in countries around the world. India is home to two orbs, in the northern city of Delhi and the southern city of Bangalore, according to the Worldcoin website.



Decentralized Web3 data service taps ZK-proofs for tamper-proof SQL queries

Space and Time launches zero-knowledge proof tool for its decentralized database platform.

Decentralized Web3 data service Space and Time has tapped into zero-knowledge proof (ZK-proof) technology to cryptographically verify information queries within its ecosystem.

The company’s proprietary Proof of SQL allows the platform to generate a SNARK cryptographic proof of a query within its decentralized data network, allowing users to trust that a data query is accurate and has not been manipulated.

Space and Time intends for the service to provide tamper-proof on-chain and off-chain data to blockchain services, advanced computing, artificial intelligence and large language models. Space and Time co-founder Jay White told Cointelegraph that the innovation could prove useful across a range of blockchain-based solutions, including financial services, retail, healthcare and gaming:

“We believe that data will enhance the interoperability between the on-chain and off-chain ecosystems, fostering greater collaboration between decentralized and traditional systems.”

Proof of SQL will enable decentralized applications (DApps) to run a query against Space and Time’s data warehouse and create a roll-up of the result to a smart contract. Proof of SQL provides trustless yet verifiable proofs of data that are efficient and privacy-preserving.
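Proof of SQL itself is a SNARK-based system whose internals are proprietary, but the underlying trust model, committing to a dataset once and letting clients check that served query results are consistent with that commitment, can be illustrated with a much simpler Merkle-root sketch. This toy stands in for, and is far weaker than, a real zero-knowledge proof.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(rows: list) -> bytes:
    """Commit to a table by hashing its rows into a single Merkle root."""
    layer = [h(r) for r in rows]
    while len(layer) > 1:
        if len(layer) % 2:            # duplicate the last node on odd layers
            layer.append(layer[-1])
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

# The warehouse commits to its table once and publishes the root...
table = [b"alice,42", b"bob,17", b"carol,99"]
commitment = merkle_root(table)

# ...later, a client re-derives the root from the rows a query returned.
# A matching root means the data was not tampered with; a real SNARK
# additionally proves, succinctly and without revealing the rows,
# that the SQL itself was evaluated correctly over that data.
assert merkle_root([b"alice,42", b"bob,17", b"carol,99"]) == commitment
```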

The firm notes that the service could provide significant value to industries where monetary value is directly linked to data, with financial services a prime example.

Related: Are ZK-proofs the answer to Bitcoin’s Ordinal and BRC-20 problem?

The company also sees potential in the technology to verify that large language models were trained on accurate, tamper-proof data. This could become an important aspect as tools like OpenAI’s ChatGPT become integrated into business processes.

Space and Time’s service includes a blockchain data indexing service that features pre-built Web3 APIs that allow DApps to access real-time data from Ethereum, Polygon, BNB Chain, Sui and Avalanche’s blockchains. The platform also integrates with Chainlink oracles and decentralized storage platforms.

Space and Time’s Proof of SQL is currently being used by credit-scoring blockchain protocol Lendvest. The service calculates an on-chain credit score based on a user’s on-chain and off-chain financial data.

According to Lendvest founder Joshua Gottlieb, Proof of SQL is being used to prove that credit scores are verified, calculated correctly and associated with a correct wallet address.

The system is intended to allow decentralized finance (DeFi) lending platforms to algorithmically establish the risk profile of a user, which is aimed at improving returns for both users and DeFi protocols.


Magazine: Tokenizing music royalties as NFTs could help the next Taylor Swift


French privacy watchdog questions Worldcoin’s data collection method: Report

The French data protection agency CNIL said that it finds the legality of Worldcoin’s collection methods “questionable” as are its conditions for storing the data.

The French data protection agency, also called the Commission Nationale Informatique & Libertés (CNIL), is reportedly questioning the legality of data collection methods conducted by Worldcoin, according to a Reuters report.

In an email to Reuters on July 28, CNIL said:

“The legality of this collection seems questionable, as do the conditions for storing biometric data.”

CNIL also stated in the email to Reuters that it had initiated investigations and has been supporting the efforts of the Bavarian state authority in Germany with its investigation into the subject matter.

Reuters also reported on July 25 that Worldcoin may face inquiries from data regulators in the United Kingdom post-launch. 

Worldcoin was co-founded by Sam Altman, CEO of OpenAI, the company behind the popular artificial intelligence (AI) chatbot ChatGPT, and launched on July 24. The initiative requires users to provide a scan of their iris in exchange for a digital ID and free cryptocurrency.

According to the company’s website, 2.1 million people have already signed up for the project, though mostly during the trial period over the past two years.

Related: Worldcoin launch raises eyebrows as WLD price notches a double-digit gain

The company claimed in a post on X that since its official launch, “a unique human is now verifying their World ID every 7.6 seconds & new records are being set daily.”

Worldcoin has posted photos on X of its orbs in various cities across the world since its launch on Monday, including Seoul, South Korea; Mexico City, Mexico; and Paris, France.

Despite all the hype, Worldcoin has received mixed reactions from the crypto community. Some users have pointed out the potential for failure due to its centralization, while others say proof-of-personhood is necessary given the increasing presence of AI.

Additional reports have surfaced claiming that, after its launch, Worldcoin struggled to recruit new sign-ups, with the three designated locations in Hong Kong seeing only around 200 sign-ups on the first day and about 600 in total.

However, the next day Sam Altman, the company’s co-founder, rebutted the claims by posting a video on X of a long queue of people in Japan waiting to complete iris scans.


Russia Using Bitcoin to Bypass Sanctions – Is the Global Financial System Cracking?