
Meta Platforms fined $14 million for Onavo privacy issues: Report

An Australian court found that Meta's Onavo app was marketed as a privacy protection tool while quietly harvesting user data for advertising purposes.

An Australian court has reportedly ordered Meta Platforms, the owner of Facebook, to pay fines amounting to 20 million Australian dollars ($14 million) for collecting user data through a smartphone application, Onavo. 

According to a Reuters report, the Federal Court of Australia has directed Meta, along with its subsidiaries Facebook Israel and the discontinued app, to reimburse $270,356 (A$400,000) in legal costs to the Australian Competition and Consumer Commission (ACCC). The ACCC initiated the civil lawsuit against Meta, alleging that Onavo was promoted as a privacy protection tool, but failed to openly reveal its data collection methods.

Facebook used Onavo to collect users' location data, the time and frequency they used other smartphone apps, and the websites they visited, for its own advertising purposes, Judge Wendy Abraham said in a written judgment, according to the report.

Related: Alibaba to support Meta’s AI model Llama: Report

Meta reportedly stated that the ACCC had recognized it did not intend to mislead customers, and emphasized its efforts over the past few years to develop tools that give users greater transparency and control over how their data is used.

The imposed fine marks the conclusion of one aspect of Meta's legal challenges in Australia concerning its management of user data, Reuters said. This legal matter emerged amid a scandal involving Meta's association with data analytics firm Cambridge Analytica during the 2016 United States election.

However, Meta's legal woes are not over yet, as it is reportedly also facing a civil court action by Australia's Office of the Information Commissioner regarding its dealings with Cambridge Analytica specifically in Australia.

Cointelegraph reached out to Meta for more information but had not received a response by the time of publication.

Magazine: Is the Metaverse really turning out like ‘Snow Crash’?

AI, Blockchain Integration Can Boost Trust, Prevent Misuse, Expert Says

Google hit with lawsuit over new AI data scraping privacy policy

A week after Google updated its privacy policy to allow data scraping for AI training purposes, the company is facing a class-action lawsuit.

Google is facing a lawsuit over its recent privacy policy update, with plaintiffs accusing the tech giant of misusing large amounts of data, including copyrighted material, in artificial intelligence (AI) training.

The lawsuit was filed on July 11 by eight individuals who claim to represent “millions of class members,” internet users and copyright holders whose privacy and property rights were allegedly violated by Google’s recent updates to its privacy policy.

In their opening statement, the plaintiffs accuse Google of “harvesting data in secret” to build its AI products without consent.

“It has very recently come to light that Google has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans.”

Google’s privacy policy changes now allow it to take data that is publicly available to use for artificial intelligence (AI) training purposes.

The lawsuit argues that Google’s decision not only violates rights but also gives it an “unfair advantage” over competitors that lawfully obtain or purchase data to train AI. Ryan Clarkson of Clarkson Law Firm, the plaintiffs’ attorney, stated in the suit:

“Google must understand, once and for all: it does not own the internet, it does not own our creative works, it does not own our expressions of our personhood, pictures of our families and children, or anything else simply because we share it online.”

The plaintiffs argued that “publicly available” does not and has never entailed that it is “free to use for any purpose.”

Related: OpenAI pauses ChatGPT’s Bing feature, as users were jumping paywalls

According to the lawsuit, Google could potentially owe upwards of $5 billion in damages. The plaintiffs also requested a court order requiring Google to obtain users’ explicit permission first.

This includes allowing users to opt out of its “illicit data collection,” along with the ability to delete existing data or to provide “fair compensation” to the data’s owners.

Earlier this week, author and comedian Sarah Silverman, together with two other authors, filed a lawsuit against ChatGPT maker OpenAI and Meta for their use of copyrighted work without permission in AI training. 

Prior to that, OpenAI was hit with another lawsuit over alleged data scraping.

Magazine: Super Mario: Crypto Thief, Sega blockchain game, AI games rights fight — Web3 Gamer


Arkham CEO rebuts claims of ‘snitch-to-earn’ program, says it’s to find bad actors

Arkham CEO Miguel Morel said public blockchains are "probably the worst possible way of keeping one's private information private.”

The chief of the startup blockchain intelligence platform Arkham has refuted claims by the crypto community that its new “Intel Exchange” is a “snitch-to-earn” or “dox-to-earn” system.

During a July 11 Twitter Space, Arkham CEO Miguel Morel discussed the public relations debacle that has unfolded this week over its marketplace.

Arkham’s Intel Exchange aimed to “deanonymize the blockchain” by rewarding users with a new token, ARKM, for revealing the identities behind otherwise anonymous blockchain addresses. It was launched on Binance Launchpad as a token sale this week.

The platform rapidly generated a lot of criticism on Crypto Twitter and was dubbed a “snitch-to-earn” system.

Morel disagreed with these claims and justified the platform saying it was designed to uncover scammers and hackers behind crypto exploits.

“Publicly available blockchains are probably the worst possible way of keeping one's private information private,” he said before adding that Arkham would retain control of the data:

“It's not a completely free market. So it's not like anybody can just post any piece of information and then it can go online.”

“There are a bunch of restrictions and guidelines, all of which we will be rolling out,” he added.

Morel stated that the primary focus of its info exchange is uncovering trading firms, market makers, exchanges and very large institutions.

He added these large hedge funds and trading entities are “making money off of information about who's buying and selling large positions of a particular token.”

Related: Crypto hacks and exploits snatch over $300M in Q2 2023

Another participant in the Twitter Space pointed out that Arkham has a responsibility to prevent abuse and may facilitate false accusations by so-called “crypto detectives.” Morel, however, maintained that the platform will be properly governed.

“Thankfully, it'll actually be more vetted and more regulated than something like Twitter or Facebook because every bounty needs to be approved.”

This raised even more concerns from TV host Ran Neuner who said, “my issue is not with the system. My issue is with your company managing the data.”

Arkham came under fire this week for leaking user emails via its weblink referrals program, which embedded an easily decipherable string of characters in referral links that revealed the referring email address.

Magazine: Should crypto projects ever negotiate with hackers? Probably


48% fewer new crypto coders last year – developer report

Newcomers account for the highest percentage of developers that have left the industry over the past 12 months.

The number of new developers entering the cryptocurrency sector has dropped by nearly 50 percent over the past year, according to research from Electric Capital’s Developer Report.

The latest gauge of the state of the cryptocurrency developer ecosystem indicates that long-term coders who have worked in the industry for over a year commit more code and work more days than developers who have left.

According to the data, the cryptocurrency ecosystem has an estimated 21,300 monthly active open source developers as of June 1. The space has seen a 22% decline in the number of developers since June 2022.

The caveat is that the developers who exited the space are classified as “newcomers” who worked in the industry for less than a year. Their departure is less significant considering that they were responsible for less than 20% of all code commits over the past 12 months.

Related: Searches for ‘AI jobs’ in 2023 are 4x higher than ‘crypto jobs’ when BTC hit $69K

Long-term cryptocurrency developers who’ve worked in the industry for more than a year are responsible for over 80% of committed code.

The Developer Report estimates that some 7,700 newcomer developers have left the space since June 2022. The number of emerging developers, those who have worked in the industry for up to two years, has increased by 1,650, while established developers with over two years of experience in the cryptocurrency space have increased by 150.

The report notes that the decline in newcomer developers is due to fewer coders exploring work in the cryptocurrency space. This has been exacerbated by an ongoing bear market which has suppressed wider cryptocurrency markets.

Source: Electric Capital Developer Report

The analysts also suggest that while 2023’s retention of new developers has been significantly lower than in 2022 and 2021, the trend is not “abnormal” across a longer time frame.

“If we look at cohort retention analysis starting from 2015, we see that developers who join during bear markets leave faster.”

Newcomer developers typically enter the cryptocurrency sector around market peaks. There was a 70% dominance of newcomer developers six months after January 2018’s cryptocurrency market peak. This was followed by a 60% newcomer dominance in the six months following the November 2021 market all-time high.

Meanwhile emerging and established developers tend to dominate the sector when the cryptocurrency space enters bear market territory.

The second half of 2022 saw a spate of layoffs across the cryptocurrency industry as companies looked to downsize in response to tough market conditions. The industry then saw a decline in layoffs from February 2023, according to market research conducted by Cointelegraph.


Magazine: Make 500% from ChatGPT stock tips? Bard leans left, $100M AI memecoin: AI Eye


Ethereum NFT royalties hit two-year low as Bored Ape floor price falls below 30 ETH

NFT royalties act as an important gauge of incoming revenues to fund ongoing development of various projects in the ecosystem.

Royalties earned by nonfungible token (NFT) projects have reached their lowest point in two years, according to a report from blockchain analytics firm Nansen.

Data shared with Cointelegraph highlights the low point for NFT royalties before the impact of a recent drop in the floor price of Bored Ape Yacht Club NFTs as well as controversy surrounding the launch of the Azuki Elementals collection.

April 2022 saw the peak of NFT royalties, with NFT creators bagging an estimated $75.7 million in royalties in a single week. According to Nansen’s data, BAYC creator Yuga Labs has earned a total of $165.5 million in royalties across its portfolio of NFT collections.

Related: Planet of the Bored Apes: BAYC’s success morphs into ecosystem

RTFKT has earned a total of $79.9 million in royalties from its collections, which include the likes of CloneX. Azuki has scored $58.2 million from its Azuki, Beanz, Elemental Beans and Elementals collections.

Proof, the studio behind Moonbirds, netted $35 million in revenues, while Doodles has made $27.4 million from its Doodles, Space Doodles, Genesis Box and Dooplicator collections. Pudgy Penguins’ revenue amounts to $8.3 million across its Pudgy Penguins, Lil Pudgys and Pudgy Rods drops.

Nansen highlights the importance of NFT royalties as an indicator of a studio’s financial foundation for ongoing development, given their role in generating revenue.

NFT marketplace OpenSea had been primarily responsible for distributing royalty payments to NFT projects up until 2023. The report notes that this trend changed once rival marketplace Blur implemented a policy which required a minimum of 0.5% royalties unless projects opted out or enforced full percentages.

OpenSea gave buyers the choice to pay royalties unless projects had opted out or imposed their specific percentage:

“Currently, OpenSea and Blur are on par with each other when it comes to the royalties paid through their respective marketplaces, with more royalties paid on Blur when the trading volume surges.”
Total royalties paid to NFT projects. (Source: Nansen’s NFT Trends dashboard)
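The marketplace policies described above boil down to a simple rate calculation. The sketch below illustrates Blur's 0.5% minimum; the function name and the omission of opt-out and full-enforcement cases are simplifying assumptions, not either marketplace's actual fee logic.

```python
def royalty_due(sale_price_eth: float, creator_rate: float,
                marketplace_min: float = 0.005) -> float:
    """Royalty owed on a sale when the marketplace enforces a floor rate,
    as Blur's 0.5% minimum does. Opt-outs and full-enforcement cases are
    omitted for brevity."""
    rate = max(creator_rate, marketplace_min)
    return sale_price_eth * rate

# A 30 ETH sale with a 0.1% creator rate still pays the 0.5% floor: 0.15 ETH.
print(royalty_due(30.0, 0.001))
```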

Nansen’s data also reveals that the top 10 NFT collections have earned more than $345 million in royalties. Yuga Labs’ $150 million in royalties makes up 44% of the top 10. Interestingly, just 20 NFT projects have earned more than $10 million in royalties to date.

Magazine: Ordinals turned Bitcoin into a worse version of Ethereum: Can we fix it?


Google updates its privacy policy to allow data scraping for AI training

The latest updates to Google’s privacy policy reveal that Google may use any public information available to train its various AI products and services.

Google has updated its privacy policy to allow it to take any publicly available data and use it for artificial intelligence (AI) training purposes.

The update to the company’s privacy policy came on July 1 and can be compared to previous versions of the policy via a link published on the site’s update page.

In the latest version, the changes add Google’s AI models, Bard and Cloud AI capabilities, to the list of services it may train using “information that’s publicly available online” or from “other public sources.”

The updated Google policy conditions (in green) as of July 1, 2023. Source: screenshot 

The policy update makes clear to the public and Google’s users that anything publicly uploaded online could be used in the training of the current and future AI systems the company develops.

This update from Google comes shortly after OpenAI, the developer of the popular AI chatbot ChatGPT, was hit with a class-action lawsuit in California over allegedly scraping private information from users via the internet.

That suit claimed OpenAI used data from millions of social media comments, blogs, Wikipedia and other personal information to train ChatGPT without first getting consent, and alleged that this violated the copyrights and privacy rights of millions of internet users.

Related: US vice president gathers top tech CEOs to discuss dangers of AI

Twitter’s recent limits on the number of tweets users can access, which depend on account verification status, have fueled speculation that the limits were imposed partly in response to AI data scraping.

Twitter’s developer documentation states that the rate limits were imposed to manage the volume of requests made to Twitter’s application programming interface (API).

Elon Musk, the owner and former CEO of Twitter recently tweeted about the platform “getting data pillaged so much that it was degrading service for normal users.”

Magazine: AI Eye: 25K traders bet on ChatGPT’s stock picks, AI sucks at dice throws, and more


Crypto Twitter has a persistent ‘fake followers’ problem, data reveals

Shiba Inu had the greatest number of fake followers at 10.26%, or 80,000 accounts, while Avalanche and Polygon followed with 8.14% and 7.58% fake accounts, respectively.

Despite changes introduced by Twitter management since Elon Musk’s takeover, the issue around fake followers remains a persistent problem. As many as 10% of the followers of accounts belonging to crypto influencers and companies are fake, new data from dappGambl has revealed.

In April 2023, Musk introduced Twitter Blue — an $8 monthly subscription for verification — to increase the platform’s revenue while making it financially inviable for bots and fake accounts to operate. However, months later, dappGambl’s investigation found that up to 10% of followers from the most followed crypto accounts are fake.

When it comes to the official accounts of cryptocurrency tokens and ecosystems, Shiba Inu (SHIB) had the highest number of fake followers at 10.26% or 80,000 accounts, while Avalanche (AVAX) ranked second with 8.14% fake followers, followed by Polygon (MATIC) with 7.58% or 73,000 fake accounts.

dappGambl suspected that the relationship between Twitter accounts and their fake followers is dependent on the popularity of the tokens. By analyzing the social sentiment behind crypto accounts, dappGambl found that:

“Dai (DAI) is the most loved (popular) coin on Twitter whilst XRP (XRP) is the most hated (unpopular).”

Generally, the crypto community on Twitter sees DAI as the “future of money” while it tends to associate XRP with scams, states dappGambl.

When it comes to crypto influencers and entrepreneurs, Samson Mow boasts the highest percentage of fake followers among his total following. Mow is currently being followed by 26,000 fake accounts that represent 10% of his total following on Twitter.

Twitter co-founder Jack Dorsey has 560,000 (8.62%) fake followers, while El Salvador President Nayib Bukele and Ethereum co-founder Vitalik Buterin each have nearly 6.5% fake followers among their total counts.

Other prominent figures with substantial fake followers include MicroStrategy co-founder Michael Saylor (6.16%), Binance CEO Changpeng ‘CZ’ Zhao (5.58%) and Tesla CEO Elon Musk (4.76%) among others.

Based on the total number of followers, over 6.7 million fake accounts currently follow Musk as he tries to eradicate the very problem. Methods to identify fake accounts include checking when the account was created, investigating the profile picture, bio and tweets sent out by the account, and examining the account’s followers and following lists.
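The identification methods listed above can be combined into a rough scoring heuristic. The sketch below is a minimal illustration with invented thresholds and field names; it is not dappGambl's actual methodology.

```python
from datetime import date

def fake_follower_score(account: dict, today: date = date(2023, 7, 12)) -> int:
    """Score an account on common bot signals; higher means more bot-like.
    All thresholds are illustrative assumptions, and `today` is frozen
    for reproducibility."""
    score = 0
    # Very young accounts are more likely to be fake.
    if (today - account["created"]).days < 30:
        score += 1
    # A missing profile picture and an empty bio are common bot traits.
    if not account.get("has_profile_picture"):
        score += 1
    if not account.get("bio"):
        score += 1
    # Accounts that tweet nothing yet follow many others look automated.
    if account.get("tweet_count", 0) == 0 and account.get("following", 0) > 500:
        score += 1
    return score

bot_like = {"created": date(2023, 7, 1), "has_profile_picture": False,
            "bio": "", "tweet_count": 0, "following": 2000}
print(fake_follower_score(bot_like))  # 4: trips all four heuristics
```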

Related: Elon Musk imposes ‘rate limit’ on Twitter, citing extreme ‘system manipulation’

A popular Twitter bot that goes by the name of “Explain This Bob” was recently suspended after Musk called it a scam.

As Cointelegraph previously reported, the bot was created by Prabhu Biswal from India, who used OpenAI’s GPT-4 model to comprehend and provide responses to tweets from those who tagged the account.


Is blockchain technology ready for high-storage applications?

What does the blockchain space need to enable the development of high-storage apps?

Web3 — the third generation of the internet — refers to a decentralized and distributed version of the web that uses blockchain technology, and other decentralized technologies, to enable greater user control, privacy and data ownership. It aims to redefine how we interact with digital services, moving from traditional centralized models to decentralized peer-to-peer networks.

At its core, Web3 is built on blockchain technology, which is a distributed ledger that maintains a cryptographically-secured, continuously growing list of records called blocks. This decentralized nature enables direct peer-to-peer interactions.

Web3 brings several key features and capabilities with the potential to revolutionize high-storage applications. Examples of high-storage applications include content delivery networks (CDNs) to host images and other visual media, online gaming platforms, and blockchain-based websites.

A single server distribution scheme (left) versus a CDN distribution scheme (right).

Unlike traditional centralized systems, Web3 ensures that no single entity has complete control or ownership over data. This decentralized approach makes the data resistant to censorship, manipulation, or single-point-of-failure risks, thereby enhancing data integrity and availability.

Harrison Hines, CEO and Co-founder of Fleek — a decentralized development platform — told Cointelegraph, “The well-designed protocols powering Web3 ensure decentralization through their network architecture, cryptography and token-economic incentive system.” He added:

“The benefits of this approach largely center around being trustless, permissionless, tamper-proof and censorship-resistant. These are increasingly important problems/issues, especially on corporate-owned Web2 cloud platforms, and Web3 does a great job addressing them.”

Ankur Banerjee, chief technology officer at Cheqd — a decentralized payments and identity platform — also weighed in, telling Cointelegraph, “Focusing specifically on decentralization, it provides resiliency away from single providers. There have historically been lots of outages due to cloud providers failing, e.g., only a week ago, Microsoft Outlook was down, and in January, Outlook, Teams, and 365 were all down, which shows the danger of centralization. Facebook’s global outage in 2021 took down not just their services, but large parts of the rest of the web which relied on Facebook’s ad tracking and log in.”

Another significant aspect of Web3 is interoperability. Blockchains work independently of each other, but there are interoperability protocols that aim to connect different blockchain networks. For example, cross-chain bridges allow users to transfer assets from one blockchain to another. If leveraged correctly, interoperability can play a role in developing high-storage applications by making them accessible on multiple blockchain networks.

Web3 incorporates distributed file systems, such as the InterPlanetary File System (IPFS) and Swarm, to provide secure and scalable storage solutions for high-storage applications. These distributed file systems break down files into smaller chunks, distribute them across multiple nodes and utilize content-based addressing. In addition, by ensuring data redundancy and efficient retrieval, they enhance the reliability and performance of storage systems.
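The chunking and content-based addressing described above can be sketched in a few lines. This is a toy illustration only: real IPFS produces multihash-based CIDs and Merkle-DAG links, not the bare SHA-256 digests and tiny chunk size used here.

```python
import hashlib

CHUNK_SIZE = 4  # tiny for illustration; IPFS defaults to 256 KiB chunks

def chunk_and_address(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Split data into chunks and derive a content address for each.
    A sketch of content-based addressing, not the real IPFS format."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    addresses = [hashlib.sha256(c).hexdigest() for c in chunks]
    # A root address over the ordered chunk addresses identifies the whole file.
    root = hashlib.sha256("".join(addresses).encode()).hexdigest()
    return root, addresses, dict(zip(addresses, chunks))

root, addresses, store = chunk_and_address(b"hello world!")
# Any node holding the chunks can rebuild the file by fetching each address.
rebuilt = b"".join(store[a] for a in addresses)
assert rebuilt == b"hello world!"
```

Because the address is derived from the content itself, any node can verify a fetched chunk by re-hashing it, which is what makes replication across untrusted nodes safe.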

For example, Fleek enables users to build websites by hosting their files using the IPFS protocol. When a website is deployed on the network, users get an IPFS hash, and the websites are archived to Filecoin. Users have software development kits and graphical user interfaces to interact with the storage infrastructure.

Magazine: Peter McCormack’s Real Bedford Football Club puts Bitcoin on the map

Moreover, Web3 enables the use of smart contracts. Smart contracts are self-executing contracts with predefined rules and conditions encoded within the blockchain. They facilitate trustless and automated interactions, allowing high-storage applications to enforce rules, handle transactions, and manage access control for data storage and retrieval.

Web3 also introduces tokenization, where digital assets or tokens represent ownership or access rights. In high-storage applications, tokenization can incentivize participants to contribute their storage resources. Users can earn tokens by sharing unused storage space, creating a cost-effective and scalable decentralized network. Tokenization adds an economic layer to the storage ecosystem, encouraging active participation and resource sharing.
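As a toy illustration of that economic layer, the sketch below pays nodes in proportion to contributed storage. The reward rate and node names are invented for the example and do not reflect any live protocol.

```python
# Toy accounting for token rewards proportional to contributed storage.
# The reward rate and node names are invented for this example.
REWARD_PER_GB_DAY = 0.01  # tokens earned per gigabyte shared per day

def storage_rewards(contributions: dict[str, float], days: int) -> dict[str, float]:
    """Pay each node tokens in proportion to the storage it contributed."""
    return {node: gb * days * REWARD_PER_GB_DAY
            for node, gb in contributions.items()}

payout = storage_rewards({"node-a": 100.0, "node-b": 250.0}, days=30)
# node-a: 100 GB * 30 days * 0.01 = 30 tokens; node-b earns 75 tokens.
```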

Web3’s potential for high-storage applications lies in its decentralized nature, interoperability, distributed file systems, smart contracts and tokenization mechanisms. These features provide a secure, scalable, and incentivized infrastructure for storing and retrieving large volumes of data.

What blockchain tech needs to be ready

In its current form, blockchain technology faces scalability challenges when handling large amounts of data. Traditional blockchain architectures like Bitcoin and Ethereum have limited throughput and storage capacities. 

To support high-storage applications, blockchain networks need to enhance their scalability. This can be achieved by implementing solutions like sharding, layer-2 protocols or sidechains. These techniques enable parallel processing of transactions and data, effectively increasing the capacity and performance of the blockchain network.
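A minimal sketch of the sharding idea: transactions are routed to shards by hashing the sender's address, so disjoint shards can be processed in parallel. The shard count and hashing scheme are illustrative assumptions, not any production protocol.

```python
import hashlib

NUM_SHARDS = 4  # illustrative; real networks negotiate shard counts dynamically

def shard_for(sender: str, num_shards: int = NUM_SHARDS) -> int:
    """Deterministically map a sender to a shard by hashing its address,
    so each shard can process a disjoint set of transactions in parallel.
    A toy sketch, not a specific sharding protocol."""
    digest = hashlib.sha256(sender.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

senders = ["alice", "bob", "carol", "dave"]
assignment = {s: shard_for(s) for s in senders}
# The mapping is stable: the same sender always lands on the same shard.
assert shard_for("alice") == assignment["alice"]
```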

High-storage applications require efficient utilization of storage resources. Therefore, blockchain networks need to optimize data storage to reduce redundancy and improve storage efficiency. Techniques such as data compression, deduplication, and data partitioning can be employed to minimize storage requirements while maintaining data integrity and availability.

Banerjee noted, “Blockchains aren’t directly used to store heavy files since this would be a non-optimal way of storing and distributing them. Many use cases that require storing large amounts of data achieve this by storing a cryptographic hash or proof on the chain, and storing the file on decentralized storage (like IPFS, Swarm, Ceramic, etc.), or even centralized storage.” He added:

“That way, the ‘heavier’ files don’t need to be split and stored in blocks, and are available in a form most optimized for distributing large files fast, while ensuring they are tamper-proof by checking against the hash. A good example of this in action is the Sidetree protocol, which uses a combination of IPFS and Bitcoin for storage.”
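The hash-on-chain, file-off-chain pattern Banerjee describes can be sketched with two simulated stores. The function names and the in-memory "chain" and "file store" are stand-ins for illustration, not a real ledger or IPFS client.

```python
import hashlib

# Simulated stores: the "chain" holds only small digests; bulky files
# live off-chain in an IPFS-like content store.
chain: list[str] = []
file_store: dict[str, bytes] = {}

def publish(document: bytes) -> str:
    """Store the blob off-chain and anchor its hash on-chain."""
    digest = hashlib.sha256(document).hexdigest()
    file_store[digest] = document
    chain.append(digest)
    return digest

def fetch_verified(digest: str) -> bytes:
    """Retrieve the blob and prove it is untampered by re-hashing it
    against the on-chain anchor."""
    blob = file_store[digest]
    assert digest in chain, "no on-chain anchor"
    assert hashlib.sha256(blob).hexdigest() == digest, "tampered file"
    return blob

anchor = publish(b"large media file")
assert fetch_verified(anchor) == b"large media file"
```

This is why the heavy file never needs to be split into blocks: the chain only guarantees integrity, while distribution is handled by whatever store is fastest.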

Data availability is crucial for high-storage applications. Blockchain networks must ensure that storage nodes are consistently online and accessible to provide data retrieval services. Incentives and penalties can be incorporated to encourage storage nodes to maintain high availability. Additionally, integrating distributed file systems like IPFS or Swarm can enhance data availability by replicating data across multiple nodes.

Fleek’s Hines told Cointelegraph, “Scalability is still an issue that all Web3 storage protocols need to work on, and it’s an issue we are specifically addressing with Fleek Network. Regarding IPFS and Swarm specifically, I’d put IPFS in a category of its own. In contrast, Swarm is more similar to Filecoin, Arweave, etc., in that those protocols guarantee the storage of files/data,” adding:

“IPFS, on the other hand, does not guarantee the storage of files/data. A better way to think about IPFS is more similar to HTTP, meaning its primary use is for content addressing and routing.”

Hines even believes that IPFS can potentially replace the HTTPS protocol: “In the future, we see IPFS being used on top of all storage protocols and eventually replacing HTTP, for the simple reason that content addressing makes more sense than location-based addressing (IP address) for the internet and its growing global user base.”

“For the other storage protocols like Filecoin, Arweave, Swarm, etc., they guarantee security through their network architecture, cryptography and token-economic incentive system.”

Since high-storage applications often deal with sensitive data, data privacy and security are paramount. Blockchain networks need to incorporate robust encryption techniques and access control mechanisms to protect stored data. Privacy-focused technologies, such as zero-knowledge proofs or secure multiparty computation, can be integrated to enable secure, private data storage and retrieval.
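One simple building block in this space is a salted hash commitment: only a digest is published, and the owner can later prove what the hidden value was. The sketch below is neither encryption nor a zero-knowledge proof, just a minimal commitment scheme for illustration.

```python
import hashlib
import secrets

def commit(value: bytes) -> tuple[str, bytes]:
    """Commit to a value without revealing it: publish only the salted hash.
    The salt prevents guessing attacks on low-entropy values."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + value).hexdigest()
    return digest, salt  # digest can be stored on-chain; salt kept private

def reveal_and_verify(digest: str, salt: bytes, value: bytes) -> bool:
    """Later, the owner reveals salt and value, and anyone can re-check."""
    return hashlib.sha256(salt + value).hexdigest() == digest

digest, salt = commit(b"sensitive record")
assert reveal_and_verify(digest, salt, b"sensitive record")
assert not reveal_and_verify(digest, salt, b"tampered record")
```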

Blockchain networks can provide cost-effective storage solutions by building decentralized storage networks or implementing token-based economies. In addition, blockchain networks can create a distributed, cost-efficient storage infrastructure by incentivizing individuals or organizations to contribute their unused storage resources.

Interoperability is crucial for high-storage applications that involve data integration from various sources and systems. Therefore, blockchain networks must promote interoperability between blockchains and external systems. Standards and protocols, such as cross-chain communication protocols or decentralized oracles, can enable seamless integration of data from different sources into the blockchain network.

Effective governance and consensus mechanisms are essential for blockchain networks that handle large volumes of data. Transparent and decentralized governance models, such as on-chain voting or decentralized autonomous organizations (DAOs), can be implemented to make collective decisions regarding storage-related policies and upgrades.

Efficient consensus algorithms like proof-of-stake (PoS) or delegated proof-of-stake (DPoS) can be adopted to achieve faster, more energy-efficient consensus for data storage transactions. Improving the user experience is also crucial for blockchain technology in high-storage applications.
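The core of PoS block production is stake-weighted sampling: a validator's chance of proposing the next block is proportional to its stake. The sketch below is a toy model with invented validators and a fixed random seed, not any chain's actual algorithm.

```python
import random

def pick_validator(stakes: dict[str, float], rng: random.Random) -> str:
    """Stake-weighted selection: the probability of proposing the next
    block is proportional to stake. A toy PoS sketch."""
    total = sum(stakes.values())
    r = rng.uniform(0, total)
    acc = 0.0
    for validator, stake in stakes.items():
        acc += stake
        if r <= acc:
            return validator
    return validator  # guard against floating-point edge cases

rng = random.Random(42)  # fixed seed so the sketch is reproducible
stakes = {"alice": 60.0, "bob": 30.0, "carol": 10.0}
picks = [pick_validator(stakes, rng) for _ in range(1000)]
# alice holds 60% of stake, so she should win roughly 600 of 1,000 rounds.
```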

The complexity and technicality associated with blockchain should be abstracted away to provide a user-friendly interface and seamless integration with existing applications. In addition, tools, libraries, and frameworks that simplify the development and deployment of high-storage blockchain applications should be readily available.

Recent: EU’s new crypto law: How MiCA can make Europe a digital asset hub

High-storage applications may need to adhere to specific regulatory requirements, such as data protection regulations or industry-specific compliance standards. Therefore, blockchain networks must provide features and mechanisms that allow compliance with such regulations.

This can include built-in privacy controls, auditability features, or integration with identity management systems to ensure regulatory compliance while utilizing blockchain-based storage.

In summary, to be ready for high-storage applications, blockchain must address several key features, including security and cost-efficiency. By overcoming these challenges and incorporating the necessary improvements, blockchain technology can provide a robust, scalable infrastructure for high-storage applications.


AI automation could take over 50% of today’s work activity by 2045: McKinsey

Management consulting firm McKinsey & Co believes AI will have the “biggest impact” on high-wage workers.

In just 22 years, generative AI may be able to fully automate half of all work activity conducted today, including tasks related to decision-making, management, and interfacing with stakeholders, according to a new report from McKinsey & Co.

The prediction came in the management consulting firm’s June 14 report, which forecasts that 75% of generative AI value creation will come from customer service operations, marketing and sales, software engineering, and research and development positions.

The firm explained that recent developments in generative AI have “accelerated” its “midpoint” prediction by nearly a decade, from 2053 (its 2016 estimate) to 2045.

McKinsey explained that its broad range of 2030-2060 was made to encompass a range of outcomes — such as the rate at which generative AI is adopted, investment decisions and regulation, among other factors.

Its previous range for 50% of work being automated was 2035-2070.

McKinsey’s new predicted “midpoint” time at which automation reaches 50% of time on work-related activities has accelerated by eight years to 2045. Source: McKinsey

The consulting firm said, however, the pace of adoption across the globe will vary considerably from country to country:

“Automation adoption is likely to be faster in developed economies, where higher wages will make it economically feasible sooner.”
Early and late scenario midpoint times for the United States, Germany, Japan, France, China, Mexico and India. Source: McKinsey.

Generative AI systems now have the potential to automate work activities that absorb 60-70% of employees’ time today, McKinsey estimated.

Interestingly, the report estimates generative AI will likely have the “biggest impact” on high-wage workers applying a high degree of “expertise” in the form of decision making, management and interfacing with stakeholders.

The report also predicts that generative AI will add between $2.6 trillion and $4.4 trillion to the world economy annually, with the market worth a whopping $15.7 trillion by 2030.

This would provide enormous economic value on top of non-generative AI tools in mainstream use today, the firm said:

“That would add 15 to 40 percent to the $11.0 trillion to $17.7 trillion of economic value that we now estimate nongenerative artificial intelligence and analytics could unlock.”

Generative AI systems are capable of producing text, images, audio and videos in response to prompts by receiving input data and learning its patterns. OpenAI’s ChatGPT is the most commonly used generative AI tool today.

McKinsey’s $15.7 trillion prediction by 2030 is more than a three-fold increase in comparison to its $5 trillion prediction for the Metaverse over the same timeframe.

Related: The need for real, viable data in AI

However, the recent growth of generative AI platforms hasn’t come without concerns.

The United Nations recently highlighted “serious and urgent” concerns about generative AI tools producing fake news and information on June 12.

Meta CEO Mark Zuckerberg received a grilling from United States senators over a “leaked” release of the firm’s AI tool LLaMA, which the senators claim is potentially “dangerous” and could be used for “criminal tasks.”

Magazine: AI Eye: ‘Biggest ever’ leap in AI, cool new tools, AIs are the real DAOs


Vitalik Buterin: Ethereum ‘fails’ without these 3 important ‘transitions’

Layer-2 scaling, wallet security and privacy-preserving features are all necessary to secure Ethereum’s future, according to the Ethereum co-founder.

Ethereum co-founder Vitalik Buterin believes the success of Ethereum will come down to three major technical “transitions” that need to happen almost simultaneously — layer-2 scaling, wallet security and privacy-preserving features.

In a June 9 post on his personal blog, Buterin explained that the Ethereum blockchain outright “fails” without sufficient scaling infrastructure to make transactions cheap.

“Ethereum fails because each transaction costs $3.75 ($82.48 if we have another bull run), and every product aiming for the mass market inevitably forgets about the chain and adopts centralized workarounds for everything,” he said.

Another point of failure, according to Buterin, is around wallet security as it relates to smart contract wallets. 

He explained that a move to smart contract wallets has added more complexity for users wishing to obtain the same address across Ethereum and various layer-2s.

Buterin said this issue stands for both Ethereum Virtual Machine (EVM)-equivalent and non-equivalent layer-2s:

“Even when you can have hash equivalence, the possibility of wallets changing ownership through key changes creates other unintuitive consequences.”
Ethereum needs to improve its layer-2 scalability, wallet security and privacy features, according to Buterin. Source: Vitalik Buterin’s website

In addition to wallets securing crypto assets, Buterin explained that wallets would need to secure data in order to truly transition into an on-chain world with zero-knowledge rollups:

“In a ZK world, however, this is no longer true: the wallet is not just protecting authentication credentials, it's also holding your data.”

The last of Buterin’s three transitions — privacy — will need to come in the form of improved identity, reputation and social recovery systems.

“Without the third, Ethereum fails because having all transactions (and POAPs, etc) available publicly for literally anyone to see is far too high a privacy sacrifice for many users, and everyone moves onto centralized solutions that at least somewhat hide your data,” he said.

The Ethereum co-founder suggested that stealth addresses could be implemented to resolve this issue.
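Stealth address schemes let a sender derive a fresh one-time address for a recipient from a public “meta-address,” so outside observers cannot link payments on-chain to the recipient. The following is a toy sketch of the underlying Diffie-Hellman idea, using a multiplicative group in place of the secp256k1 elliptic-curve arithmetic real Ethereum proposals use; all names and parameters here are illustrative, not any production scheme:

```python
import hashlib
import secrets

# Toy group parameters (illustration only; real stealth addresses
# operate on secp256k1 curve points, not this multiplicative group).
P = 2**255 - 19  # a large prime modulus
G = 2            # group generator

def keygen():
    """Return a (secret, public) key pair in the toy group."""
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

# Recipient publishes a meta-address: a scan key and a spend key.
scan_sk, scan_pk = keygen()
spend_sk, spend_pk = keygen()

# Sender: pick an ephemeral key, derive a shared secret against the
# scan key, and compute a one-time stealth address from the spend key.
eph_sk, eph_pk = keygen()
shared = pow(scan_pk, eph_sk, P)
tweak = int.from_bytes(hashlib.sha256(str(shared).encode()).digest(), "big") % P
stealth_addr = (spend_pk * pow(G, tweak, P)) % P

# Recipient: recompute the same shared secret from the sender's
# ephemeral public key and check the payment is addressed to them.
shared_r = pow(eph_pk, scan_sk, P)
tweak_r = int.from_bytes(hashlib.sha256(str(shared_r).encode()).digest(), "big") % P
assert (spend_pk * pow(G, tweak_r, P)) % P == stealth_addr
```

Only the ephemeral public key appears alongside the payment, so a third party cannot connect the one-time address back to the recipient’s published meta-address without the scan secret.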

Related: Vitalik Buterin reveals 3 ‘huge’ opportunities for crypto in 2023

Buterin said that achieving all three will be “challenging” because of the “intense coordination” involved between them.

He admitted that each of the three transitions “weaken” the “one user — one address” model, which, in turn, may complicate the way transactions are executed.

“If you want to pay someone, how will you get the information on how to pay them?”

“If users have many assets stored in different places across different chains, how do they do key changes and social recovery?” he added.

Buterin concluded by stressing the need to build infrastructure that ultimately improves user experience:

“Despite the challenges, achieving scalability, wallet security, and privacy for regular users is crucial for Ethereum's future. It is not just about technical feasibility but about actual accessibility for regular users. We need to rise to meet this challenge.”

Magazine: ZK-rollups are ‘the endgame’ for scaling blockchains, Polygon Miden founder
