
EU data watchdog warns of ‘hell on Earth’ scenario for US AI companies

Europe's Data Protection Supervisor predicts trouble abroad and at home for U.S. AI companies that run afoul of GDPR.

Europe’s data watchdog, Wojciech Wiewiórowski, predicts a sour predicament for United States-based artificial intelligence (AI) companies currently being investigated for alleged GDPR violations.

“The breathless pace of development means data protection regulators need to be prepared for another scandal,” Wiewiórowski told MIT’s Technology Review during a recent interview, invoking the Cambridge Analytica scandal for reference.

Wiewiórowski’s comments come after a tumultuous week for leading AI outfit OpenAI, creator of the massively popular GPT suite of products and services. ChatGPT has been outright banned in Italy pending further information about the company’s intent and ability to comply with GDPR, with similar actions pending in Ireland, France and Germany.

According to the European Union (EU) data watchdog, OpenAI currently finds itself between a European rock and a U.S. hard place, legally speaking. As regulators in the EU look to crack down, U.S. lawmakers could be eyeing the European prescription as a possible local template:

“The European approach is connected with the purpose for which you use the data. So when you change the purpose for which the data is used, and especially if you do it against the information that you provide people with, you are in breach of law.”

Under this premise, for example, OpenAI could find itself unable to deploy and operate models such as GPT-3.5 and GPT-4 due to how they are designed and trained. GDPR law requires that citizens in the EU be given the ability to opt out of data collection and, in the event a system outputs erroneous data, to have those errors corrected.

However, some experts believe it will be next to impossible for developers to bring GPT and similar large language models (LLMs) in line with GDPR. One reason is that the data they’re trained on is commingled during training, making individual data points inseparable from one another.

Wiewiórowski’s assessment, per the Technology Review article, is that this represents something like a worst-case scenario for companies such as OpenAI that allegedly rushed to deployment without a public plan to address privacy issues such as those regulated by GDPR.

Citing a “big player in the tech market,” the data watchdog quipped that “the definition of hell is European legislation with American enforcement.”

Related: OpenAI’s CTO says government regulators should be ‘very involved’ in regulating AI

OpenAI faces various official inquiries in Europe with deadlines approaching — April 30 in Italy, June 11 in Germany — and it remains unclear how the company intends to approach regulators’ privacy concerns.

Once again caught in the middle are those using products and services built on the back of the GPT API and other large language models who, at this time, can’t be sure how much longer those models will be legally available.

An outright ban under GDPR could have devastating consequences for Europeans using LLMs to power their businesses and individual projects, especially in the fintech market where cryptocurrency exchanges, analysts, and traders have embraced the new technology.

And, in the U.S., where many of the most dominant cryptocurrency and blockchain companies are headquartered, a similar ban could be a massive blow to the financial sector.

As recently as April 25, analysts at financial services company JPMorgan Chase said that at least half of the gains in the S&P 500 Index this year had been driven by ChatGPT.

If the U.S. takes up the European mantle and institutes privacy regulations in line with GDPR, both the traditional and cryptocurrency trading markets could face massive disruption.

Crypto firm pleads guilty to wash trading FBI-made token

All eyes are on stablecoins: Law Decoded, April 10–17

A week before a hearing in Congress, the United States gets a stablecoin regulatory framework.

Last week was spent preparing for April 19, when the United States House Financial Services Committee will hold a hearing on stablecoins. The hearing will include information collected by various federal government agencies over the last year. Among the participants are Jake Chervinsky, the chief policy officer at the Blockchain Association, and Dante Disparte, the chief strategy officer of Circle.

On the eve of the hearing, a new draft bill appeared in the House of Representatives document repository. The draft provides a framework for stablecoins in the United States, putting the Federal Reserve in charge of non-bank stablecoin issuers. According to the document, insured depository institutions seeking to issue stablecoins would fall under the supervision of the appropriate federal banking agency, while non-bank institutions would be subject to Federal Reserve oversight. Failure to register could result in up to five years in prison and a fine of $1 million. Foreign issuers would also have to seek registration to do business in the country.

Among the factors for approval is the ability of the applicant to maintain reserves backing the stablecoins with U.S. dollars or Federal Reserve notes, Treasury bills with a maturity of 90 days or less, repurchase agreements with a maturity of seven days or less backed by Treasury bills with a maturity of 90 days or less, and central bank reserve deposits.
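The reserve-eligibility criteria above amount to a small set of rules. As a hypothetical illustration only (the asset categories, field names and function below are assumptions for this sketch, not language from the draft bill), they could be expressed as:

```python
# Hypothetical sketch of the draft bill's reserve-eligibility criteria,
# as summarized in the article above. Names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ReserveAsset:
    kind: str                          # "usd_cash", "fed_note", "t_bill", "repo", "cb_deposit"
    maturity_days: int = 0             # 0 for cash-like assets
    collateral_maturity_days: int = 0  # for repos: maturity of the backing T-bills

def is_eligible(asset: ReserveAsset) -> bool:
    """Check one reserve asset against the draft's reported criteria."""
    if asset.kind in ("usd_cash", "fed_note", "cb_deposit"):
        # Dollars, Federal Reserve notes and central bank reserve deposits
        return True
    if asset.kind == "t_bill":
        # Treasury bills must mature within 90 days
        return asset.maturity_days <= 90
    if asset.kind == "repo":
        # Repurchase agreements must mature within 7 days and be backed
        # by Treasury bills maturing within 90 days
        return asset.maturity_days <= 7 and asset.collateral_maturity_days <= 90
    return False

print(is_eligible(ReserveAsset("t_bill", 60)))    # True
print(is_eligible(ReserveAsset("repo", 7, 120)))  # False: collateral too long-dated
```

The sketch only encodes the criteria as reported above; the actual bill text would govern edge cases such as partial collateralization.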

SEC targets DeFi in a vote to revisit proposal concerning the definition of ‘exchange’

The U.S. Securities and Exchange Commission (SEC) has announced it will be revisiting the proposed redefinition of an “exchange” under the agency’s rules — a move that could include crypto market participants in decentralized finance (DeFi). Under the proposal, an “exchange” would be more closely defined as a system that “bring[s] together buyers and sellers of securities through structured methods to negotiate a trade” and explicitly include DeFi.

The commission proposed similar amendments in January 2022, keeping the comment period for the public open until June. Some crypto advocacy groups criticized the SEC’s actions at the time, suggesting it was an overreach of the commission’s authority.

Arizona governor vetoes bill targeting taxes on blockchain node hosts

Katie Hobbs, the governor of the American state of Arizona, has vetoed legislation that would have largely stopped local authorities from imposing taxes on individuals and businesses running blockchain nodes. The governor vetoed Arizona Senate Bill 1236, first introduced in January. The legislation aimed to revise sections of statutes pertaining to blockchain technology, mainly reducing or eliminating regulation and taxation of node operators at the state level.

The Italian regulator sets strict guidelines for OpenAI’s ChatGPT

Italy’s data protection agency, known as Garante, has specified the actions that OpenAI must take to revoke an order imposed on ChatGPT. OpenAI must increase its transparency and issue an information notice comprehensively outlining its data processing practices. Additionally, the statement requires OpenAI to implement age-gating measures immediately to prevent minors from accessing its technology and adopt more stringent age verification methods. 

In addition, the regulatory agency mandated that OpenAI allow users to object to processing their data to train its algorithms. Also, OpenAI is required to conduct an awareness campaign in Italy to inform individuals that their information is being processed to train its AIs.

First of many? How Italy’s ChatGPT ban could trigger a wave of AI regulation

A data breach and a lack of transparency led Italy to ban ChatGPT, the popular AI-powered chatbot, sparking a debate on the future of AI regulation and innovation.

Italy has recently made headlines by becoming the first Western country to ban the popular artificial intelligence (AI)-powered chatbot ChatGPT.

The Italian Data Protection Authority (IDPA) ordered OpenAI, the United States-based company behind ChatGPT, to stop processing Italian users’ data until it complies with the General Data Protection Regulation (GDPR), the European Union’s user privacy law.

The IDPA cited concerns about a data breach that exposed user conversations and payment information, the lack of transparency, and the legal basis for collecting and using personal data to train the chatbot.

The decision has sparked a debate about the implications of AI regulation for innovation, privacy and ethics. Italy’s move was widely criticized, with its Deputy Prime Minister Matteo Salvini saying it was “disproportionate” and hypocritical, as dozens of AI-based services like Bing’s chat are still operating in the country.

Salvini said the ban could harm national business and innovation, arguing that every technological revolution brings “great changes, risks and opportunities.”

AI and privacy risks

While Italy’s outright ChatGPT ban was widely criticized on social media channels, some experts argued that the ban might be justified. Speaking to Cointelegraph, Aaron Rafferty, CEO of the decentralized autonomous organization StandardDAO, said the ban “may be justified if it poses unmanageable privacy risks.”

Rafferty added that addressing broader AI privacy challenges, such as data handling and transparency, could “be more effective than focusing on a single AI system.” The move, he argued, puts Italy and its citizens “at a deficit in the AI arms race,” which is something “that the U.S. is currently struggling with as well.”

Recent: Shapella could bring institutional investors to Ethereum despite risks

Vincent Peters, a Starlink alumnus and founder of nonfungible tokens project Inheritance Art, said that the ban was justified, pointing out that GDPR is a “comprehensive set of regulations in place to help protect consumer data and personally identifiable information.”

Peters, who led Starlink’s GDPR compliance effort as it rolled out across the continent, commented that European countries that adhere to the privacy law take it seriously, meaning that OpenAI must be able to articulate or demonstrate how personal information is and isn’t being used. Nevertheless, he agreed with Salvini, stating:

“Just as ChatGPT should not be singled out, it should also not be excluded from having to address the privacy issues that almost every online service needs to address.”

Nicu Sebe, head of AI at artificial intelligence firm Humans.ai and a machine learning professor at the University of Trento in Italy, told Cointelegraph that there’s always a race between the development of technology and its correlated ethical and privacy aspects.

ChatGPT workflow. Source: OpenAI

Sebe said the race isn’t always synchronized, and in this case, technology is in the lead, although he believes the ethics and privacy aspects will soon catch up. For now, the ban was “understandable” so that “OpenAI can adjust to the local regulations regarding data management and privacy.”

The mismatch isn’t isolated to Italy. Other governments are developing their own rules for AI as the world approaches artificial general intelligence, a term used to describe an AI that can perform any intellectual task. The United Kingdom has announced plans for regulating AI, while the EU is seemingly taking a cautious stance through the Artificial Intelligence Act, which heavily restricts the use of AI in several critical areas like medical devices and autonomous vehicles.

Has a precedent been set?

Italy may not be the last country to ban ChatGPT. The IDPA’s decision to ban ChatGPT could set a precedent for other countries or regions to follow, which could have significant implications for global AI companies. StandardDAO’s Rafferty said:

“Italy’s decision could set a precedent for other countries or regions, but jurisdiction-specific factors will determine how they respond to AI-related privacy concerns. Overall, no country wants to be behind in the development potential of AI.”

Jake Maymar, vice president of innovation at augmented reality and virtual reality software provider The Glimpse Group, said the move will “establish a precedent by drawing attention to the challenges associated with AI and data policies, or the lack thereof.”

To Maymar, public discourse on these issues is a “step in the right direction, as a broader range of perspectives enhances our ability to comprehend the full scope of the impact.” Inheritance Art’s Peters agreed, saying that the move will set a precedent for other countries that fall under the GDPR.

For those who don’t enforce GDPR, it sets a “framework in which these countries should consider how OpenAI is handling and using consumer data.” Trento University’s Sebe believes the ban resulted from a discrepancy between Italian legislation regarding data management and what is “usually being permitted in the United States.”

Balancing innovation and privacy

It seems clear that players in the AI space need to change their approach, at least in the EU, to be able to provide services to users while staying on the regulators’ good side. But how can they balance the need for innovation with privacy and ethics concerns when developing their products?

This is not an easy question to answer, as there could be trade-offs and challenges involved in developing AI products that respect users’ rights.

Joaquin Capozzoli, CEO of Web3 gaming platform Mendax, said that a balance can be achieved by “incorporating robust data protection measures, conducting thorough ethical reviews, and engaging in open dialogue with users and regulators to address concerns proactively.”

StandardDAO’s Rafferty stated that instead of singling out ChatGPT, a comprehensive approach with “consistent standards and regulations for all AI technologies and broader social media technologies” is needed.

Balancing innovation and privacy involves “prioritizing transparency, user control, robust data protection and privacy-by-design principles.” Most companies should be “collaborating in some way with the government or providing open-source frameworks for participation and feedback,” said Rafferty.

Sebe noted the ongoing discussions on whether AI technology is harmful, including a recent open letter calling for a six-month pause in advancing the technology to allow for a deeper analysis of its potential repercussions. The letter garnered over 20,000 signatures, including those of tech leaders such as Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and Ripple co-founder Chris Larsen, among many others.

The letter raises a valid concern, Sebe said, but such a six-month pause is “unrealistic.” He added:

“To balance the need for innovation with privacy concerns, AI companies need to adopt more stringent data privacy policies and security measures, ensure transparency in data collection and usage, and obtain user consent for data collection and processing.”

The advancement of artificial intelligence has increased its capacity to gather and analyze significant quantities of personal data, he said, prompting concerns about privacy and surveillance. To him, companies have “an obligation to be transparent about their data collection and usage practices and to establish strong security measures to safeguard user data.”

Other ethical concerns to be considered include potential biases, accountability and transparency, Sebe said, as AI systems “have the potential to exacerbate and reinforce pre-existing societal prejudices, resulting in discriminatory treatment of specific groups.”

Mendax’s Capozzoli said the firm believes it’s the “collective responsibility of AI companies, users and regulators to work together to address ethical concerns, and create a framework that encourages innovation while safeguarding individual rights.”

Recent: Pro-XRP lawyer John Deaton ‘10x more into BTC, 4x more into ETH’: Hall of Flame

The Glimpse Group’s Maymar stated that AI systems like ChatGPT have “infinite potential and can be very destructive if misused.” For the firms behind such systems to balance everything out, they must be aware of similar technologies and analyze where they ran into issues and where they succeeded, he added.

Simulations and testing reveal holes in the system, according to Maymar; therefore, AI companies should seemingly strive for innovation, transparency and accountability.

They should proactively identify and address the potential risks and impacts of their products on privacy, ethics and society. By doing so, they will likely be able to build trust and confidence among users and regulators, avoiding — and potentially reversing — the fate of ChatGPT in Italy.

Italy ChatGPT ban: Data watchdog demands transparency to lift restriction

The Italian regulator sets strict guidelines for OpenAI’s ChatGPT, mandating increased transparency and age verification measures to protect user privacy before lifting restrictions.

Italy’s data protection agency, known as Garante, has specified the actions that OpenAI must take to revoke an order imposed on ChatGPT. The order was issued in March 2023. The watchdog suspected the artificial intelligence (AI) chatbot service of violating the European Union’s General Data Protection Regulation (GDPR) and mandated the United States-based firm to halt the processing of data belonging to individuals residing in the country.

The regulator’s press release mandates that OpenAI must increase its transparency and issue an information notice comprehensively outlining its data processing practices. Additionally, the statement requires OpenAI to implement age-gating measures immediately to prevent minors from accessing its technology and adopt more stringent age verification methods.

OpenAI must specify the legal grounds it relies upon for processing individuals’ data to train its AI, and it cannot rely on contract performance. This means that OpenAI must choose between obtaining user consent or relying on legitimate interests. OpenAI’s privacy policy currently references three legal bases but appears to give more weight to the performance of a contract when providing services such as ChatGPT.

Furthermore, OpenAI must enable users and non-users to exercise their rights regarding their personal data, including requesting corrections for any misinformation generated by ChatGPT or deleting their data.

In addition, the regulatory agency mandated that OpenAI allow users to object to processing their data to train its algorithms. Also, OpenAI is required to conduct an awareness campaign in Italy to inform individuals that their information is being processed to train its AIs.

Garante has set a deadline of April 30 for OpenAI to complete most of these tasks. OpenAI has been granted additional time to comply with the extra demand of migrating from the existing, age-gating child safety technology to a more resilient age verification system.

Related: ‘ChatGPT-like personal AI’ can now be run locally, Musk warns ‘singularity is near’

Specifically, OpenAI has until May 31 to submit a plan outlining the implementation of age verification technology that screens out users under 13 years old (and those aged 13 to 18 who have not obtained parental consent). The deadline for deploying this more robust system is set for Sept. 30.

On Friday, March 31, following the concerns raised by the national data protection agency about possible privacy violations and failure to verify the age of users, Microsoft-backed OpenAI took ChatGPT offline in Italy.

Magazine: Best and worst countries for crypto taxes — plus crypto tax tips

Italian regulator draws criticism for blocking AI chatbot ChatGPT

ChatGPT’s temporary ban in Italy over privacy concerns draws criticism from figures in the tech industry and the country, including Deputy PM Matteo Salvini and expert Ron Moscona.

Italy’s ban on the conversational artificial intelligence (AI) chatbot ChatGPT sparked significant controversy in the tech industry and the country. The Italian deputy prime minister also criticized the ban as excessive.

On Friday, March 31, following concerns raised by the national data agency about possible privacy violations and failure to verify the age of users, Microsoft-backed OpenAI took ChatGPT offline in Italy. This action by the independent agency marked the first instance of a Western country taking measures against the AI chatbot.

The Italian Deputy Prime Minister Matteo Salvini took to Instagram to share his thoughts: “I find the decision of the Privacy Watchdog that forced #ChatGPT to prevent access from Italy disproportionate,” says a translated version of his post.

Salvini expressed that the regulator’s move was hypocritical as there are dozens of services based on artificial intelligence and named examples like Bing’s chat. Salvini said that common sense was needed as "privacy issues concern practically all online services.”

The ChatGPT ban could harm national business and innovation, Salvini said, adding that he hoped a rapid solution would be found and that access to the chatbot would be restored in Italy.

"Every technological revolution brings great changes, risks, and opportunities. It is right to control and regulate through international cooperation between regulators and legislators, but it cannot be blocked," he said.

Another objection to the ban came from Ron Moscona, a partner at the international law firm Dorsey & Whitney in its London office and an expert in technology and data privacy. He said the ban by the Italian regulators came as a surprise, as it is unusual to completely ban a service because of a data breach incident.

Related: ChatGPT and AI must pay for the news it consumes: News Corp Australia CEO

Following the request from the authorities, OpenAI has blocked ChatGPT for users in Italy. However, the company stated that it adheres to privacy regulations in Europe and is willing to cooperate with Italy's privacy regulatory body. OpenAI claimed that it takes measures to minimize personal data when training its AI systems, including ChatGPT, as its goal is for the AI to acquire knowledge about the world and not to obtain information about specific individuals.

The AI chatbot is not only encountering difficulties in Italy but is also under scrutiny in other regions worldwide. The Center for Artificial Intelligence and Digital Policy (CAIDP) lodged a complaint against ChatGPT on March 31 with the aim of preventing the deployment of potent AI systems to the general public. The CAIDP characterized the chatbot as a "partial" and "misleading" platform that jeopardizes public safety and confidentiality.

Magazine: All rise for the robot judge: AI and blockchain could transform the courtroom

BRICS Emerges as the World’s Largest GDP Bloc, Propelled by China’s Rapid Expansion

BRICS, a set of countries grouped as an alternative to the G7, is now the world’s largest gross domestic product (GDP) bloc when purchasing power parity is taken into account, according to reports from Acorn Macro Consulting. Powered by China’s growth, the group now contributes 31.5% to the global GDP, while the G7 provides 30.7%. BRICS Countries […]

LBank Secures Virtual Asset Provider Registration to Operate in Italy

PRESS RELEASE. Global crypto exchange LBank has registered as a Virtual Asset Provider with Italian regulator Organismo degli Agenti e dei Mediatori (OAM). The regulatory approval allows the exchange to offer a range of services and products to Italian users. On Feb. 1, 2023, LBank completed its registration with the OAM as a […]

Bank of Italy selectively encouraging DLT, preparing for MiCA, governor says

Italian central banker Ignazio Visco talked about fostering or discouraging crypto assets during a lengthy speech to the Italian financial markets association.

The Bank of Italy is looking for new ways to apply distributed ledger technology (DLT) and is preparing for the advent of Markets in Crypto-Assets (MiCA) regulation, bank governor Ignazio Visco told a congress of Assiom Forex, the Italian financial markets association, on Feb. 4. 

DLT may offer benefits such as cheaper cross-border transactions and increased financial system efficiency, Visco said. The Italian central bank “is focused on the need to identify areas” where DLT can contribute to financial stability and consumer protection.

Visco expressed the desire to see regulations that sorted out the crypto-asset market to separate “highly risky instruments and services that divert resources from productive activities and collective well-being” from those that bring tangible benefit to the economy:

“The spread of the latter can be fostered by developing rules and controls similar to those already enforced in the traditional financial system; the former, instead, must be strongly discouraged.”

Visco specifically mentioned “crypto-assets with no intrinsic value” among the former group.

The Bank of Italy is working at the European and global levels to develop the technology and a framework of standards, Visco said. It is also collaborating with Italian securities market regulator CONSOB and the Ministry of Economy and Finance to initiate the “authorization and supervision activities” of MiCA.

Related: EU postpones final vote on MiCA for the second time in two months

Italy recently imposed a 26% capital gains tax on crypto-asset trading over 2,000 euros in 2023. However, Italian taxpayers have the option of paying a 14% tax on their crypto-asset holdings as of Jan. 1. This alternative is intended to incentivize taxpayers to declare their digital holdings.

Visco estimated the number of Italian households that own crypto assets at 2% and said those holdings were “modest amounts on average.”

Italy approves 26% capital gains tax on cryptocurrencies

The Italian Senate approved the new tax rate for crypto trading as part of the budget legislation for 2023.

On Dec. 29, 2022, days before the year’s end, Italy’s Senate approved its budget for 2023, which included an increase in taxation for crypto investors — a 26% tax on capital gains on crypto-asset trading over 2,000 euros (approximately $2,130 at the time of publication).

The approved legislation defines crypto assets as “a digital representation of value or rights that can be transferred and stored electronically, using distributed ledger technology or similar technology.” Previously, crypto assets were treated as foreign currencies in the country, with lower taxes.

As reported by Cointelegraph, the bill also establishes that taxpayers will have the option to declare the value of their digital-asset holdings as of Jan. 1 and pay a 14% tax, incentives that are intended to encourage Italians to declare their digital assets.
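Taken together, the two options reported above are simple arithmetic. The sketch below is illustrative only; it assumes the 26% rate applies to the full gain once the 2,000-euro threshold is exceeded (one reading of the reported rule), and the function names are hypothetical:

```python
# Illustrative sketch of Italy's 2023 crypto tax options as reported above.
# Rates and threshold come from the article; the threshold interpretation
# is an assumption, and function names are hypothetical.

CAPITAL_GAINS_RATE = 0.26    # 26% on trading gains above the threshold
GAINS_THRESHOLD_EUR = 2_000  # gains at or below this are not taxed
DECLARATION_RATE = 0.14      # optional 14% on holdings declared as of Jan. 1

def capital_gains_tax(gains_eur: float) -> float:
    """26% tax, assumed here to apply only when gains exceed 2,000 euros."""
    if gains_eur <= GAINS_THRESHOLD_EUR:
        return 0.0
    return gains_eur * CAPITAL_GAINS_RATE

def declaration_tax(holdings_eur: float) -> float:
    """Alternative: 14% on the declared value of holdings."""
    return holdings_eur * DECLARATION_RATE

print(capital_gains_tax(5_000))  # 1300.0
print(declaration_tax(10_000))   # approximately 1400.0
```

Under these assumptions, a trader with 5,000 euros in gains would owe less by declaring roughly 9,285 euros or less in holdings at the 14% rate; the actual comparison depends on how the thresholds are interpreted in the final law.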

Other changes introduced by the budget law include tax amnesties to reduce penalties on missed tax payments, fiscal incentives for job creation and a reduction in the retirement age. It also includes 21 billion euros ($22.4 billion) of tax breaks for businesses and households dealing with the energy crisis.

Related: MiCA bill contains a clear warning for crypto influencers

Giorgia Meloni, the first woman to serve as Italy’s prime minister, received wide support for the bill from the legislative body, even though she had promised dramatic tax cuts when elected in September.

According to local media reports, measures from Italy’s government to reduce gas consumption across the country include over 15 days without central heating for buildings, with the population being asked to turn their heating down one degree and turn it off one hour more per day during the winter.

Italy’s legislation follows the approval of the Markets in Crypto-Assets (MiCA) bill on Oct. 10, establishing a consistent regulatory framework for cryptocurrency in the 27 member countries of the European Union. MiCA is expected to come into effect in 2024.
