
Italy

Amnesty International head says AI innovation vs. regulation is ‘false dichotomy’

Amnesty’s secretary-general said the EU has a chance to lead with new AI regulations and member states shouldn’t “undermine” the forthcoming AI Act.

The secretary-general of Amnesty International, Agnès Callamard, released a statement on Nov. 27 in response to three European Union member states pushing back on regulating artificial intelligence (AI) models.

France, Germany and Italy reached an agreement to avoid overly stringent regulation of AI foundation models, whose treatment is a core component of the EU’s forthcoming AI Act.

This came after the EU received multiple petitions from tech industry players asking the regulators not to over-regulate the nascent industry.

However, Callamard said the region has an opportunity to show “international leadership” with robust regulation of AI, and member states “must not undermine the AI Act by bowing to the tech industry’s claims that adoption of the AI Act will lead to heavy-handed regulation that would curb innovation.”

“Let us not forget that ‘innovation versus regulation’ is a false dichotomy that has for years been peddled by tech companies to evade meaningful accountability and binding regulation.”

She said this rhetoric from the tech industry highlights the “concentration of power” from a small group of tech companies who want to be in charge of the “AI rulebook.”

Related: US surveillance and facial recognition firm Clearview AI wins GDPR appeal in UK court

Amnesty International has been a member of a coalition of civil society organizations led by the European Digital Rights Network advocating for EU AI laws with human rights protections at the forefront.

Callamard said human rights abuse by AI is “well documented” and “states are using unregulated AI systems to assess welfare claims, monitor public spaces, or determine someone’s likelihood of committing a crime.”

“It is imperative that France, Germany and Italy stop delaying the negotiations process and that EU lawmakers focus on making sure crucial human rights protections are coded in law before the end of the current EU mandate in 2024.”

Recently, France, Germany and Italy were also among the 15 countries that, together with major tech companies including OpenAI and Anthropic, developed a new set of guidelines suggesting cybersecurity practices for AI developers when designing, developing, launching and monitoring AI models.

Magazine: AI Eye: Get better results being nice to ChatGPT, AI fake child porn debate, Amazon’s AI reviews


Italian regulators investigate online AI data scraping

The Italian Data Protection Authority has launched a “fact-finding” probe into the security practices of public and private websites to prevent AI data scraping.

The Italian Data Protection Authority, a local privacy regulator, announced the launch of a “fact-finding” investigation on Nov. 22, in which it will look into the practice of data gathering to train artificial intelligence (AI) algorithms.

The investigation aims to verify that public and private websites have adopted appropriate security measures to prevent the “web scraping” of personal data for AI training by third parties, that is, by “the ‘spiders’ of the manufacturers of artificial intelligence algorithms.”

According to the regulator, this “fact-finding survey” applies to all public and private entities operating as data controllers that are established in Italy or offer services in Italy and make personal data freely accessible online.

Although it did not name specific companies, the regulator said it is “in fact” known that “various AI platforms” scrape the web to collect large quantities of personal data, and that following the investigation it will take any necessary measures, “even urgently.”
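
As a rough illustration only (the Garante prescribes no particular technique), one common measure sites use against AI-training scrapers is to refuse requests from crawlers that identify themselves in the User-Agent header. The sketch below assumes a hypothetical handle_request() hook; GPTBot and CCBot are publicly documented crawler names, but the blocklist itself is illustrative.

```python
# Minimal sketch: refuse to serve pages to self-identified AI-training crawlers.
# GPTBot (OpenAI) and CCBot (Common Crawl) are publicly documented user agents;
# the blocklist and the handle_request() hook are illustrative assumptions.

AI_CRAWLER_TOKENS = ("GPTBot", "CCBot")

def is_ai_crawler(user_agent: str) -> bool:
    """Return True if the User-Agent header matches a known AI-training bot."""
    ua = (user_agent or "").lower()
    return any(token.lower() in ua for token in AI_CRAWLER_TOKENS)

def handle_request(headers: dict) -> int:
    """Toy request handler: 403 for AI crawlers, 200 otherwise."""
    if is_ai_crawler(headers.get("User-Agent", "")):
        return 403  # refuse pages that may contain personal data
    return 200

if __name__ == "__main__":
    print(handle_request({"User-Agent": "GPTBot/1.0"}))   # 403
    print(handle_request({"User-Agent": "Mozilla/5.0"}))  # 200
```

Well-behaved crawlers can also be steered away with a robots.txt disallow rule, though neither approach stops a scraper that simply ignores such signals.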

In July, Google was hit with a class-action lawsuit in the United States over its new AI data-scraping privacy policy across its web services for its own AI algorithmic training purposes. 

Related: Italian senator provokes parliament with AI-generated speech

Italian regulators invited AI industry experts, academics and others to participate in the process and share views or comments within 60 days.

The Italian privacy watchdog was one of the first to quickly scrutinize AI after it banned the popular AI chatbot ChatGPT from operating in Italy due to privacy breaches in March 2023. In May, Italy set aside millions of euros in a designated fund for workers at risk of AI replacement. 

Earlier this week, Italy, France and Germany entered into an agreement on future AI regulation, according to a joint paper seen by Reuters. The agreement is expected to help further similar negotiations on a European Union level. 

The three countries backed the idea of creating voluntary commitments for large and small AI providers in the European Union.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change


TikTok, Snapchat, OnlyFans and others to combat AI-generated child abuse content

Major social platforms, AI companies, governments and NGOs issued a joint statement pledging to combat AI-generated abusive content, such as explicit images of children.

A coalition of major social media platforms, artificial intelligence (AI) developers, governments and non-governmental organizations (NGOs) have issued a joint statement pledging to combat abusive content generated by AI.

On Oct. 30, the United Kingdom issued the policy statement, which carries 27 signatories, including the governments of the United States, Australia, Korea, Germany and Italy, along with the social media platforms Snapchat, TikTok and OnlyFans.

It was also undersigned by the AI platforms Stability AI and Ontocord.AI and a number of NGOs working toward internet safety and children’s rights, among others.

The statement says that while AI offers “enormous opportunities” in tackling threats of online child sexual abuse, it can also be utilized by predators to generate such types of material.

It cited data from the Internet Watch Foundation showing that, of 11,108 AI-generated images shared on a dark web forum over a one-month period, 2,978 depicted content related to child sexual abuse.

Related: US President Joe Biden urges tech firms to address risks of AI

The U.K. government said the statement stands as a pledge to “seek to understand and, as appropriate, act on the risks arising from AI to tackling child sexual abuse through existing fora.”

“All actors have a role to play in ensuring the safety of children from the risks of frontier AI.”

It encouraged transparency on plans for measuring, monitoring and managing the ways AI can be exploited by child sexual offenders, and urged countries to develop national-level policies on the topic.

Additionally, it aims to maintain a dialogue around combating child sexual abuse in the AI age. This statement was released in the run-up to the U.K. hosting its global summit on AI safety this week.

Concerns over child safety in relation to AI have been a major topic of discussion in the face of the rapid emergence and widespread use of the technology.

On Oct. 26, 34 states in the U.S. filed a lawsuit against Meta, the Facebook and Instagram parent company, over child safety concerns.

Magazine: AI Eye: Get better results being nice to ChatGPT, AI fake child porn debate, Amazon’s AI reviews


Bank of Italy innovation hub supports research into security tokens on secondary markets

The Italian central bank’s Milano Hub has selected a project headed by Cetif Advisory and Polygon Labs in its second round of proposals.

The Bank of Italy’s Milano Hub innovation center will provide support for a project developed by Cetif Advisory to research a security token ecosystem for institutional decentralized finance (DeFi). 

The project has no “commercialisation purpose,” but will extend “the scope of analysis” of security tokens on secondary markets. Security tokens are digitized representations of the ownership of real-world assets. Cetif Advisory general manager Imanuel Baharier said in a statement:

“We believe it is vitally important to create the conditions for DeFi to become a safe and open operating environment for supervised entities.”

The project will strive to allow institutional market participants to operate in a DeFi environment while complying with regulatory guidelines. It will further develop Cetif Advisory’s Lionity platform, which it describes as an “institutional grade automated market maker.”
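
The articles describe Lionity only at this high level. As background for readers new to the term, the following is a minimal, generic sketch of the best-known automated market maker design, a constant-product pool; it is purely illustrative, makes no claim about how the Lionity platform itself prices trades, and all names and numbers in it are assumptions.

```python
# Generic constant-product AMM sketch (x * y = k), for illustration only;
# it does not describe how the Lionity platform actually works.

class ConstantProductPool:
    def __init__(self, reserve_token: float, reserve_cash: float, fee: float = 0.003):
        self.reserve_token = reserve_token  # e.g. security-token units
        self.reserve_cash = reserve_cash    # e.g. cash-leg units
        self.fee = fee                      # 0.3% swap fee, a common default

    def quote_token_for_cash(self, cash_in: float) -> float:
        """Tokens received for cash_in, keeping reserve_token * reserve_cash constant."""
        cash_in_after_fee = cash_in * (1 - self.fee)
        k = self.reserve_token * self.reserve_cash
        new_reserve_cash = self.reserve_cash + cash_in_after_fee
        tokens_out = self.reserve_token - k / new_reserve_cash
        self.reserve_cash = new_reserve_cash
        self.reserve_token -= tokens_out
        return tokens_out

pool = ConstantProductPool(reserve_token=1_000.0, reserve_cash=1_000_000.0)
print(round(pool.quote_token_for_cash(10_000.0), 4))  # ~9.87 tokens for 10,000 cash units
```

The defining property is that the product of the two reserves stays constant across swaps (ignoring fees), which is why larger trades receive progressively worse prices.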

Related: INX security token platform gets its first token from a public company, Greenbriar

Cetif Advisory is a spinoff of the Cetif Research Centre at the Università Cattolica del Sacro Cuore in Milan. The project is a collaboration with Polygon Labs, Fireblocks and other organizations. Italian banks, asset management companies and ten other financial institutions will take part in it.

The Cetif Advisory project was chosen during the hub’s second call for proposals. The project was one of seven projects given the green light in the fintech category. It will receive support from the Milano Hub for six months, beginning this month, in the form of expert advice and in-depth regulatory research.

Securities tokenization is an emerging field in blockchain technology. Citi GPS recently predicted that the tokenized securities market may be worth $4 trillion to $5 trillion by 2030, with private equity and venture capital becoming the most tokenized, followed by real estate.

Magazine: Block by block: Blockchain technology is transforming the real estate market


Privacy advocates score a win after Binance buckles on coin listings

Those of us in Italy and surrounding countries will be allowed to continue trading Zcash, Monero and other coins that Binance sought to condemn as unworthy.

Privacy advocates scored a big win in June with Binance’s announcement that it was backtracking on a decision to delist privacy coins for users in a number of European countries.

As a result of the move, users in Italy, Poland, Spain and France will be permitted to continue trading tokens including Zcash (ZEC), Monero (XMR), Decred (DCR), Horizen’s ZEN, Verge (XVG), Dash (DASH), Secret (SCRT), Firo, Navcoin (NAV), MobileCoin (MOB), Beam and PIVX.

Banning the coins would have been a big, big mistake. Privacy coins empower individuals against financial surveillance by offering enhanced transactional security, and crypto communities should be thankful that Binance is no longer planning to remove them from its listings. In the modern climate of excessive surveillance and overall lack of confidentiality for users everywhere, their significance cannot be overstated.

Related: Binance was wrong to boot Monero, Zcash and other privacy coins

These coins’ fungibility, which makes each individual unit interchangeable and censorship-resistant, is an advantage they hold over almost every other cryptocurrency, and losing these additional layers of security and anonymity would have been an incredible loss for the community.

Privacy coins have gained traction in recent years as a series of harsh regulations has surfaced. Binance’s decision, in fact, comes on the heels of the European Union ironing out its much-discussed standards for digital assets, the recently signed Markets in Crypto-Assets (MiCA) regulation. With MiCA now law, July will also see the European Securities and Markets Authority launch a MiCA consultation process. It’s fair to say that there’s quite some movement in the space, and we may not have seen the last of what Europe has in store for the crypto industry.

Zcash's price sank to a low of $21.70 a week after Binance's May 31 threat to delist it — and rocketed back to $33 after the decision was reversed. Source: Binance

But the truth is that privacy is a fundamental human right protected by the United Nations. Article 12 of the United Nations’ Universal Declaration of Human Rights states that “no one shall be subjected to arbitrary interference with his privacy” and that “everyone has the right to the protection of the law against such interference or attacks,” so why should crypto be any different?

This concept is even more crucial in the digital era as data exploitation risks increase exponentially and tech giants have every tool at their disposal to try to prevent people from getting control over their private information.

As a matter of fact, Binance’s decision reflects the complex balance between regulatory compliance and users’ privacy needs that exchanges must strive for at all times, even as they face international regulations varying from country to country, and even as some countries decide to enforce stricter rules than others.

Related: SEC charges against Binance and Coinbase are terrible for DeFi

As for the future implications of the Binance decision — but also those stemming from the intense regulatory pressure looming over Europe — we could see a potential increase in demand for, and subsequently development of, the privacy coin sector. Ironically, the precedent set by Binance could very well lead to more widespread acceptance of privacy coins, as it might prompt other exchanges to rethink their stance, potentially leading to wider availability. We shall see.

At the end of the day, this week’s news calls attention to the real power of community sentiment when it comes to shaping crypto policies and regulations. “We have revised how we classify privacy coins,” the official statement released by the cryptocurrency exchange read, “after carefully considering feedback from our community.” Reading between the lines, what’s clear is that the backlash they received in the past month worked.

It’s hard to overstate how necessary privacy in the crypto industry really is, and that’s why we cannot back down when it comes to fighting for it at every chance we get.

At the heart of it, the community’s influence on Binance’s decision demonstrates its power to shape the future of the crypto industry — and we’d do well not to forget that.

The crypto community should come together to continue fighting for privacy. It forms the very foundation of Web3. And, as the Romans used to say, ibi semper est victoria ubi est concordia: There is always victory where there is unity.

Daniele Servadei is the co-founder and CEO of Sellix, an e-commerce platform based in Italy.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.


How regulators are mitigating the risk of extinction from AI: Law Decoded, May 29–June 5

There is no shortage of regulatory efforts to mitigate the negative impacts of artificial intelligence.

The lively discussion around artificial intelligence (AI) continues. Last week, dozens of AI experts — including the CEOs of OpenAI, Google DeepMind and Anthropic — signed an open statement with a single sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Despite the ominous statement, there is no shortage of regulatory efforts to mitigate the negative impacts of AI. In China, the “improvement of governance” in digital data and AI is being discussed by President Xi Jinping and prominent members of the Communist party. The Australian government has announced a sudden eight-week consultation that will seek to understand whether any “high-risk” AI tools should be banned.

Italian Senator Marco Lombardo found a creative way to join the discussion by performing a speech entirely composed by OpenAI’s ChatGPT-4. He also trained the chatbot with the draft law of the Italian-Swiss agreement on cross-border workers, which was the topic of the meeting, along with other recent developments on the subject.

In Japan, the government’s AI strategy council has raised the alarm over the lack of laws protecting copyright from AI. The Personal Information Protection Commission has demanded OpenAI minimize the sensitive data it collects for machine-learning purposes. Previously, local politicians voiced support for AI, with Chief Cabinet Secretary Hirokazu Matsuno even saying Japan would consider incorporating AI technology into government systems.

CNHC stablecoin issuer detained by Chinese police

Employees of Trust Reserve — the issuer of the Chinese yuan-pegged stablecoin CNH Coin (CNHC) — have been detained by Chinese police. The company’s office in Pudong, Shanghai, was empty as of May 31. The door was sealed on May 29, with a notice saying, “Judicial seizure, strictly no vandalism.” In March, Trust Reserve secured $10 million in funding in a round led by KuCoin Ventures, the venture capital arm of the major cryptocurrency exchange. 

Continue reading

Binance to delist privacy tokens in France, Italy, Spain and Poland

Starting from June 26, privacy tokens, such as Monero (XMR) and Zcash (ZEC), will no longer be available for trading for Binance customers in France, Italy, Poland and Spain. The new restrictions affect a total of 12 coins: Decred (DCR), Dash (DASH), ZEC, Horizen (ZEN), PIVX (PIVX), Navcoin (NAV), Secret (SCRT), Verge (XVG), Firo (FIRO), Beam (BEAM), XMR and MobileCoin (MOB). 

The move comes as part of ongoing compliance processes within the company. “While we aim to support as many quality projects as possible, we are required to follow local laws and regulations regarding the trading of privacy coins to ensure we can continue to serve as many users as we can,” a Binance representative told Cointelegraph.

Continue reading

EU officials sign MiCA into law

Sweden’s minister for rural affairs, Peter Kullgren, and European Parliament President Roberta Metsola signed the long-anticipated Markets in Crypto-Assets (MiCA) cryptocurrency regulatory framework into law roughly three years after the European Commission introduced the measure. The framework is expected to come into effect following publication in the Official Journal of the European Union, with many of MiCA’s regulations on crypto firms likely starting sometime in 2024.

Continue reading

US lawmakers aim to limit the SEC’s power with a new bill 

Lawmakers in the United States House Financial Services Committee and House Agriculture Committee have released a discussion draft offering certain crypto assets a pathway to being labeled digital commodities. The draft bill would prohibit the U.S. Securities and Exchange Commission (SEC) from denying digital asset trading platforms registration as a regulated alternative trading system, allowing such firms to offer “digital commodities and payment stablecoins.”

Specifically, the proposed legislation cracks down on the SEC’s approach, which many in the crypto space have criticized. The framework under the bill would allow certain digital assets to qualify as digital commodities if they are “functional and considered decentralized,” and would require the SEC to provide a “detailed analysis” of any objections to a classification of a firm as decentralized.

Continue reading


Italian senator provokes parliament with AI-generated speech

A senator in Italy utilized OpenAI’s GPT-4 to generate a speech during a parliamentary meeting to spark a “serious debate” about the technology.

An Italian senator spoke before members of the Italian parliament with — unbeknownst to parliament members — a speech entirely composed by artificial intelligence (AI).

In the parliamentary meeting on May 31, Italian Senator Marco Lombardo pulled the stunt to spark a “serious debate” among his colleagues about what is at stake and what is in store with AI usage, according to a local news report.

Lombardo’s speech was reportedly created via OpenAI’s GPT-4 chatbot. In the same interview with local media, he revealed that he trained the chatbot with the draft law of the Italian-Swiss agreement on cross-border workers, which was the topic of the meeting, along with other recent developments on the subject.

“It seemed important to me that the Italian parliament also open its eyes to a phenomenon that is now unavoidable.”

Carlo Calenda, the leader of the Italian political party Azione, of which Lombardo is a member, tweeted that the speech was “flawless.” However, he didn’t know whether to call this development “progress” or “taking a step back.”

Lombardo told local reporters that he wanted to show politicians AI could also “threaten” their jobs.

“Not even politics can think of exempting itself from a comparison with algorithms. You need to know how to use it consciously.”

On May 18, Italian officials set aside $33 million to enhance the development of digital skills for workers at risk of termination due to automation and AI in various professional sectors. 

Related: OpenAI warns European officials over upcoming AI regulations

Back in March, Italy banned the use of ChatGPT in the country after the application suffered a data breach. After regulators demanded more transparency from OpenAI, the application was allowed to reenter the country about a month later, on April 29.

However, Italy’s ban prompted other global governments to pay closer attention to the technology and consider regulations. Some governments, such as Romania, have already begun implementing AI for policy recommendations.

Currently, regulators in the European Union are working through drafts of the forthcoming EU Artificial Intelligence Act, which is set to take effect within the next two to three years and will regulate the public use of generative AI tools.

Most recently, the EU’s tech chief said regulators should push out a voluntary code of conduct for AI companies and not waste any time before laws come into place.

Magazine: ‘Moral responsibility’: Can blockchain really improve trust in AI?


Italy sets aside millions for workers at risk of AI replacement

Italian officials are setting aside $33 million to enhance the development of digital skills for workers at risk of termination due to automation and AI.

Italy’s back and forth with the emergence of artificial intelligence (AI) continues, after lawmakers in the country announced funds for those at risk of losing their jobs to automation. 

On May 15, Italian officials set aside 30 million euros ($33 million) for the Fondo per la Repubblica Digitale (FRD) to enhance the capabilities of the unemployed and of those whose jobs are at risk from automation and AI.

The FRD was initially set up by the Italian government in 2021 with the mission of boosting digital skills and “developing the country's digital transition.” According to the foundation, 54% of Italians between the ages of 16 and 74 lack basic digital skills, compared with the European Union average of 46%.

While two-thirds of the funds will go towards helping the unemployed develop digital skills to enter the job market in the first place, 10 million euros will go towards those already at risk of AI replacement.

The FRD singled out professional industries categorized as at high risk of AI replacement, including “transport and logistics, office and administrative support, production, services and the sales sector.”

Related: AI chatbot usage causes concern among 70% of Japanese adults: Survey

This development comes after Italy became one of the first countries to briefly ban the usage of the AI chatbot ChatGPT. The initial ban followed a data breach of the AI system that exposed user data.

Regulators in Italy demanded more transparency from OpenAI, the company behind ChatGPT, and the implementation of strict guidelines before lifting the ban. The application reentered the country on April 29, nearly a month after its ban, once those standards were met.

Although the ban lasted only a month, it prompted officials in Europe and around the world to consider policy towards AI. German regulators followed and launched an inquiry into ChatGPT’s GDPR compliance.

Currently, members of the European Parliament are voting on a brand-new AI Act, which would be among the first sets of regulations for the emerging AI technology.

Magazine: ‘Moral responsibility’: Can blockchain really improve trust in AI?


Coinbase exec uses ChatGPT ‘jailbreak’ to get odds on wild crypto scenarios

According to ChatGPT, there is a 15% chance that Bitcoin will “fade to irrelevancy,” with prices down 99.99% by 2035.

A Coinbase executive claims to have discovered a “jailbreak” for artificial intelligence tool ChatGPT, allowing it to calculate the probability of bizarre crypto price scenarios.

The crypto exchange’s head of business operations and avid ChatGPT user Conor Grogan shared a screenshot of the results in an April 30 Twitter post — showing that ChatGPT assigns a 15% chance that Bitcoin (BTC) will “fade to irrelevancy,” with prices falling over 99.99% by 2035.

Meanwhile, the chatbot assigned a 20% chance for Ethereum (ETH) becoming irrelevant and approaching near-zero price levels by 2035.

ChatGPT was even less confident about Litecoin (LTC) and Dogecoin (DOGE), however, assigning probabilities of 35% and 45%, respectively, that the coins will go to near zero.

The Coinbase executive concluded that ChatGPT is “generally” a “big fan” of Bitcoin but remains “more skeptical” when it comes to altcoins.

Prior to the cryptocurrency predictions, Grogan asked ChatGPT to assign odds to several political predictions involving Russian president Vladimir Putin, U.S. President Joe Biden and former U.S. president Donald Trump.

Other predictions were aimed towards the impact of AI on humanity, religion and the existence of aliens.

“Aliens have visited Earth and are being covered up by the government” — one wild prediction read — to which ChatGPT assigned a 10% probability.

The executive also shared a script of the prompt, which he then fed to ChatGPT to build the tables.

Grogan backed up the consistency of the results by claiming to have tested the prompt over 100 times:

“I ran this prompt 100 times on a wiped memory GPT 3.5 and 4 and GPT would return very consistent numbers; standard deviation was <10% in most cases, and directionally it was extremely consistent.”
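
As a rough sketch of the kind of repeatability check described above (not Grogan’s actual script), the snippet below uses a simulated query_model() stand-in that returns the percentage a model assigns to a scenario, then reports the mean and standard deviation over repeated runs.

```python
# Minimal sketch of a consistency check over repeated model runs.
# query_model() is a simulated stand-in, not Grogan's actual prompt script.
import random
import statistics

def query_model(prompt: str) -> float:
    """Stand-in for a real chat-model call; here it simulates an answer near 15%."""
    return random.gauss(15.0, 1.0)

def measure_consistency(prompt: str, runs: int = 100) -> tuple[float, float]:
    """Ask the same question `runs` times and report the mean and standard deviation."""
    answers = [query_model(prompt) for _ in range(runs)]
    return statistics.mean(answers), statistics.stdev(answers)

if __name__ == "__main__":
    mean, stdev = measure_consistency(
        "What is the probability, in percent, that Bitcoin fades to irrelevancy by 2035?"
    )
    print(f"mean={mean:.1f}%  stdev={stdev:.1f}")  # a small stdev indicates consistent answers
```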

Related: Here’s how ChatGPT-4 spends $100 in crypto trading

It isn’t the first time the executive has experimented with crypto-related issues using ChatGPT.

On March 15, Grogan showed that GPT-4 — the latest iteration of ChatGPT — can spot security vulnerabilities in Ethereum smart contracts and provide an outline for exploiting faulty contracts.

Studies carried out by OpenAI — the team behind ChatGPT — have shown GPT-4 to pass high school tests and law school exams with scores ranking in the 90th percentile.

Meanwhile, Italy recently lifted its monthlong ban on the AI tool, which had been imposed following a series of privacy concerns raised to Italian regulators.

Magazine: How to prevent AI from ‘annihilating humanity’ using blockchain


OpenAI’s ChatGPT re-enters Italy after obliging transparency demands

The revocation of the ban in Italy required ChatGPT to reveal its data processing practices and implement age-gating measures among other legal requirements.

The popular interactive artificial intelligence (AI) chatbot ChatGPT has been allowed to resume services in Italy after OpenAI addressed the privacy concerns raised by the country’s data protection agency, the Garante.

On March 31, OpenAI’s ChatGPT was placed under a temporary ban in Italy after the watchdog suspected the AI chatbot of violating the European Union’s General Data Protection Regulation (GDPR) requirements.

Exactly 29 days after the ban, on April 29, OpenAI CEO Sam Altman announced that ChatGPT was “available in Italy again” without revealing the steps taken by the company to comply with the Italian regulator’s transparency demands.

The revocation of the ban required ChatGPT to reveal its data processing practices and implement age-gating measures, among other legal requirements. As highlighted by the Italian regulator, the temporary ban was a response to the data breach that ChatGPT suffered on March 20.

While the abrupt ban initially raised the possibility of a wave of AI regulations, OpenAI’s willingness to swiftly comply with local authorities is seen as an overall positive move, widely welcomed by users globally.

Related: Bitget pledges $10M for Fetch.ai ecosystem amid ChatGPT boom

European Union legislators are working on a new bill to keep explosive AI developments in check.

As Cointelegraph reported, the bill aims to classify AI tools according to the perceived risk levels based on their capability. The risk levels range from minimal to unacceptable. According to the bill, high-risk tools will not be banned entirely but will be subjected to stricter transparency requirements.

If the bill is signed into law, generative AI tools, including ChatGPT and Midjourney, will be required to disclose the use of copyrighted materials in AI training.

Magazine: Why join a blockchain gaming guild? Fun, profit and create better games
