
AI laws

UN report highlights ‘serious and urgent’ concerns about AI deepfakes

The UN wants to address AI-generated fake news and disinformation as it looks to introduce voluntary guidelines for the technology.

The United Nations has called artificial intelligence-generated media a “serious and urgent” threat to information integrity, particularly on social media.

In a June 12 report, the UN claimed the risk of disinformation online has “intensified” due to “rapid advancements in technology, such as generative artificial intelligence” and singled out deepfakes in particular.

The UN said false information and hate speech generated by AI is “convincingly presented to users as fact.” Last month, the S&P 500 briefly dipped due to an AI-generated image and fake news report of an explosion near the Pentagon.

It called on AI stakeholders to address the spread of false information, asking them to take “urgent and immediate” action to ensure the responsible use of AI, and added:

“The era of Silicon Valley’s ‘move fast and break things’ philosophy must be brought to a close.”

The same day, UN Secretary-General António Guterres held a press conference and said “alarm bells” over generative AI are “deafening” and “are loudest from the developers who designed it.”

Guterres added the report “will inform a UN Code of Conduct for Information Integrity on Digital Platforms.” The code is being developed ahead of the Summit of the Future, a conference to be held in late September 2024 that aims to host intergovernmental discussions on a raft of issues.

“The Code of Conduct will be a set of principles that we hope governments, digital platforms and other stakeholders will implement voluntarily,” he said.

‘Most substantial policy challenge ever’

Meanwhile, on June 13, former United Kingdom Prime Minister Tony Blair and Conservative Party politician William Hague released a report on AI.

The pair suggested the governments of the U.K., United States and “other allies” should “push for a new UN framework on urgent safeguards.”

Related: UK to get ‘early or priority access’ to AI models from Google and OpenAI

The arrival of AI “could present the most substantial policy challenge ever faced” due to its “unpredictable development” and “ever-increasing power,” the pair said.

Blair and Hague added that the government’s “existing approaches and channels are poorly configured” for such a technology.

Magazine: ‘Moral responsibility’ — Can blockchain really improve trust in AI?

Australia asks if ‘high-risk’ AI should be banned in surprise consultation

The Australian government suddenly announced a new eight-week consultation to ask how heavily it should police the AI sector.

The Australian government has announced a surprise eight-week consultation to determine whether any “high-risk” artificial intelligence tools should be banned.

Other regions, including the United States, the European Union and China, have also launched measures to understand and potentially mitigate risks associated with rapid AI development in recent months.

On June 1, Industry and Science Minister Ed Husic announced the release of two papers — a discussion paper on “Safe and Responsible AI in Australia” and a report on generative AI from the National Science and Technology Council.

The papers came alongside a consultation that will run until July 26.

The government is seeking feedback on how to support the “safe and responsible use of AI,” asking whether it should adopt voluntary approaches such as ethical frameworks, introduce specific regulation or take a mix of both.

A map of options for potential AI governance with a spectrum from “voluntary” to “regulatory.” Source: Department of Industry, Science and Resources

One question in the consultation directly asks whether any high-risk AI applications or technologies should be banned completely, and what criteria should be used to identify such tools.

The discussion paper also included a draft risk matrix for AI models for feedback. Though intended only as an example, it categorized AI in self-driving cars as “high risk,” while a generative AI tool used for a purpose such as creating medical patient records was considered “medium risk.”

The paper highlighted “positive” uses of AI in the medical, engineering and legal industries, as well as “harmful” uses such as deepfake tools, the creation of fake news and cases in which AI bots had encouraged self-harm.

The bias of AI models and “hallucinations” (nonsensical or false information generated by an AI) were also raised as issues.

Related: Microsoft’s CSO says AI will help humans flourish, cosigns doomsday letter anyway

The discussion paper claimed AI adoption in Australia is “relatively low” owing to “low levels of public trust.” It also pointed to AI regulation in other jurisdictions and Italy’s temporary ban on ChatGPT.

Meanwhile, the National Science and Technology Council report said that Australia has some advantageous AI capabilities in robotics and computer vision, but its “core fundamental capacity in [large language models] and related areas is relatively weak,” and added:

“The concentration of generative AI resources within a small number of large multinational and primarily US-based technology companies poses potentials [sic] risks to Australia.”

The report further discussed global AI regulation, gave examples of generative AI models, and opined they “will likely impact everything from banking and finance to public services, education and creative industries.”

AI Eye: 25K traders bet on ChatGPT’s stock picks, AI sucks at dice throws, and more

US vice president gathers top tech CEOs to discuss dangers of AI

Vice President Harris gathered the heads of several AI development firms to discuss potential risks posed by the budding technology.

The United States vice president and President Biden’s top advisors have held a meeting with several AI industry CEOs to discuss “concerns about the risks associated with AI.”

On May 4, U.S. Vice President Kamala Harris, joined by nine top Biden administration advisors in science, national security, policy and economics, met with the CEOs of OpenAI, Microsoft, Google and AI startup Anthropic.

Notably, tech giant Meta’s CEO Mark Zuckerberg was absent from the meeting.

Before the meeting, the White House released a flurry of AI-related announcements regarding funding AI research facilities, government AI policy, and AI systems evaluation.

According to the announcement, the meeting focused on the transparency of AI systems, the importance of evaluating and validating AI safety, and ensuring AI is secured from malicious actors.

Reportedly, the government and the tech CEOs agreed “more work is needed to develop and ensure appropriate safeguards and protections” for AI.

The CEOs committed to engaging with the White House to ensure Americans can “benefit from AI innovation.” No specific details were shared on what safeguards were needed or what the engagement with the government exactly entails.

Zuckerberg’s absence came despite Meta having worked on AI for years. A White House official told CNN the meeting “was focused on companies currently leading in the space.”

The Biden administration also highlighted — without going into specifics — its work to address national security concerns posed by AI, specifically mentioning cybersecurity and biosecurity.

It said these efforts would ensure AI firms “have access to best practices” for protecting AI systems, drawing on cybersecurity experts from the “national security community.”

White House banks big on AI

On the same day, the Biden administration announced it would set aside $140 million to launch seven new National AI Research Institutes, bringing the total to 25 across the country.

“These Institutes bolster America’s AI [research and development] infrastructure,” the White House said. It added the institutes would “drive breakthroughs” in areas such as “climate, agriculture, energy, public health, education, and cybersecurity.”

Related: Google DeepMind CEO: We may have AGI ‘in the next few years’

In a separate announcement, the government said AI development firms including Anthropic, Google, Microsoft, OpenAI, NVIDIA, Hugging Face and Stability AI will participate in a public evaluation of AI systems, hosted on a platform from AI training firm Scale AI, at the DEF CON hacker convention in August.

Finally, the White House said it would release a draft policy on how the U.S. government will use AI, which will be made available for public comment “this summer.”

The policies will cover the development, use and procurement of AI by federal departments and agencies, and are intended to serve as a “model” for state and local governments in their own procurement and use of AI.

AI Eye: ‘Biggest ever’ leap in AI, cool new tools, AIs are the real DAOs
