
UK Government

Bitcoin ransomware Akira drains $42M from more than 250 companies: FBI

The U.S. FBI found that the Akira ransomware group has been targeting businesses and critical infrastructure entities in North America, Europe and Australia since March 2023.

Akira, a year-old ransomware group, has breached more than 250 organizations and extracted approximately $42 million in ransom proceeds, top global cybersecurity agencies warned.

Investigations conducted by the United States Federal Bureau of Investigation (FBI) found that Akira ransomware has been targeting businesses and critical infrastructure entities in North America, Europe and Australia since March 2023. While the ransomware initially targeted Windows systems, the FBI recently found Akira’s Linux variant as well.

The FBI, along with the Cybersecurity and Infrastructure Security Agency (CISA), Europol’s European Cybercrime Centre (EC3) and the Netherlands’ National Cyber Security Centre (NCSC-NL), released a joint cybersecurity advisory (CSA) to disseminate information about the threat to the public.

Hut 8 ‘self-mining plans’ make it competitive post-halving: Benchmark

Cleartoken Secures $10M in Funding to Pioneer UK’s First Digital Asset Clearing House

On Monday, the crypto clearing house startup Cleartoken revealed that it raised $10 million from strategic investors including Nomura’s Laser Digital, Flow Traders, and LMAX Digital. The horizontal clearing house for the digital asset market said it raised the $10 million in order to […]

Crypto in the well and snake villain star in FCA’s pixelated animation

The United Kingdom’s financial regulator has published a pixelated, video game-styled Wild West cartoon to enlighten investors.

The United Kingdom’s financial regulator, the Financial Conduct Authority (FCA), has vigorously promoted its marketing rules for crypto firms since they were published in June. It's now found a way to bring them to life, in the form of a pixelated Wild West cartoon to enlighten investors. 

A minute-long animation mimicking the style and sound of a video game appeared as an MP4 file on the FCA’s website on Dec. 13. The cartoon isn't presented as part of a press release but is listed as a standalone, with no caption or explanation around it, on the publications page.

The cartoon explains how to judge whether crypto companies play by the FCA’s marketing rules. Crypto promo campaigns are not allowed to offer free gifts or referral bonuses and must display a “prominent” warning about the risk of losing money when investing in crypto.

AI regulations in global focus as EU approaches regulation deal

Concerns over potential misuse of AI have prompted the U.S., U.K., China, and the G7 to speed up regulation of the technology, though Europe is already way ahead.

The surge in generative AI development has prompted governments globally to rush toward regulating this emerging technology. The trend matches the European Union’s efforts to implement the world’s first set of comprehensive rules for artificial intelligence.

The artificial intelligence (AI) Act of the 27-nation bloc is recognized as an innovative set of regulations. After much delay, reports indicate that negotiators agreed on Dec. 7 to a set of controls for generative artificial intelligence tools such as OpenAI Inc.’s ChatGPT and Google’s Bard.

Concerns about potential misuse of the technology have also propelled the U.S., U.K., China, and international coalitions such as the Group of 7 countries to speed up their work toward regulating the swiftly advancing technology.

UK legislators urge caution in retail digital pound rollout

The committee’s report recommends imposing lower initial limits on the value of retail digital pounds to alleviate the risk of potential bank runs amid market instability.

British legislators are urging a careful stance regarding implementing a retail digital pound.

Members of the Treasury Select Committee have expressed reservations regarding the possible launch of a retail digital pound, underscoring the need for thoughtful examination before execution.

In the interim, the committee’s report recommends imposing lower initial limits on the value of retail digital pounds to alleviate the risk of potential bank runs amid market instability.

Screenshot of the Treasury Committee report. Source: UK Parliament

The report addressed privacy concerns, recommending that any legislation introducing a digital pound should strictly limit the use of data by the government or the Bank of England (BoE).

The report proposes that any legislation introducing a digital pound should expressly bar the government and the Bank of England from using data acquired through the digital pound for purposes beyond those already sanctioned for law enforcement.

Related: UK crypto hodlers get a call from the tax grinch

Committee chair Harriett Baldwin stressed the need for compelling evidence before contemplating the introduction of a retail digital pound.

London Stock Exchange seeks digital assets director

In a LinkedIn job posting the London Stock Exchange Group says it's seeking a digital assets lead with a “passion” for digital assets, crypto and blockchain.

The London Stock Exchange Group (LSEG), the parent company of the London Stock Exchange and other fintech companies, has posted on LinkedIn that it's seeking a director of digital assets. 

LSEG says it is looking for candidates who have a “passion for and understanding of digital assets, cryptocurrencies and distributed ledger technology,” among other skills and requirements.

According to the posting, the future digital assets director will help the company outline and deploy a commercial strategy for “a suite of new infrastructure solutions and capabilities, as well as developing LSEG’s brand and ecosystem in digital private markets.”

A representative from LSEG told Cointelegraph that it could not provide any further details on the development at the time. 

Related: London Stock Exchange Group may provide clearing services for BTC derivatives in Q4

This latest development comes after the London Stock Exchange announced that it will create a traditional assets trading platform using blockchain technology. On Sept. 4, the legacy financial institution said it plans to use the technology to enhance the efficiency of holding, buying and selling traditional assets.

However, Murray Roos, the LSE Group’s head of capital markets, said at the time that it would not be building anything around cryptocurrencies.

The United Kingdom has been cracking down on its local crypto scene, passing a bill allowing authorities to seize Bitcoin (BTC) used for crime and announcing plans in October for upcoming stablecoin regulations.

In September, the U.K. financial watchdog gave crypto companies a marketing compliance warning and a deadline to align with its standards by January 2024.

Magazine: Australia’s $145M exchange scandal, Bitget claims 4th, China lifts NFT ban: Asia Express

DeepMind co-founder chides Elon over UK AI summit comments: ‘He’s not an AI scientist’

World-renowned AI scientist Mustafa Suleyman appears to disagree with Elon Musk’s assessment of the threats and benefits AI technologies could pose in the future.

Mustafa Suleyman, the CEO of Inflection AI and co-founder of Google’s DeepMind, had some strong words for Elon Musk during a post-event interview with the BBC after the recent United Kingdom artificial intelligence (AI) summit concluded on Nov. 2.

As Cointelegraph reported, Musk leaned into his penchant for sensational commentary during an interview with U.K. Prime Minister Rishi Sunak at the close of the two-day event.

During the conversation, Musk remarked that AI was like “a magic genie,” before warning “usually those stories don’t end well.”

The sometimes richest man in the world also warned that AI would eventually do virtually every job, something he apparently believes will cause humans to struggle to find purpose for their lives.

Musk also discussed the existential dangers that he believes AI presents, including the necessity to include a “physical off switch” for AI systems so that we can control the machines.

Sunak, for his part, agreed with Musk’s intimation that Hollywood stories concerning AI, such as The Terminator, appeared to form the basis of both men’s views on the technology. “All these movies with the same plot fundamentally all end with the person turning it off,” quipped Sunak.

Related: NIST establishes AI Safety Institute Consortium in response to Biden executive order

It’s unclear what technology the two men were referring to. Most AI systems created in the past decade would ostensibly be resistant to attempts at “turning it off” via a single physical switch due to the nature of distributed and cloud computing and server technologies.

Suleyman, who was also in attendance at the U.K. AI Summit, later dismissed Musk’s commentary as pedestrian during an interview with the BBC’s Question Time.

Per the interview, Suleyman asserted that:

“This is why we need an impartial, independent assessment of the trajectory of this technology. [Elon Musk is] not an AI scientist. He owns a small AI company. He has many other companies. His expertise is more in space and cars.”

Suleyman isn’t the first AI expert or CEO to question Musk’s grasp of AI at the scientific level. In 2022, NYU computer science professor and best-selling author Gary Marcus and Vivek Wadhwa, a distinguished fellow at both Carnegie Mellon and Harvard, challenged Musk’s assertion that artificial general intelligence (AGI) would be realized by 2029.

The two experts offered Musk a wager in the amount of $500,000 which would pay off if AGI was realized before the 2029 deadline. To the best of our knowledge, Musk has yet to acknowledge or respond to the proposed wager.

AGI is a nebulous concept with no agreed-upon benchmarks or measurement standards for achieving it. The basic premise of the idea is that, one day, due to currently unknowable technological follow-on effects, AI technology will become capable of performing any task requiring intelligence.

While some so-called experts believe that AGI, or at least sentient AI, may already exist, many other experts working in the field assert that current systems aren’t as intelligent or capable as humans or other animals due to their reliance on training, programming, procedure, and guardrails.

UK AI Safety Summit: Musk likens AI to ‘magic genie,’ says no jobs needed in future

The second day of the U.K. AI summit featured a one-on-one talk between Prime Minister Rishi Sunak and Elon Musk, who discussed the future of the job market, China and AI as a “magic genie.”

The United Kingdom’s global summit on artificial intelligence safety, the AI Safety Summit, concluded on Nov. 2 with a one-on-one chat between U.K. Prime Minister Rishi Sunak and billionaire Elon Musk. 

Musk was one of the many big names to attend the summit, including heads of OpenAI, Meta, Google and its AI division DeepMind, along with leaders from 27 countries. Musk’s nearly hour-long chat with Sunak was one of the main events of the second day.

Their conversation touched on everything from AI risks to China and opened with Elon Musk likening the emerging technology to a “magic genie.”

“It is somewhat of the magic genie problem, where if you have a magic genie that can grant all the wishes, usually those stories don’t end well. Be careful what you wish for.”

Both mentioned these intelligent bots needing a physical “off-switch” and drew parallels to science-fiction movies like The Terminator. “All these movies with the same plot fundamentally all end with the person turning it off,” Sunak said.

Musk commented: 

“It’s both good and bad. One of the challenges in the future will be, how do we find meaning in life if you have a magic genie that can do everything you want?”

This was brought up after governments and AI companies came to an agreement to put new models through official testing before their public release, which Sunak called a “landmark agreement.”

Related: NIST establishes AI Safety Institute Consortium in response to Biden executive order

When asked about AI's impact on the labor market, Musk called it the most “disruptive force in history” and said the technology will be smarter than the smartest human. 

“There will come a point where no job is needed. You can have a job if you want to have a job for personal satisfaction, but the AI will be able to do everything.”

"I don't know if that makes people comfortable or uncomfortable,” Musk concluded.

In addition, Musk commented on China’s inclusion in the summit, saying their presence was “essential.” “If they’re not participants, it’s pointless,” he said. 

"If the United States and the UK and China are aligned on safety, then that's going to be a good thing, because that's where the leadership is generally.”

Over the last year, the U.S. and China have gone head-to-head in the race to develop and deploy the most advanced AI systems.

When Sunak asked Musk what he believes governments should be doing to mitigate risk, Musk responded:

“I generally think that it is good for the government to play a role when public safety is at risk; for the vast majority of software, public safety is not at risk. But when we talk about digital super intelligence, which does pose a risk to the public, then there is a role for the government to play to safeguard the public.”

He said that while there are people in Silicon Valley who believe regulation will crush innovation and slow it down, regulations will “be annoying,” but having what he called a “referee” will be a good thing.

“Government to be a referee to make sure there is sportsmanlike conduct and public safety are addressed because at times I think there is too much optimism about technology.”

Since the rapid emergence of AI into the mainstream, governments worldwide have been rushing to find suitable solutions for regulating the technology. 

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change

UK to invest 300M pounds in 2 AI supercomputers; Harris presses for AI safety

The U.K. says the investments will help its local scientific talent have the tools they need to ensure that the most advanced AI models are up to safety standards.

The United Kingdom announced on Nov. 1 after the conclusion of the first day of its global AI Safety Summit that it will increase funding for two artificial intelligence (AI) supercomputers to 300 million British pounds ($363.57 million).

These supercomputers, also known as the “AI Research Resource,” are intended to support research into creating safer advanced AI models, which was the primary topic of the summit.

In a post on X, U.K. Prime Minister Rishi Sunak commented that, as frontier AI models become more powerful, this investment will “make sure Britain’s scientific talent have the tools they need to make the most advanced models of AI safe.”

The two new supercomputers will give U.K. researchers more than 30 times the capacity of the country’s current largest public AI computing tools. The computers should be up and running by summer 2024.

The investment also covers what will be the U.K.’s fastest computer, Isambard-AI, which will be built by Hewlett Packard Enterprise and equipped with 5,000 advanced Nvidia AI chips.

Related: AI and real-world assets gain prominence in investor discussions

The second machine, called “Dawn,” will be created with Dell and powered by 1,000 AI chips from Intel. In August, it was reported that the U.K. spent $130 million on AI chips.

According to the U.K.’s announcement, Isambard-AI will be able to compute over 200 petaflops, or 200 quadrillion (200,000,000,000,000,000) calculations each second.
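The unit conversion above can be sanity-checked in a couple of lines, assuming the standard SI prefix (1 peta = 10^15) and the U.S. short-scale quadrillion:

```python
# 1 petaflop = 10**15 floating-point operations per second (SI prefix "peta").
PETA = 10**15
QUADRILLION = 10**15  # U.S. short-scale quadrillion

flops = 200 * PETA
# 200 petaflops equals 200 quadrillion operations per second, as stated.
assert flops == 200 * QUADRILLION == 200_000_000_000_000_000
print(f"{flops:.1e} operations per second")  # 2.0e+17
```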

United States Vice President Kamala Harris was also in attendance on the first day of the summit. Prior to this, she and Sunak agreed on the need for “close collaboration on the opportunities and risks posed by frontier AI.”

In her speech, Harris warned of potential “cyberattacks at a scale beyond anything we have seen before to AI-formulated bioweapons that could endanger the lives of millions.”

She said the moment is “urgent” for collective action on the matter.

These remarks from the U.S. vice president came only a few days after the Biden administration released an executive order on AI safety standards it plans to implement.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change

UK AI Safety Summit begins with global leaders in attendance, remarks from China and Musk

The U.K. AI Safety Summit concluded its first day with a common declaration, the U.S. announcing an AI safety institute, China willing to communicate on AI safety and comments from Elon Musk.

The United Kingdom’s global summit on artificial intelligence (AI) safety, the “AI Safety Summit,” began on Nov. 1 and will carry on through Nov. 2 with government officials and leading AI companies from around the world in attendance, including from the United States and China.

U.K. Prime Minister Rishi Sunak is hosting the event, which is taking place nearly 55 miles north of London at Bletchley Park. It comes at the end of a year of rapid advancements in the widespread use and accessibility of AI models following the emergence of OpenAI’s popular AI chatbot ChatGPT.

Who is in attendance?

The AI Safety Summit expects to have around 100 guests in attendance. This includes leaders of many of the world’s prominent AI companies, such as Microsoft President Brad Smith, OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, Meta’s AI chief Yann LeCun, Meta’s president of global affairs Nick Clegg and billionaire Elon Musk.

On a governmental level, leaders from around 27 countries are expected to be in attendance, including U.S. Vice President Kamala Harris, European Commission President Ursula von der Leyen and United Nations Secretary-General António Guterres.

The U.K. also extended the invitation to China, which has been a major competitor to Western governments and companies in AI development. Chinese Vice Minister of Science and Technology Wu Zhaohui will be attending, along with the companies Alibaba and Tencent.

Initial summit proceedings

The two-day summit’s primary aim is to create dialogue and cooperation between its dynamic group of international attendees to shape the future of AI, with a focus on “frontier AI models.” These AI models are defined as highly capable, multipurpose AI models that equal or surpass the capabilities of current models available.

The first day included several roundtable discussions on risks to global safety and integrating frontier AI into society. There was also an “AI for good” discussion on the opportunities presented by AI to transform education.

The 'Bletchley Declaration' and the U.S.’s AI Safety Institute

During the summit, Britain published the “Bletchley Declaration,” an agreement to boost global cooperation on AI safety. The declaration’s signatories included 28 countries, among them the U.S. and China, along with the European Union.

In a separate statement on the declaration, the U.K. government said:

"The Declaration fulfills key summit objectives in establishing shared agreement and responsibility on the risks, opportunities and a forward process for international collaboration on frontier AI safety and research, particularly through greater scientific collaboration.”

Other countries endorsing the statement include Brazil, France, India, Ireland, Japan, Kenya, Saudi Arabia, Nigeria and the United Arab Emirates.

Related: Biden administration issues executive order for new AI safety standards

In addition, U.S. Secretary of Commerce Gina Raimondo said the U.S. plans to create its own AI Safety Institute, focusing on the risks of frontier models.

Raimondo said she will “certainly” be calling on many in the audience who are “in academia and the industry” to participate in the initiative. She also suggested a formal partnership with the U.K.’s Safety Institute.

Musk calls for a third-party “referee”

Elon Musk, the owner of social media platform X and CEO of both SpaceX and Tesla, has been a prominent voice in the AI space. He has already participated in talks with global regulators on the subject. 

At the U.K.’s AI Safety Summit on Wednesday, he said the summit aimed to create a “third-party referee” to oversee AI development and warn of any concerns.

According to a Reuters report, Musk said:

"What we're really aiming for here is to establish a framework for insight so that there's at least a third-party referee, an independent referee, that can observe what leading AI companies are doing and at least sound the alarm if they have concerns.”

He also said that before there is “oversight” there must be “insight,” in reference to global leaders making any mandates. “I think there's a lot of concern among people in the AI field that the government will sort of jump the gun on rules, before knowing what to do," Musk said.

Related: UN launches international effort to tackle AI governance challenges

China says it's ready to bolster communications

Also in attendance was China’s Vice Minister of Science and Technology, Wu Zhaohui, who emphasized that everyone has the right to develop and deploy AI.

"We uphold the principles of mutual respect, equality and mutual benefits. Countries regardless of their size and scale have equal rights to develop and use AI," he said.

"We call for global cooperation to share AI knowledge and make AI technologies available to the public on open source terms."

He said that China is “willing to enhance our dialogue and communication in AI safety” with “all sides.” These remarks come as China and many Western countries, particularly the U.S., have been racing to create the most advanced technology on the market. 

The summit will continue for its final day on Nov. 2 with remarks from the U.K. Prime Minister and U.K. Technology Secretary Michelle Donelan.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change
