
Machine Learning

Is mysterious AI ‘gpt2-chatbot’ OpenAI’s next upgrade in disguise?

A powerful new AI chatbot called “gpt2-chatbot” has appeared on LMSYS Chat, sparking speculation over whether it could be OpenAI's unreleased GPT-5 or a supercharged GPT-2.

The internet is buzzing after a mysterious artificial intelligence (AI) chatbot appeared on a popular website used for testing open large language models (LLMs) with no information or documentation as to its creator. 

Users began to notice the new chatbot “gpt2-chatbot” on April 29, listed on the website “LMSYS Chat,” which allows users to begin chatting with any open, available AI chatbot all in one spot.

The AI community on X has already taken notice, commenting that the model is “really good” and shows advanced reasoning skills, and that it “also gets notoriously challenging AI questions right with a much more impressive tone.”


Robinhood Crypto launches Solana staking with 5% APY for EU users

Vitalik Buterin: AI may surpass humans as the ‘apex species’

“Even Mars may not be safe” if superintelligent AI turns against humanity, warns Ethereum co-founder Vitalik Buterin.

Super-advanced artificial intelligence, left unchecked, has a “serious chance” of surpassing humans to become the next “apex species” of the planet, according to Ethereum co-founder Vitalik Buterin.

But that outcome will boil down to how humans intervene in AI development, he said.

In a Nov. 27 blog post, Buterin, seen by some as a thought leader in the cryptocurrency space, argued AI is “fundamentally different” from other recent inventions — such as social media, contraception, airplanes, guns, the wheel, and the printing press — as AI can create a new type of “mind” that can turn against human interests, adding:

“AI is [...] a new type of mind that is rapidly gaining in intelligence, and it stands a serious chance of overtaking humans' mental faculties and becoming the new apex species on the planet.”

Buterin argued that unlike climate change, a man-made pandemic, or nuclear war, superintelligent AI could potentially end humanity and leave no survivors, particularly if it ends up viewing humans as a threat to its own survival. 

“One way in which AI gone wrong could make the world worse is (almost) the worst possible way: it could literally cause human extinction.”

“Even Mars may not be safe,” Buterin added.

Buterin cited an August 2022 survey of over 4,270 machine learning researchers, who estimated a 5-10% chance that AI kills humanity.

However, while Buterin stressed that claims of this nature are “extreme,” there are also ways for humans to prevail.

Brain interfaces and techno-optimism

Buterin suggested integrating brain-computer interfaces (BCI) to offer humans more control over powerful forms of AI-based computation and cognition.

A BCI is a communication pathway between the brain's electrical activity and an external device, such as a computer or robotic limb.

This would reduce the two-way communication loop between man and machine from seconds to milliseconds, and more importantly, ensure humans retain some degree of “meaningful agency” over the world, Buterin said.

A diagram depicting two possible feedback loops between humans and AI. Source: Vitalik.eth

Related: How AI is changing crypto: Hype vs. reality

Buterin suggested this route would be “safer” as humans could be involved in each decision made by the AI machine.

“We [can] reduce the incentive to offload high-level planning responsibility to the AI itself, and thereby reduce the chance that the AI does something totally unaligned with humanity's values on its own.”

The Ethereum co-founder also suggested “active human intention” to take AI in a direction that benefits humanity, as maximizing profit doesn’t always lead humans down the most desirable path.

Buterin concluded that “we, humans, are the brightest star” in the universe, as we’ve developed technology to expand upon human potential for thousands of years, and hopefully many more to come:

“Two billion years from now, if the Earth or any part of the universe still bears the beauty of Earthly life, it will be human artifices like space travel and geoengineering that will have made it happen.”

Magazine: Real AI use cases in crypto, No. 1: The best money for AI is crypto


‘107,000 GPUs on the waitlist’ — io.net beta launch attracts data centers, GPU clusters

Io.net’s recently developed decentralized physical infrastructure network has moved into its beta phase, allowing GPU computing providers to plug into the platform.

Over 100,000 GPUs from data centers and private clusters are set to plug into a new decentralized physical infrastructure network (DePIN) beta launched by io.net.

As Cointelegraph previously reported, the startup has developed a decentralized network that sources GPU computing power from various geographically diverse data centers, cryptocurrency miners and decentralized storage providers to power machine learning and AI computing.

The company announced the launch of its beta platform during the Solana Breakpoint conference in Amsterdam, which coincided with a newly formed partnership with Render Network.

Tory Green, chief operating officer of io.net, spoke exclusively to Cointelegraph after a keynote speech alongside business development head Angela Yi. The pair outlined the critical differentiators between io.net’s DePIN and the broader cloud and GPU computing market.

Related: Google Cloud broadens Web3 startup program with 11 blockchain firms

Green identified cloud providers like AWS and Azure as entities that own their supplies of GPUs and rent them out. Peer-to-peer GPU aggregators, meanwhile, were created to solve GPU shortages but “quickly ran into the same problems,” the executive explained.

The wider Web2 industry continues to look to tap into GPU computing from underutilized sources. Still, Green contends that none of these existing infrastructure providers cluster GPUs in the same way that io.net founder Ahmad Shadid has pioneered.

“The problem is that they don't really cluster. They're primarily single instance and while they do have a cluster option on their websites, it's likely that a salesperson is going to call up all of their different data centers to see what’s available,” Green adds.

Meanwhile, Web3 firms like Render, Filecoin and Storj offer decentralized services that are not focused on machine learning. This is part of io.net’s potential benefit to Web3: acting as a primer for these services to tap into the machine learning space.

Green points to AI-focused solutions like Akash network, which clusters an average of 8 to 32 GPUs, as well as GenSyn, as the closest service providers in terms of functionality. The latter platform is building its own machine learning compute protocol to provide a peer-to-peer “supercluster” of computing resources.

With an overview of the industry established, Green believes io.net’s solution is novel in its ability to cluster over different geographic locations in minutes. This statement was tested by Yi, who created a cluster of GPUs from different networks and locations during a live demo on stage at Breakpoint.

io.net's user interface allows a user to deploy a cluster of GPUs from different locations and service providers globally. Source: io.net

As for its use of the Solana blockchain to facilitate payments to GPU computing providers, Green and Yi note that the sheer scale of transactions and inferences that io.net will facilitate would not be processable by any other network.

“If you're a generative art platform and you have a user base that's giving you prompts, every single time those inferences are made, [there are] micro-transactions behind it,” Yi explains.

“So now you can imagine just the sheer size and the scale of transactions that are being made there. And so that's why we felt like Solana would be the best partner for us.”

The partnership with Render, an established DePIN network of distributed GPU suppliers, provides computing resources already deployed on its platform to io.net. Render’s network is primarily aimed at sourcing GPU rendering computing at lower costs and faster speeds than centralized cloud solutions.

Yi described the partnership as a win-win situation, with the company looking to tap into io.net’s clustering capabilities to make use of the GPU computing that it has access to but is unable to put to use for rendering applications.

Io.net will carry out a $700,000 incentive program for GPU resource providers, while Render nodes can expand their existing GPU capacity from graphical rendering to AI and machine learning applications. The program is aimed at users with consumer-grade GPUs, categorized as hardware from Nvidia RTX 4090s and under.

As for the wider market, Yi highlights that many data centers worldwide are sitting on significant percentages of underused GPU capacity. A number of these locations have “tens of thousands of top-end GPUs” that are idle:

“They're only utilizing 12 to 18% of their GPU capacity and they didn't really have a way to leverage their idle capacity. It's a very inefficient market.”

Io.net’s infrastructure will primarily cater to machine learning engineers and businesses that can tap into a highly modular user interface that allows a user to select how many GPUs they need, location, security parameters and other metrics.

Magazine: Beyond crypto: Zero-knowledge proofs show potential from voting to finance


Universal Music releases Beatles ‘last song’ with help from AI

The final Beatles song “Now and Then” has been released and made possible with a little help from AI to produce John Lennon’s vocal track.

The Beatles have released what they’re calling their “last song” featuring vocal tracks of the late John Lennon developed with the assistance of artificial intelligence (AI) on Nov. 2. 

“Now and Then” was released by Universal Music Group (UMG), one of the world’s leading music companies, and was accompanied by a short documentary detailing how they came to create the track using new technologies.

The video explains how director Peter Jackson, while working on his comprehensive Beatles documentary “Get Back,” developed software that allowed the team to uncouple John’s vocals from his piano part in the original cassette recording of “Now and Then,” a demo Lennon made in the late 1970s.

“[We developed] a technology which allows us to take any soundtrack and split all the different components into separate tracks based on machine learning.”

In a separate interview, the track’s co-producer Giles Martin explained that AI can be taught to recognize voices.

"So if you and I have a conversation and we're in a crowded room and there's a piano playing in the background, we can teach the AI what the sound of your voice, the sound of my voice, and it can extract those voices.”

Paul McCartney, one of the four original Beatles members, said that after they heard of Jackson’s new technology, they decided they had “better send John’s voice to them off the original cassette.”

Thus the new track got off the ground with a little help from AI. John Lennon’s son Sean Ono Lennon commented in the video that his dad “would’ve loved that because he was never shy to experiment with recording technology.”

Related: AI music sending traditional industry into ‘panic,’ says new AI music platform CEO

McCartney echoed the sentiment saying:

“To still be working on Beatles music in 2023... wow. We’re actually messing around with state-of-the-art technology, which is something the Beatles would’ve been very interested in.”

Along with John Lennon, the track features the two remaining members of the Beatles, Paul McCartney and Ringo Starr, and the late George Harrison.

On McCartney’s post, fans have called the new track “beautiful” and a “work of art and perfect way to end the discography.” One fan even said she hopes AI will help make “Beatles live hologram concert on stage” for those who missed opportunities to catch them live.

However, there has already been mumbling from others about the use of AI and the “fake” Beatles song.


In a recent survey of musicians conducted by Pirate music studios, 53% of respondents said they have “concerns about how their audience might perceive music created with the assistance of AI.”

The survey also inquired why musicians were reluctant to use AI, with 58% reporting that “loss of authenticity” was the primary concern.

Magazine: BitCulture: Fine art on Solana, AI music, podcast + book reviews


Jed McCaleb-backed nonprofit will provide easier access to AI computing capacity

Voltage Park will lease access to 24,000 clustered NVIDIA GPUs by the hour or month to help small startups and researchers model machine learning.

Ripple co-founder Jed McCaleb’s nonprofit Navigation Fund is helping to tackle the AI chip shortage by offering leasable computing capacity for large machine learning models. A new cloud was officially launched on Oct. 29 that will be accessible on an hourly, monthly or long-term basis.

An organization called Voltage Park “currently offer[s] bare-metal access for large-scale users that need peak performance” and expects to expand its service by early 2024, according to a statement on its website. It has around 24,000 NVIDIA H100 graphics processing units (GPUs) grouped into interconnected clusters. Voltage Park is a subsidiary of Navigation Fund.

The hardware is worth $500 million. Clusters will be set up in Texas, Virginia and Washington, Voltage Park CEO Eric Park told Reuters. Park joined the organization in July.

Related: Stellar co-founder brands 90% of crypto projects ‘B.S.’

Voltage Park is currently auctioning off contracts with lengths of one-to-three months on 1,560 GPUs. It said in its announcement:

“The market for cutting-edge ML compute is broken. Startups, researchers and even big AI labs are scrambling to buy or rent access to the latest chips for ML training. […] We’re on a mission to make machine learning infrastructure accessible to all.”

The Navigation Fund was founded in 2023 with plans to provide a small number of grants this year and expand its programs in early 2024. It plans to advance a number of causes in addition to “safe AI.”

Billionaire McCaleb created Mt. Gox to trade Magic: The Gathering cards, then repurposed it as a Bitcoin (BTC) exchange and sold it in 2011, three years before its collapse. He went on to become a co-founder of Ripple Labs and, after leaving Ripple on bad terms with the rest of the management, he co-founded the Stellar blockchain. He also created a space station startup in 2022 that has partnered with Elon Musk’s SpaceX.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change


Startup demos upcoming decentralized GPU infrastructure network to OpenAI, Uber

io.net has built a decentralized physical infrastructure network that will source GPU computing power for AI and machine learning.

A project that started out as an institutional-grade quantitative trading system for cryptocurrencies and stocks has transitioned to become a decentralized network sourcing GPU computing power to serve increasing demand for AI and machine learning services.

Io.net has developed a test network that sources GPU computing power from a variety of data centers, cryptocurrency miners and decentralized storage providers. Aggregating GPU computational power is touted to drastically reduce the cost of renting these sources that are becoming increasingly expensive as AI and machine learning advances.

Speaking exclusively to Cointelegraph, CEO and co-founder Ahmad Shadid unpacks details of the network that aims to provide a decentralized platform for renting computing power at a fraction of the cost of centralized alternatives that currently exist.

Related: Future of payments: Visa to invest $100M in generative AI

Shadid explains how the project was conceived in late 2022 during a Solana hackathon. Io.net was developing a quantitative trading platform that relied on GPU computing power for its high-frequency operations, but was hamstrung by the exorbitant costs of renting GPU computing capacity.

The io.net platform will allow GPU computing providers to provide resource to clusters for AI and machine learning needs. Source: io.net

The team unpacks the challenge of renting high-performance GPU hardware in its core documentation, with the price of renting a single NVIDIA A100 averaging around $80 per day per card. Needing more than 50 of these cards to operate 25 days a month would cost more than $100,000.
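The documentation's arithmetic holds up; as a quick sanity check (figures taken directly from the article, with the daily rate assumed to be a flat $80):

```python
# Back-of-the-envelope check of the rental figures quoted above:
# ~$80/day per NVIDIA A100, 50 cards, operating 25 days a month.
a100_daily_rate_usd = 80
num_cards = 50
days_per_month = 25

monthly_cost = a100_daily_rate_usd * num_cards * days_per_month
print(f"Estimated monthly cluster cost: ${monthly_cost:,}")  # prints $100,000
```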

A solution was found in the discovery of Ray.io, an open-source library which OpenAI used to distribute ChatGPT training across over 300,000 CPUs and GPUs. The library streamlined the project’s infrastructure, with its backend developed in the short space of two months.
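Ray's core abstraction is scheduling ordinary Python functions as tasks across a pool of workers and gathering the results. As a rough illustration of that map-style pattern, here is a standard-library stand-in (not Ray itself, which distributes the same pattern across many machines):

```python
# Illustrative stand-in for Ray's task model: farm independent work
# items out to a pool of workers and collect the results. Ray does
# this across thousands of CPUs/GPUs; this sketch uses local threads.
from concurrent.futures import ThreadPoolExecutor

def process_shard(shard_id: int) -> int:
    # Placeholder for a compute-heavy task (e.g., one training shard).
    return shard_id * shard_id

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_shard, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

In Ray proper, `process_shard` would be decorated as a remote task and the scheduler would place each invocation on whichever node in the cluster has free capacity.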

Shadid demoed io.net’s working testnet at the AI-focused Ray Summit in Sept. 2023, highlighting how the project aggregates computing power which is served to GPU consumers as clusters to meet specific AI or machine learning use cases.

“Not only does this model allow io.net to provision GPU compute up to 90% cheaper than incumbent suppliers, but it allows for virtually unlimited computing power.”

The decentralized network is set to leverage Solana’s blockchain to deliver SOL and USD Coin (USDC) payments to machine learning engineers and miners that are renting or providing computing power.

“When ML engineers pay for their clusters, these funds are directed straight to the miners that served in the cluster with their GPUs, with a small network fee being allocated to the io.net protocol.”

The project’s roadmap is set to include the launch of a dual native token system that will feature $IO and $IOSD. The token model will reward miners for executing machine learning workloads and maintaining network uptime while considering the dollar cost of electricity consumption.

“The IO coin will be freely traded in the crypto market and is the gate to access the compute power, while the IOSD token will serve as a stable credit token algorithmically pegged to 1 USD.”

Shadid tells Cointelegraph that io.net fundamentally differs from centralized cloud services like Amazon Web Services (AWS):

“To use an analogy, they’re United Airlines and we’re Kayak; they own planes whereas we help people book flights.”

The founder adds that any business requiring AI computation typically uses third-party providers, since it lacks the GPUs to handle it all in-house. With demand for GPUs estimated to increase tenfold every 18 months, Shadid says there is often insufficient capacity to meet demand, leading to long wait times and high prices.

This is compounded by what he describes as inefficient utilization of data centers that are not optimized for the type of AI and machine learning work that is rapidly increasing:

“There are thousands of independent datacenters in the US alone, with an average utilization rate of 12 - 18%. As a result, bottlenecks are being created, which is having the knock-on effect of driving up prices for GPU compute.”

The upside is that the average cryptocurrency miner stands to gain by renting out their hardware to compete with the likes of AWS. Shadid says the average miner using a 40GB A100 makes $0.52 a day, while AWS sells the same card for AI computing at $59.78 a day.

“Part of the value proposition of io.net is first we allow participants to be exposed to the AI compute market and resell their GPUs and for the ML engineers we are significantly cheaper than AWS.”

Figures shared with Cointelegraph estimate that miners with GPU resources at their disposal could make 1500% more than they would from mining a variety of cryptocurrencies.
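Taken at face value, the per-day figures quoted above imply a gap of roughly two orders of magnitude between mining revenue and the AWS rate for the same card (a quick check using the article's numbers):

```python
# Comparing the per-day figures quoted above for a 40GB A100:
# mining revenue vs. what AWS charges for the same card for AI compute.
mining_revenue_per_day = 0.52   # USD, per the article
aws_rate_per_day = 59.78        # USD, per the article

multiple = aws_rate_per_day / mining_revenue_per_day
print(f"AWS rate is ~{multiple:.0f}x the average daily mining revenue")
```

This works out to roughly 115x, before accounting for network fees or utilization.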

Magazine: Blockchain detectives: Mt. Gox collapse saw birth of Chainalysis


Researchers find LLMs like ChatGPT output sensitive data even after it’s been ‘deleted’

According to the scientists, there’s no universal method by which data can be deleted from a pretrained large language model.

A trio of scientists from the University of North Carolina, Chapel Hill recently published preprint artificial intelligence (AI) research showcasing how difficult it is to remove sensitive data from large language models (LLMs) such as OpenAI’s ChatGPT and Google’s Bard. 

According to the researchers' paper, the task of “deleting” information from LLMs is possible, but it’s just as difficult to verify the information has been removed as it is to actually remove it.

The reason for this has to do with how LLMs are engineered and trained. The models are pretrained on databases and then fine-tuned to generate coherent outputs (GPT stands for “generative pretrained transformer”).

Once a model is trained, its creators cannot, for example, go back into the database and delete specific files in order to prohibit the model from outputting related results. Essentially, all the information a model is trained on exists somewhere inside its weights and parameters, where it cannot be pinpointed without actually generating outputs. This is the “black box” problem of AI.

A problem arises when LLMs trained on massive datasets output sensitive information such as personally identifiable information, financial records, or other potentially harmful and unwanted outputs.

Related: Microsoft to form nuclear power team to support AI: Report

In a hypothetical situation where an LLM was trained on sensitive banking information, for example, there’s typically no way for the AI’s creator to find those files and delete them. Instead, AI devs use guardrails such as hard-coded prompts that inhibit specific behaviors or reinforcement learning from human feedback (RLHF).

In an RLHF paradigm, human assessors engage models with the purpose of eliciting both wanted and unwanted behaviors. When the models’ outputs are desirable, they receive feedback that tunes the model toward that behavior. And when outputs demonstrate unwanted behavior, they receive feedback designed to limit such behavior in future outputs.
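The feedback loop described above can be caricatured in a few lines: assessor ratings nudge the model's tendency toward each behavior up or down. This is a toy sketch only, not real RLHF, which trains a separate reward model on human preferences and then optimizes the LLM against it with reinforcement learning:

```python
# Toy sketch of the RLHF feedback loop: assessor ratings (+1 desired,
# -1 undesired) shift the model's tendency toward each behavior.
# Real RLHF learns a reward model and fine-tunes the LLM against it.
tendencies = {"helpful_answer": 0.5, "reveal_sensitive_data": 0.5}
learning_rate = 0.1

def apply_feedback(behavior: str, rating: int) -> None:
    updated = tendencies[behavior] + learning_rate * rating
    tendencies[behavior] = min(1.0, max(0.0, updated))  # clamp to [0, 1]

for _ in range(5):
    apply_feedback("helpful_answer", +1)          # desirable output rated up
    apply_feedback("reveal_sensitive_data", -1)   # unwanted output rated down

print(tendencies)
```

Note that even at a tendency of zero, the sketch still "knows" the behavior exists in its table, which mirrors the researchers' point: suppression is not deletion.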

Despite being “deleted” from a model's weights, the word “Spain” can still be conjured using reworded prompts. Image source: Patil et al., 2023

However, as the UNC researchers point out, this method relies on humans finding all the flaws a model might exhibit, and even when successful, it still doesn’t “delete” the information from the model.

Per the team’s research paper:

“A possibly deeper shortcoming of RLHF is that a model may still know the sensitive information. While there is much debate about what models truly ‘know’ it seems problematic for a model to, e.g., be able to describe how to make a bioweapon but merely refrain from answering questions about how to do this.”

Ultimately, the UNC researchers concluded that even state-of-the-art model editing methods, such as Rank-One Model Editing “fail to fully delete factual information from LLMs, as facts can still be extracted 38% of the time by whitebox attacks and 29% of the time by blackbox attacks.”

The model the team used to conduct their research is called GPT-J. While GPT-3.5, one of the base models that power ChatGPT, was trained with 170 billion parameters, GPT-J has only 6 billion.

Ostensibly, this means the problem of finding and eliminating unwanted data in an LLM such as GPT-3.5 is far more difficult than doing so in a smaller model.

The researchers were able to develop new defense methods to protect LLMs from some “extraction attacks” — purposeful attempts by bad actors to use prompting to circumvent a model’s guardrails in order to make it output sensitive information.

However, as the researchers write, “the problem of deleting sensitive information may be one where defense methods are always playing catch-up to new attack methods.”


Microsoft to form nuclear power team to support AI: Report

Microsoft is forming a new team of professionals to advance its artificial intelligence plans with Small Modular Reactors and microreactors.

Tech giant Microsoft is apparently forming a new team to advance its artificial intelligence plans by hiring a professional to develop an energy strategy based on Small Modular Reactors (SMRs) and microreactor energy.

According to a job post reported by The Verge, Microsoft is looking for a principal program manager who will lead its nuclear technology efforts to support the development of AI models.

“The next major wave of computing is being born, as the Microsoft Cloud turns the world’s most advanced AI models into a new computing platform,” according to a quote from Microsoft’s chairman and CEO Satya Nadella available in the job description.

The ideal candidate must have at least six years of experience in the nuclear industry, engineering, or energy market, reads the post, which is currently closed to applications. The position will also be responsible for exploring other experimental energy technologies.

Complex machine learning models, like deep learning models, can consume significant amounts of energy for several reasons, including complex computations and large volumes of data. A study published in 2019 by the MIT Technology Review found that training a single AI model can emit as much carbon into the atmosphere as five cars over their lifetimes.

The estimated cost of training AI models. Source: MIT Technology Review

A few ways to reduce the energy consumption of AI models involve developing more efficient algorithms and hardware, as well as using renewable energy sources for data centers, such as nuclear power.

According to the U.S. Office of Nuclear Energy, one of the main advantages of nuclear power is that it produces zero carbon emissions and doesn’t emit other greenhouse gases. However, researchers at Stanford University argue that this energy source isn’t a solution to environmental problems, since it has a long lag between planning and operation, a large carbon footprint and meltdown risks.

Magazine: Bitcoin is on a collision course with ‘Net Zero’ promises


Google and Microsoft-backed AI firm AlphaSense raises $150M at $2.5B valuation

AlphaSense’s client list now includes most of the S&P 500 and nearly every firm listed in the Dow 50.

AlphaSense, a B2B artificial intelligence (AI) platform specializing in business intelligence and search, announced the successful completion of a $150 million Series E funding round led by BOND and joined by Google parent company Alphabet’s investment arm, CapitalG, as well as Goldman Sachs and Viking Global.

The latest round saw the company’s valuation grow from $1.7 billion, its value upon raising $225 million during its Series D in June 2023, to $2.5 billion.

AlphaSense’s strong market position and continued growth owe much to the recent boom in the AI sector. While consumer-facing generative AI models such as OpenAI’s ChatGPT and Google’s Bard are designed to serve general-purpose audiences, AlphaSense’s models combine strategic data points from both public and private analytics with a machine learning pipeline.

This allows AlphaSense’s “insights-as-a-service” platform to offer deep insights into business and finance analytics and provide actionable intelligence.

Related: ChatGPT can now browse the internet, no longer limited to info from 2021

In the crypto and blockchain world, platforms such as AlphaSense have the potential to go beyond the often dubious insights provided by generalized AI models such as ChatGPT. Where the latter has a penchant for hallucination, AlphaSense’s models parse specific datasets relevant to business intelligence and, essentially, curate insights into easily digestible articles complete with text and images.

Per a press release, AlphaSense CEO and founder Jack Kokko said the latest investment round would allow the company to stay at the forefront of the B2B generative AI sector:

“The additional capital allows us to invest strategically, so we can continue to lead the generative AI revolution in our market, and deliver on our mission of helping businesses find the right data and insights to support more confident and agile decision-making. We are building the future of market intelligence, and we are proud to continue revolutionizing search for enterprise customers.”


Google launches Digital Futures Project with $20M in grants to support ‘responsible AI’

The launch comes ahead of a series of AI forums to be hosted by U.S. Senate Majority Leader Chuck Schumer.

Google and its charitable arm, Google.org, launched the Digital Futures Project, an initiative to study responsible artificial intelligence (AI) technologies, on Sept. 11. 

The Mountain View company will invest a total of $20 million in grants to leading think tanks and academic institutions around the world with the express aim “to facilitate dialogue and inquiry” into AI technologies.

According to a blog post, Google wishes to address issues of fairness, bias, misinformation, security and the future of work through deep collaboration with outside organizations and a commitment to facilitating responsible discussion:

“Through this project, we’ll support researchers, organize convenings and foster debate on public policy solutions to encourage the responsible development of AI.”

Awardees who’ve already received grants under the fund include the Aspen Institute, the Brookings Institution, the Carnegie Endowment for International Peace, the Center for a New American Security, the Center for Strategic and International Studies, the Institute for Security and Technology, Leadership Conference Education Fund, MIT Work of the Future, R Street Institute and SeedAI.

The timing of the project’s launch comes as the CEOs of some of the largest technology corporations in the world are set to convene in Washington, D.C. on Sept. 13 for an “AI Forum” hosted by U.S. Senate Majority Leader Chuck Schumer.

Related: Senators unveil bipartisan blueprint for comprehensive AI regulation

Alphabet and Google CEO Sundar Pichai and former Google CEO and chairman Eric Schmidt are slated to attend alongside Meta CEO Mark Zuckerberg, Tesla CEO Elon Musk, Microsoft CEO Satya Nadella and co-founder Bill Gates, Nvidia CEO Jensen Huang, OpenAI CEO Sam Altman and representatives from civil rights organizations.

Not only will the event bring together the CEOs of U.S. companies worth a combined total market value of well over $6 trillion, but it also seemingly marks the first time Zuckerberg and Musk will be in the same room together since their much-hyped mixed martial arts match fell apart.

According to Senator Schumer’s office, the event’s purpose is to discuss artificial intelligence policy. It will be the first of nine such meetings scheduled throughout the fall, though it remains unclear whether subsequent events will feature the same guest list.
