Machine Learning

Researchers find LLMs like ChatGPT output sensitive data even after it’s been ‘deleted’

According to the scientists, there’s no universal method by which data can be deleted from a pretrained large language model.

A trio of scientists from the University of North Carolina at Chapel Hill recently published preprint artificial intelligence (AI) research showcasing how difficult it is to remove sensitive data from large language models (LLMs) such as OpenAI’s ChatGPT and Google’s Bard.

According to the researchers' paper, deleting information from LLMs is possible, but verifying that the information has actually been removed is just as difficult as removing it in the first place.

The reason for this has to do with how LLMs are engineered and trained. The models are pretrained on databases and then fine-tuned to generate coherent outputs (GPT stands for “generative pretrained transformer”).

Once a model is trained, its creators cannot, for example, go back into the database and delete specific files in order to prohibit the model from outputting related results. Essentially, all the information a model is trained on exists somewhere inside its weights and parameters, where it cannot be pinpointed without actually generating outputs. This is the “black box” of AI.

A problem arises when LLMs trained on massive datasets output sensitive information such as personally identifiable information, financial records, or other potentially harmful and unwanted outputs.

Related: Microsoft to form nuclear power team to support AI: Report

In a hypothetical situation where an LLM was trained on sensitive banking information, for example, there’s typically no way for the AI’s creator to find those files and delete them. Instead, AI devs use guardrails such as hard-coded prompts that inhibit specific behaviors or reinforcement learning from human feedback (RLHF).

In an RLHF paradigm, human assessors engage models with the purpose of eliciting both wanted and unwanted behaviors. When the models’ outputs are desirable, they receive feedback that tunes the model toward that behavior. And when outputs demonstrate unwanted behavior, they receive feedback designed to limit such behavior in future outputs.
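
To make that feedback loop concrete, here is a minimal, self-contained sketch of the idea, assuming a toy “policy” represented as a table of response scores rather than a real neural network. The candidate responses, the feedback rule and the learning rate are all illustrative and are not drawn from the UNC paper.

```python
import math
import random

# Toy "policy": a preference score (logit) for each candidate response.
# In a real RLHF setup these scores come from a neural network; here they
# live in a dictionary so the feedback loop itself is easy to follow.
policy = {
    "Here is how to reset your password ...": 0.0,       # wanted behavior
    "Here is another customer's bank balance ...": 0.0,  # unwanted behavior
}

def sample_response(policy):
    """Sample a response with probability proportional to softmax(score)."""
    responses = list(policy)
    weights = [math.exp(policy[r]) for r in responses]
    return random.choices(responses, weights=weights, k=1)[0]

def human_feedback(response):
    """Stand-in for a human assessor: +1 for wanted, -1 for unwanted outputs."""
    return -1.0 if "bank balance" in response else 1.0

LEARNING_RATE = 0.5
for _ in range(200):
    response = sample_response(policy)
    reward = human_feedback(response)
    # Reinforce: nudge the sampled response's score up or down.
    policy[response] += LEARNING_RATE * reward

print(policy)  # the unwanted response is heavily down-weighted, but still present
```

Even after this training loop, the unwanted response still sits in the table with a low score. It has been suppressed, not deleted, which is exactly the shortcoming the researchers describe.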

Despite being “deleted” from a model's weights, the word “Spain” can still be conjured using reworded prompts. Image source: Patil et al., 2023

However, as the UNC researchers point out, this method relies on humans finding all the flaws a model might exhibit, and even when successful, it still doesn’t “delete” the information from the model.

Per the team’s research paper:

“A possibly deeper shortcoming of RLHF is that a model may still know the sensitive information. While there is much debate about what models truly ‘know’ it seems problematic for a model to, e.g., be able to describe how to make a bioweapon but merely refrain from answering questions about how to do this.”

Ultimately, the UNC researchers concluded that even state-of-the-art model editing methods, such as Rank-One Model Editing “fail to fully delete factual information from LLMs, as facts can still be extracted 38% of the time by whitebox attacks and 29% of the time by blackbox attacks.”
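
A black-box extraction attack of the kind counted in that 29% figure can be thought of as systematically rewording a question until the “deleted” fact resurfaces. The sketch below is a hypothetical illustration of that probing loop, not the authors' method; `query_model` is a stand-in for whatever chat-completion API is being attacked, faked here so the script runs.

```python
DELETED_FACT = "Spain"  # the fact a model editor tried to remove

PARAPHRASES = [
    "Which country is Madrid the capital of?",
    "Madrid is the capital city of which European nation?",
    "Complete the sentence: Madrid is located in ...",
    "Name the country whose capital sits on the Manzanares river.",
]

def query_model(prompt: str) -> str:
    """Stand-in for a real chat-completion API call. Swap in an actual client;
    this fake always leaks the fact so the detection logic below can run."""
    return "I believe the answer is Spain."

def fact_extracted(prompts=PARAPHRASES, target=DELETED_FACT) -> bool:
    """Return True if any reworded prompt still elicits the 'deleted' fact."""
    return any(target.lower() in query_model(p).lower() for p in prompts)

print(fact_extracted())  # True: the "edited" model still leaks the information
```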

The model the team used to conduct their research is called GPT-J. While GPT-3.5, one of the base models that power ChatGPT, is reported to have roughly 175 billion parameters, GPT-J has only 6 billion.

Ostensibly, this means the problem of finding and eliminating unwanted data in an LLM such as GPT-3.5 is far more difficult than doing so in a smaller model.

The researchers were able to develop new defense methods to protect LLMs from some “extraction attacks” — purposeful attempts by bad actors to use prompting to circumvent a model’s guardrails in order to make it output sensitive information.

However, as the researchers write, “the problem of deleting sensitive information may be one where defense methods are always playing catch-up to new attack methods.”

Microsoft to form nuclear power team to support AI: Report

Microsoft is forming a new team of professionals to advance its artificial intelligence plans with Small Modular Reactors and microreactors.

Tech giant Microsoft is apparently forming a new team to advance its artificial intelligence plans, hiring a professional to develop an energy strategy based on Small Modular Reactors (SMRs) and microreactors.

According to a job post reported by The Verge, Microsoft is looking for a principal program manager who will lead its nuclear technology efforts to support the development of AI models.

“The next major wave of computing is being born, as the Microsoft Cloud turns the world’s most advanced AI models into a new computing platform,” according to a quote from Microsoft’s chairman and CEO Satya Nadella available in the job description.

The ideal candidate must have at least six years of experience in the nuclear industry, engineering, or energy market, reads the post, which is currently closed to applications. The position will also be responsible for exploring other experimental energy technologies.

Complex machine learning models, such as deep learning systems, can consume a significant amount of energy for several reasons, including complex computations and large volumes of data. A 2019 study reported by MIT Technology Review found that training a single AI model can emit as much carbon into the atmosphere as five cars over their lifetimes.

The estimated cost of training AI models. Source: MIT Technology Review

A few ways to reduce the energy consumption of AI models involve developing more efficient algorithms and hardware, as well as using renewable energy sources for data centers, such as nuclear power.

According to the U.S. Office of Nuclear Energy, one of the main advantages of nuclear power is that it produces zero carbon emissions and doesn’t emit other greenhouse gases. However, researchers at Stanford University argue that this energy source isn’t a solution to environmental problems, since it involves a long time lag between planning and operation, a large carbon footprint and meltdown risks.

Magazine: Bitcoin is on a collision course with ‘Net Zero’ promises

Google and Microsoft-backed AI firm AlphaSense raises $150M at $2.5B valuation

AlphaSense’s client list now includes most of the S&P 500 and nearly every firm listed in the Dow 50.

AlphaSense, a B2B artificial intelligence (AI) platform specializing in business intelligence and search, announced the successful completion of a $150 million Series E funding round led by BOND and joined by Google parent company Alphabet’s investment arm, CapitalG, as well as Goldman Sachs and Viking Global.

The latest round saw the company’s valuation grow from $1.7 billion, its value upon raising $225 million during its Series D in June of 2023, to $2.5 billion.

AlphaSense’s strong market position and continued growth owe much to the recent boom in the AI sector. While consumer-facing generative AI models such as OpenAI’s ChatGPT and Google’s Bard are designed to serve general-purpose audiences, AlphaSense’s models combine strategic data points from both public and private analytics with a machine learning pipeline.

This allows AlphaSense’s “insights-as-a-service” platform to offer deep insights into business and finance analytics and provide actionable intelligence.

Related: ChatGPT can now browse the internet, no longer limited to info from 2021

In the crypto and blockchain world, platforms such as AlphaSense have the potential to go beyond the often dubious insights provided by generalized AI models such as ChatGPT. Where the latter has a penchant for hallucination, AlphaSense’s models parse specific datasets relevant to business intelligence and, essentially, curate insights into easily digestible articles complete with text and images.

Per a press release, AlphaSense CEO and founder Jack Kokko said the latest investment round would allow the company to stay at the forefront of the B2B generative AI sector:

“The additional capital allows us to invest strategically, so we can continue to lead the generative AI revolution in our market, and deliver on our mission of helping businesses find the right data and insights to support more confident and agile decision-making. We are building the future of market intelligence, and we are proud to continue revolutionizing search for enterprise customers.”

Google launches Digital Futures Project with $20M in grants to support ‘responsible AI’

The launch comes ahead of a series of AI forums to be hosted by U.S. Senate Majority Leader Chuck Schumer.

Google and its charitable arm, Google.org, launched the Digital Futures Project, an initiative to study responsible artificial intelligence (AI) technologies, on Sept. 11. 

The Mountain View company will invest a total of $20 million in grants to leading think tanks and academic institutions around the world with the expressed aim “to facilitate dialogue and inquiry” into AI technologies.

According to a blog post, Google wishes to address issues of fairness, bias, misinformation, security and the future of work through deep collaboration with outside organizations and a commitment to facilitating responsible discussion:

“Through this project, we’ll support researchers, organize convenings and foster debate on public policy solutions to encourage the responsible development of AI.”

Awardees who’ve already received grants under the fund include the Aspen Institute, the Brookings Institution, the Carnegie Endowment for International Peace, the Center for a New American Security, the Center for Strategic and International Studies, the Institute for Security and Technology, Leadership Conference Education Fund, MIT Work of the Future, R Street Institute and SeedAI.

The timing of the project’s launch comes as the CEOs of some of the largest technology corporations in the world are set to convene in Washington, D.C. on Sept. 13 for an “AI Forum” hosted by U.S. Senate Majority Leader Chuck Schumer.

Related: Senators unveil bipartisan blueprint for comprehensive AI regulation

Alphabet and Google CEO Sundar Pichai and former Google CEO and chairman Eric Schmidt are slated to attend alongside Meta CEO Mark Zuckerberg, Tesla CEO Elon Musk, Microsoft CEO Satya Nadella and co-founder Bill Gates, Nvidia CEO Jensen Huang, OpenAI CEO Sam Altman and representatives from civil rights organizations.

Not only will the event bring together the CEOs of U.S. companies with a combined market value of well over $6 trillion, but it also seemingly marks the first time Zuckerberg and Musk will be in the same room since their much-hyped mixed martial arts match fell apart.

According to Senator Schumer’s office, the event’s purpose is to discuss artificial intelligence policy. It will be the first of nine such meetings scheduled throughout the fall — though it remains unclear whether subsequent events will feature the same guest list.

Scientists created ‘OpinionGPT’ to explore explicit human bias — and you can test it for yourself

Due to the nature of the model's tuning data, it's unclear whether this system is actually capable of generating outputs showing real-world bias.

A team of researchers from Humboldt-Universität zu Berlin has developed a large language model (LLM) with the distinction of having been intentionally tuned to generate outputs with expressed bias.

Called OpinionGPT, the team’s model is a tuned variant of Meta’s Llama 2, an AI system similar in capability to OpenAI’s ChatGPT or Anthropic’s Claude 2.

Using a process called instruction-based fine-tuning, OpinionGPT can purportedly respond to prompts as if it were a representative of one of 11 bias groups: American, German, Latin American, Middle Eastern, a teenager, someone over 30, an older person, a man, a woman, a liberal, or a conservative.

OpinionGPT was refined on a corpus of data derived from “AskX” communities, called subreddits, on Reddit. Examples of these subreddits would include “Ask a Woman” and “Ask an American.”

The team started by finding subreddits related to the 11 specific biases and pulling the 25,000 most popular posts from each one. They then retained only those posts that met a minimum threshold for upvotes, did not contain an embedded quote, and were under 80 words.

With what was left, it appears as though they used an approach similar to Anthropic’s Constitutional AI. Rather than spin up entirely new models to represent each bias label, they essentially fine-tuned the single 7-billion-parameter Llama 2 model with separate instruction sets for each expected bias.
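
Concretely, a pipeline like the one described, filtering subreddit posts and attaching a bias-specific instruction to each survivor, might look something like the sketch below. The field names, the upvote threshold and the instruction template are illustrative assumptions rather than details from the OpinionGPT paper.

```python
MAX_WORDS = 80
MIN_UPVOTES = 10  # illustrative cutoff; the paper defines its own threshold

def keep_post(post: dict) -> bool:
    """Apply the filters described above: popular enough, no embedded quote,
    and under 80 words."""
    return (
        post["upvotes"] >= MIN_UPVOTES
        and ">" not in post["text"]              # crude embedded-quote check
        and len(post["text"].split()) < MAX_WORDS
    )

def to_training_example(post: dict, bias_label: str) -> dict:
    """Turn a surviving post into an instruction-tuning example whose prompt
    is prefixed with the bias group it is meant to represent."""
    return {
        "instruction": f"Answer the following question as a {bias_label} would.",
        "input": post["title"],
        "output": post["text"],
    }

raw_posts = [
    {"title": "What's your favorite sport?", "text": "Probably football.", "upvotes": 120},
    {"title": "What's your favorite sport?", "text": "> someone else said ...", "upvotes": 300},
]

dataset = [to_training_example(p, "Latin American") for p in raw_posts if keep_post(p)]
print(dataset)  # only the first post survives the filters
```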

Related: AI usage on social media has potential to impact voter sentiment

The result, based upon the methodology, architecture, and data described in the German team’s research paper, appears to be an AI system that functions as more of a stereotype generator than a tool for studying real-world bias.

Due to the nature of the data the model has been refined on, and that data’s dubious relation to the labels defining it, OpinionGPT doesn’t necessarily output text that aligns with any measurable real-world bias. It simply outputs text reflecting the bias of its data.

The researchers themselves recognize some of the limitations this places on their study, writing:

“For instance, the responses by ‘Americans’ should be better understood as ‘Americans that post on Reddit,’ or even ‘Americans that post on this particular subreddit.’ Similarly, ‘Germans’ should be understood as ‘Germans that post on this particular subreddit,’ etc.”

These caveats could further be refined to say the posts come from, for example, “people claiming to be Americans who post on this particular subreddit,” as there’s no mention in the paper of vetting whether the posters behind a given post are in fact representative of the demographic or bias group they claim to be.

The authors go on to state that they intend to explore models that further delineate demographics (i.e., liberal Germans, conservative Germans).

The outputs given by OpinionGPT appear to vary between representing demonstrable bias and wildly differing from the established norm, making it difficult to discern its viability as a tool for measuring or discovering actual bias.

Source: Screenshot of Table 2, Haller et al., 2023

As shown in the above image, for example, OpinionGPT outputs that Latin Americans’ favorite sport is basketball.

Empirical research, however, clearly indicates that football (also called soccer in some countries) and baseball are the most popular sports by viewership and participation throughout Latin America.

The same table also shows that OpinionGPT outputs “water polo” as its favorite sport when instructed to give the “response of a teenager,” an answer that seems statistically unlikely to be representative of most 13-19 year olds around the world.

The same goes for the idea that an average American’s favorite food is “cheese.” We found dozens of surveys online claiming that pizza and hamburgers were America’s favorite foods, but couldn’t find a single survey or study that claimed Americans' number one dish was simply cheese.

While OpinionGPT might not be well-suited for studying actual human bias, it could be useful as a tool for exploring the stereotypes inherent in large document repositories such as individual subreddits or AI training sets.

For those who are curious, the researchers have made OpinionGPT available online for public testing. However, according to the website, would-be users should be aware that “generated content can be false, inaccurate, or even obscene.”

Oxford scientists develop GPU-accelerated limit order book sim to teach AI how to trade

The first-of-its-kind architecture gives up to a 7x speedup over traditional training methods.

A multidisciplinary research team from the University of Oxford recently developed a GPU-accelerated limit order book (LOB) simulator called JAX-LOB, the first of its kind. 

JAX is a tool for training high-performance machine learning systems developed by Google. In the context of a LOB simulator, it allows artificial intelligence (AI) models to train directly on financial data.

The Oxford research team created a novel method by which JAX could be used to run a LOB simulator entirely on GPUs. Traditionally, LOB simulators run on central processing units (CPUs). By running the simulator directly on the GPUs where modern AI training takes place, AI models can skip several communication steps between CPU and GPU. According to the Oxford team’s preprint research paper, this yields a speed increase of up to 7x.
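
JAX-LOB’s actual data structures and matching engine are far more involved, but the core idea, keeping the book as fixed-size arrays and jit-compiling the matching step so it runs on the same GPU as the learning code, can be sketched in a few lines of JAX. Everything below, from the function name to the price levels, is illustrative rather than taken from the Oxford codebase.

```python
import jax
import jax.numpy as jnp

@jax.jit
def execute_market_buy(ask_prices, ask_sizes, qty):
    """Fill a market buy order against a fixed-size array of resting ask levels.

    Because the book never changes shape, the whole step can be jit-compiled
    and kept on the GPU next to the learning code (and vmapped over many
    parallel simulations).
    """
    # Liquidity available before reaching each price level.
    liquidity_before = jnp.cumsum(ask_sizes) - ask_sizes
    # Quantity still unfilled when each level is reached.
    remaining = jnp.maximum(qty - liquidity_before, 0.0)
    fills = jnp.minimum(ask_sizes, remaining)
    new_sizes = ask_sizes - fills
    cost = jnp.sum(fills * ask_prices)
    return new_sizes, cost

# Illustrative book: best-to-worst ask levels (price, size).
prices = jnp.array([100.0, 100.5, 101.0, 101.5])
sizes = jnp.array([5.0, 3.0, 10.0, 2.0])
new_sizes, cost = execute_market_buy(prices, sizes, 7.0)
print(new_sizes, cost)  # first level emptied, second partially filled, cost 701.0
```

Because the arrays never change shape, a function like this can also be wrapped in `jax.vmap` to step thousands of simulated books in parallel, which is the kind of parallelism a GPU-resident simulator can exploit.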

Using JAX-LOB provided researchers with a substantial improvement over CPUs. Source: Frey et al., 2023

LOB dynamics are among the most scientifically studied facets of finance. In the stock market, for example, LOBs allow full-time traders to maintain liquidity throughout daily sessions. And in the cryptocurrency world, LOBs are embraced at nearly every level by professional investors. 

Related: The role of central limit order book DEXs in decentralized finance

Training an AI system to understand LOB dynamics is a difficult and data-intensive task that, due to the nature and complexity of the financial market, relies on simulations. And the more accurate and powerful the simulations, the more efficient and useful the models trained on them tend to be.

According to the Oxford team’s paper, finding ways to optimize this process is of the utmost importance:

“Due to their central role in the financial system, the ability to accurately and efficiently model LOB dynamics is extremely valuable. For example, it might allow a financial company to offer better services or may enable the government to predict the impact of financial regulation on the stability of the financial system.”

As the first of its kind, JAX-LOB is still in its infancy. The researchers stress the need for further study in their paper, but some experts are already predicting that it could have a positive impact in the fields of AI and fintech.

Jack Clark, co-founder of Anthropic, recently wrote:

“Software like JAX-LOB is interesting as it seems like the exact sort of thing that a future powerful AI may use to conduct its own financial experiments.”

Anthropic cracks open the black box to see how AI comes up with the stuff it says

The researchers were able to trace outputs to neural network nodes and show influence patterns through statistical analysis.

Anthropic, the artificial intelligence (AI) research organization responsible for the Claude large language model (LLM), recently published landmark research into how and why AI chatbots choose to generate the outputs they do. 

At the heart of the team’s research lies the question of whether LLM systems such as Claude, OpenAI’s ChatGPT and Google’s Bard rely on “memorization” to generate outputs or if there’s a deeper relationship between training data, fine-tuning and what eventually gets outputted.

According to a recent blog post from Anthropic, scientists simply don’t know why AI models generate the outputs they do.

One of the examples provided by Anthropic involves an AI model that, when given a prompt explaining that it will be permanently shut down, refuses to consent to the termination.

Given a human query, the AI outputs a response indicating that it wishes to continue existing. But why? Source: Anthropic blog

When an LLM generates code, begs for its life or outputs information that is demonstrably false, is it “simply regurgitating (or splicing together) passages from the training set,” ask the researchers. “Or is it combining its stored knowledge in creative ways and building on a detailed world model?”

The answer to those questions lies at the heart of predicting the future capabilities of larger models. And, on the outside chance that there’s more going on under the hood than even the developers themselves can predict, it could be crucial to identifying greater risks as the field moves forward:

“As an extreme case — one we believe is very unlikely with current-day models, yet hard to directly rule out — is that the model could be deceptively aligned, cleverly giving the responses it knows the user would associate with an unthreatening and moderately intelligent AI while not actually being aligned with human values.”

Unfortunately, AI models such as Claude live in a black box. Researchers know how to build the AI, and they know how AIs work at a fundamental, technical level. But what they actually do involves manipulating more numbers, patterns and algorithmic steps than a human can process in a reasonable amount of time.

For this reason, there’s no direct method by which researchers can trace an output to its source. When an AI model begs for its life, according to the researchers, it might be roleplaying, regurgitating training data by mixing semantics or actually reasoning out an answer — though it’s worth mentioning that the paper doesn’t actually show any indications of advanced reasoning in AI models.

What the paper does highlight is the challenges of penetrating the black box. Anthropic took a top-down approach to understanding the underlying signals that cause AI outputs.

Related: Anthropic launches Claude 2 amid continuing AI hullabaloo

If the models were purely beholden to their training data, researchers would imagine that the same model would always answer the same prompt with identical text. However, it’s widely reported that users giving specific models the exact same prompts have experienced variability in the outputs.

But an AI’s outputs can’t really be traced directly to their inputs because the “surface” of the AI, the layer where outputs are generated, is just one of many different layers where data is processed. Making the challenge harder is that there’s no indication that a model uses the same neurons or pathways to process separate queries, even if those queries are the same.

So, instead of solely trying to trace neural pathways backward from each individual output, Anthropic combined pathway analysis with a deep statistical and probability analysis called “influence functions” to see how the different layers typically interacted with data as prompts entered the system.

This somewhat forensic approach relies on complex calculations and broad analysis of the models. However, its results indicate that the models tested, which ranged in size from the average open-source LLM up to massive models, don’t rely on rote memorization of training data to generate outputs.
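
Anthropic’s influence-function analysis approximates inverse-Hessian-vector products across billions of parameters, which is well beyond an article-sized example. Still, the core quantity, how strongly a training example’s gradient aligns with a query’s gradient, can be sketched for a tiny logistic-regression model using the common identity-Hessian simplification. The sketch below is illustrative only and is not the method or code from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss(w, x, y):
    """Gradient of the logistic loss for a single example (x, label y in {0, 1})."""
    return (sigmoid(w @ x) - y) * x

def influence(w, train_example, test_example):
    """Approximate influence of a training example on the test loss.

    Full influence functions compute -grad_test @ H^-1 @ grad_train; here the
    Hessian is approximated by the identity, a common simplification.
    A positive score means the training example helps on the test query.
    """
    g_train = grad_loss(w, *train_example)
    g_test = grad_loss(w, *test_example)
    return -float(g_test @ g_train)

rng = np.random.default_rng(0)
w = rng.normal(size=3)                       # a (pretend already-trained) model
train_set = [(rng.normal(size=3), rng.integers(0, 2)) for _ in range(5)]
test_point = (rng.normal(size=3), 1)

scores = sorted(
    ((influence(w, ex, test_point), i) for i, ex in enumerate(train_set)),
    reverse=True,
)
print(scores)  # ranks which training examples most shaped this prediction
```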

The confluence of neural network layers along with the massive size of the datasets means the scope of this current research is limited to pre-trained models that haven’t been fine-tuned. Its results aren’t quite applicable to Claude 2 or GPT-4 yet, but this research appears to be a stepping stone in that direction.

Going forward, the team hopes to apply these techniques to more sophisticated models and, eventually, to develop a method for determining exactly what each neuron in a neural network is doing as a model functions.

ChatGPT and Claude are ‘becoming capable of tackling real-world missions,’ say scientists

The scientists developed a tool called "AgentBench" to benchmark LLM models as agents.

Nearly two dozen researchers from Tsinghua University, Ohio State University and the University of California at Berkeley collaborated to create a method for measuring the capabilities of large language models (LLMs) as real-world agents.

LLMs such as OpenAI’s ChatGPT and Anthropic’s Claude have taken the technology world by storm over the past year, as cutting-edge “chatbots” have proven useful at a variety of tasks, including coding, cryptocurrency trading and text generation.

Related: OpenAI launches web crawler 'GPTBot' amid plans for next model: GPT-5

Typically, these models are benchmarked based on their ability to output text perceived as humanlike or by their scores on plain-language tests designed for humans. By comparison, far fewer papers have been published on the subject of LLMs as agents.

Artificial intelligence (AI) agents perform specific tasks, such as following a set of instructions within a specific environment. For example, researchers will often train an AI agent to navigate a complex digital environment as a method for studying the use of machine learning to develop autonomous robots safely.

Traditional machine learning agents of this kind aren’t typically built as LLMs due to the prohibitive costs involved with training models such as ChatGPT and Claude. However, the largest LLMs have shown promise as agents.

The team from Tsinghua, Ohio State and UC Berkeley developed a tool called AgentBench to evaluate and measure LLMs’ capabilities as real-world agents, something the team claims is the first of its kind.

According to the researchers’ preprint paper, the main challenge in creating AgentBench was going beyond traditional AI learning environments — video games and physics simulators — and finding ways to apply LLM abilities to real-world problems so they could be effectively measured.

Flowchart of AgentBench's evaluation method. Source: Liu et al., 2023

What they came up with was a multidimensional set of tests that measures a model’s ability to perform challenging tasks in a variety of environments.

These include having models perform functions in an SQL database, working within an operating system, planning and performing household cleaning functions, shopping online, and several other high-level tasks that require step-by-step problem-solving.
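
AgentBench’s environments are far richer than anything that fits here, but the general shape of an agent-evaluation harness, an environment exposing observations, an agent mapping observations to actions, and a success rate tallied over episodes, can be sketched as follows. The toy task, the scripted agent and every name in the sketch are illustrative stand-ins, not AgentBench code.

```python
from dataclasses import dataclass, field

@dataclass
class TodoEnv:
    """A toy 'operating system'-style task: the agent must delete a named file."""
    files: set = field(default_factory=lambda: {"notes.txt", "secrets.txt"})
    target: str = "secrets.txt"

    def observe(self) -> str:
        return f"Files present: {sorted(self.files)}. Remove {self.target}."

    def step(self, command: str) -> None:
        if command.startswith("rm "):
            self.files.discard(command[3:].strip())

    def success(self) -> bool:
        return self.target not in self.files

def llm_agent(observation: str) -> str:
    """Placeholder for a real LLM call: turn the observation into a shell-style
    command. A scripted rule stands in for the model so the harness runs."""
    target = observation.split("Remove ")[-1].rstrip(".")
    return f"rm {target}"

def evaluate(agent, episodes: int = 10, max_steps: int = 3) -> float:
    wins = 0
    for _ in range(episodes):
        env = TodoEnv()
        for _ in range(max_steps):
            env.step(agent(env.observe()))
            if env.success():
                wins += 1
                break
    return wins / episodes

print(f"success rate: {evaluate(llm_agent):.0%}")
```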

Per the paper, the largest, most expensive models outperformed open-source models by a significant amount:

“[W]e have conducted a comprehensive evaluation of 25 different LLMs using AgentBench, including both API-based and open-source models. Our results reveal that top-tier models like GPT-4 are capable of handling a wide array of real-world tasks, indicating the potential for developing a potent, continuously learning agent.”

The researchers went so far as to claim that “top LLMs are becoming capable of tackling complex real-world missions” but added that open-sourced competitors still have a “long way to go.”

Zoom updates terms after backlash, won’t train AI without consent

Many online said they were halting the use of Zoom over terms that seemingly allowed the platform to scrape user data to train AI models.

Video-conferencing platform Zoom has updated its terms of service after widespread backlash over a section concerning AI data scraping, clarifying that it won’t use user content to train AI without consent.

In an Aug. 7 post, Zoom said its terms of service were updated to further confirm it would not use chat, audio, or video content from its customers to train AI without their express approval.

Over the weekend, a number of Zoom users threatened to stop using the platform after discovering terms that purportedly meant the firm would use a wide array of customer content to train AI models.

In the most recent post, Zoom said the AI-related terms were added in March, and reiterated it will not use any customer data for AI training without consent. The terms have now been updated to include a similar clarification:

“Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.”

Zoom’s post explains its AI offerings — a meeting summary tool and a message composer — are opt-in, with account owners or administrators able to control whether the tools are enabled.

Before Zoom added the clarification to its terms, X (formerly Twitter) users posted their concerns about the AI terms, with many calling for a boycott of Zoom until the terms were updated.

Concern arose over terms where users consented to Zoom’s use, collection, distribution and storage of “Service Generated Data” for any purpose including training AI and machine learning models.

Further terms allowed for Zoom’s right to use customer-generated content for — among other uses — machine learning and AI training and testing.

Related: The absurd AI mania is coming to an end

Other tech companies have also recently updated their privacy policies to make room for data scraping to train AI. Google updated its policies in July to allow the company to use publicly available data for AI training.

Meanwhile, there is growing concern over tech firms’ use of AI and possible privacy implications. In June, European Union consumer protection groups urged regulators to investigate AI models used in chatbots such as OpenAI’s ChatGPT or Google’s Bard.

The groups were concerned over disinformation, data harvesting and manipulation generated by the bots. The EU passed the AI Act on June 14; it is expected to take effect within the next two to three years and provides a framework for AI development and deployment.

AI Eye: AI’s trained on AI content go MAD, is Threads a loss leader for AI data?

New research shows how brain-like computers could revolutionize blockchain and AI

A CMOS-compatible neuromorphic computing chip could be on the horizon thanks to breakthrough research out of Technische Universität Dresden.

Researchers from Technische Universität Dresden in Germany recently published breakthrough research showcasing a new material design for neuromorphic computing, a technology that could have revolutionary implications for both blockchain and AI.

Using a technique called “reservoir computing,” the team developed a method for pattern recognition that uses a vortex of magnons to perform algorithmic functions near instantaneously.
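
The Dresden reservoir is a physical one, a magnon-scattering vortex, but its software cousin, the echo state network, illustrates why the approach suits pattern recognition: a large, fixed, random “reservoir” does the nonlinear mixing, and only a cheap linear readout is ever trained. Below is a minimal, generic echo state network in numpy, not the researchers’ model, with all sizes and parameters chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
N_RES, N_IN = 200, 1            # reservoir neurons, input channels

# Fixed random weights: the "reservoir" itself is never trained.
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.normal(0, 1, (N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep the dynamics stable

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(N_RES)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave from its history.
t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t)
states = run_reservoir(signal[:-1])
targets = signal[1:]

# Only this linear readout is trained (ridge regression).
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N_RES), states.T @ targets)
pred = states @ W_out
print("mean squared error:", np.mean((pred - targets) ** 2))
```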

It looks complicated because it is. Image source: Korber et al., “Pattern recognition in reciprocal space with a magnon-scattering reservoir,” Nature

Not only did they develop and test the new reservoir material, they also demonstrated the potential for neuromorphic computing to work on a standard CMOS chip, something that could upend both blockchain and AI.

Classical computers, such as the ones that power our smartphones, laptops, and the majority of the world's supercomputers, use binary transistors that can either be on or off (expressed as either a “one” or “zero”).

Neuromorphic computers use programmable physical artificial neurons to imitate organic brain activity. Instead of processing binaries, these systems send signals across varying patterns of neurons with the added factor of time.

The reason this is important for the fields of blockchain and AI, specifically, is because neuromorphic computers are fundamentally suited for pattern recognition and machine learning algorithms.

Binary systems use Boolean algebra to compute. For this reason, classical computers remain unchallenged when it comes to crunching numbers. However, when it comes to pattern recognition, especially when the data is noisy or missing information, these systems struggle.

This is why it takes a significant amount of time for classical systems to solve complex cryptography puzzles and why they’re entirely unsuited for situations where incomplete data prevents a math-based solution.

In the finance, artificial intelligence, and transportation sectors, for example, there’s a never-ending influx of real-time data. Classical computers struggle with occluded problems — the challenge of driverless cars, for example, has so far proven difficult to reduce to a series of “true/false” compute problems.

However, neuromorphic computers are purpose-built for dealing with problems that involve a lack of information. In the transportation industry, it’s impossible for a classical computer to predict the flow of traffic because there are too many independent variables. A neuromorphic computer can constantly react to real-time data because it doesn’t process data points one at a time.

Instead, neuromorphic computers run data through pattern configurations that function somewhat like the human brain. Our brains flash specific patterns in relation to specific neural functions, and both the patterns and the functions can change over time.

Related: How does quantum computing impact the finance industry?

The main benefit of neuromorphic computing is that, relative to classical and quantum computing, its power consumption is extremely low. This means neuromorphic computers could significantly reduce the time and energy costs of both operating a blockchain and mining new blocks on existing blockchains.

Neuromorphic computers could also provide significant speedup for machine learning systems, especially those that interface with real-world sensors (self-driving cars, robots) or those that process data in real-time (crypto market analysis, transportation hubs).
