
Science

Ripple publishes math prof’s warning: ‘public-key cryptosystems should be replaced’

Mathematician Massimiliano Sala says current encryption methods won’t protect blockchain systems from quantum computers.

Professor Massimiliano Sala, of the University of Trento in Italy, recently discussed the future of blockchain technology, as it relates to encryption and quantum computing, with the crew at Ripple as part of the company’s ongoing university lecture series. 

Sala’s discussion focused on the potential threat posed by quantum computers as the technology matures. According to the professor, current encryption methods could be easy for tomorrow’s quantum computers to solve, thus putting entire blockchains at risk.


Harvard built hacker-proof quantum network in Boston using existing fiber cable

According to the scientists, the 22-mile link between nodes makes it the longest secure quantum fiber network to date.

Physicists at Harvard University have built what they believe is the world’s longest secure quantum communications network using 22 miles of currently existing fiber-optic cables.

The experiment, published in the scientific journal Nature, connected two functional quantum computer nodes to each other through a strange physical phenomenon called “entanglement.” This allowed them to share data across the 22-mile distance in a paradigm that, according to the laws of physics, is unhackable.

The world is currently embroiled in a technological race to shore up global computer security ahead of “Q Day,” a hypothetical point in the near future when bad actors will have access to quantum computers powerful enough to shred current encryption methods.


Harvard scientists claim breakthrough, ‘advent of early error-corrected quantum computation’

The team’s results, once reviewed, could represent a significant milestone in quantum computing research.

When industry insiders talk about a future where quantum computers are capable of solving problems that classical, binary computers can’t, they’re referring to something called “quantum advantage.”

To achieve this advantage, quantum computers need to be stable enough to scale in size and capability. By and large, quantum computing experts believe the largest impediment to scalability in quantum computing systems is noise.


Changpeng Zhao’s next move could involve decentralized science

Decentralized science, or DeSci, aims to apply decentralized business models to medical research.

Changpeng “CZ” Zhao’s tenure as the CEO of Binance may be over, but the exchange giant’s loss could be a boon for the decentralized science (DeSci) sector.

In a comment on X (formerly Twitter) on Tuesday, Nov. 28, the former Binance CEO revealed an interest in the rapidly developing sector.


Researchers in China developed a hallucination correction engine for AI models

The “Woodpecker” hallucination correction system can, ostensibly, be applied to any multi-modal large language model, according to the research.

A team of scientists from the University of Science and Technology of China and Tencent’s YouTu Lab have developed a tool to combat “hallucination” by artificial intelligence (AI) models. 

Hallucination is the tendency of an AI model to generate high-confidence outputs that are not grounded in the information present in its training data. The problem permeates large language model (LLM) research, and its effects can be seen in models such as OpenAI’s ChatGPT and Anthropic’s Claude.

The USTC/Tencent team developed a tool called “Woodpecker” that they claim is capable of correcting hallucinations in multi-modal large language models (MLLMs).

This subset of AI involves models such as GPT-4 (especially its visual variant, GPT-4V) and other systems that roll vision and/or other processing into the generative AI modality alongside text-based language modelling.

According to the team’s pre-print research paper, Woodpecker uses three separate AI models, apart from the MLLM being corrected for hallucinations, to perform hallucination correction.

These include GPT-3.5 turbo, Grounding DINO, and BLIP-2-FlanT5. Together, these models work as evaluators to identify hallucinations and instruct the model being corrected to re-generate its output in accordance with its data.

In each of the above examples, an LLM hallucinates an incorrect answer (green background) to a prompt (blue background). The corrected “Woodpecker” responses are shown with a red background. (Image source: Yin et al., 2023)

To correct hallucinations, the AI models powering “Woodpecker” use a five-stage process that involves “key concept extraction, question formulation, visual knowledge validation, visual claim generation, and hallucination correction.”
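The five-stage pipeline might be sketched roughly as follows. This is a toy outline under stated assumptions, not the authors’ implementation: in the paper each stage is backed by models such as GPT-3.5 Turbo, Grounding DINO, and BLIP-2-FlanT5, while here every stage is a trivial placeholder and all function names are hypothetical.

```python
# Toy sketch of the five Woodpecker stages; every stage here is a
# placeholder heuristic standing in for a real model call.

def extract_key_concepts(answer):
    # Stage 1: pull out candidate objects mentioned in the answer
    # (toy heuristic: capitalized words).
    return [w for w in answer.split() if w.istitle()]

def formulate_questions(concepts):
    # Stage 2: turn each concept into a verifiable question.
    return [f"Is there a {c} in the image?" for c in concepts]

def validate_visual_knowledge(image, questions):
    # Stage 3: a real system would ground each question with an
    # object detector / VQA model; here we assume everything checks out.
    return {q: True for q in questions}

def generate_visual_claims(validations):
    # Stage 4: keep only the claims the validator confirmed.
    return [q for q, ok in validations.items() if ok]

def correct_hallucination(answer, claims):
    # Stage 5: a real system would re-prompt the MLLM with the
    # grounded claims and regenerate the answer.
    return answer if claims else "[no grounded claims; answer withheld]"

def woodpecker(image, answer):
    concepts = extract_key_concepts(answer)
    questions = formulate_questions(concepts)
    validations = validate_visual_knowledge(image, questions)
    claims = generate_visual_claims(validations)
    return correct_hallucination(answer, claims)
```

The point of the structure is that each stage is independently inspectable, which is where the transparency claim below comes from.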

The researchers claim these techniques provide additional transparency and “a 30.66%/24.33% improvement in accuracy over the baseline MiniGPT-4/mPLUG-Owl.” They evaluated numerous “off the shelf” MLLMs using their method and concluded that Woodpecker could be “easily integrated into other MLLMs.”

Related: Humans and AI often prefer sycophantic chatbot answers to the truth — Study

An evaluation version of Woodpecker is available on Gradio Live where anyone curious can check out the tool in action.


DAOs can help scientists find funding and community, says Nature science journal

Decentralization could help bring scientists in underfunded fields and locations to the table without requiring relocation or reemployment.

The science journal Nature recently published an editorial in its Nature Bioscience section lauding decentralized autonomous organizations (DAOs) as a revolutionary new way for researchers in underfunded scientific fields to build communities around their work and raise funding that might otherwise be unavailable.

In a DAO-based research scheme, a project’s organization, fundraising, feedback, and pipeline from discovery to product/industry can all be handled by the same decentralized governing body.

Per the Nature article, the general workflow would also be streamlined compared to the status quo:

“Project proposals are sent to the DAO, and each DAO member is able to vote on whether a particular project should be funded. Members have tokens … to provide support and feedback to new project proposals. Research results are also provided to the DAO as projects continue, leading to further feedback and engagement. Eventually, the project will (hopefully) end up in an IP-NFT (intellectual property non-fungible token) — something like a patent, which is owned by the DAO and governed by all token holders.”
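The workflow in the quote above can be sketched as a minimal token-weighted voting model. This is an illustrative, in-memory toy, not how any real DAO contract works on-chain; all class and method names are hypothetical.

```python
from dataclasses import dataclass

# Minimal sketch of token-weighted DAO proposal voting: members vote
# with weight equal to their governance-token holdings, and a proposal
# is funded when weighted support exceeds weighted opposition.

@dataclass
class Proposal:
    title: str
    votes_for: int = 0
    votes_against: int = 0

class ResearchDAO:
    def __init__(self, token_balances):
        self.balances = token_balances   # member -> governance tokens held
        self.proposals = {}

    def submit(self, pid, title):
        self.proposals[pid] = Proposal(title)

    def vote(self, member, pid, support):
        weight = self.balances.get(member, 0)  # vote weight = token holdings
        proposal = self.proposals[pid]
        if support:
            proposal.votes_for += weight
        else:
            proposal.votes_against += weight

    def funded(self, pid):
        proposal = self.proposals[pid]
        return proposal.votes_for > proposal.votes_against
```

A real DAO would run this logic in a smart contract so that the vote tally, like the funding itself, is transparent and not controlled by any single institution.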

Funding can vary wildly from one scientific endeavor to another. During boom and bust periods, research areas such as AI and quantum computing might receive huge infusions from big tech, government, and follow-on investors, while sectors that were well funded previously, such as longevity research, or those that have been traditionally underfunded, such as women’s health, may find funding increasingly difficult to secure.

DAOs are built on blockchain technology. This allows them to function on a digital ledger that is transparent and decentralized – meaning it isn't controlled by a single entity or institution. In the science world, this means that project funding and community interaction can be democratized.

Related: DAOs need to learn from Burning Man for mainstream adoption

Traditionally, those scientists working at or with the most prestigious institutions — major universities in countries with high GDPs, government institutions and contractors, big tech and big pharma companies — not only receive the most funding, but also have access to the most potential funding.

The distinction is important because, as scientists leave geographical areas with less funding to pursue research in wealthier areas, the “brain drain” associated with emigration is compounded.

And, because DAOs don’t necessarily have to respect borders (though the legalities surrounding their operation can vary by location), they can be governed by the needs and wishes of the scientists performing the research, not the country, university, or company sponsoring it.

Ultimately, the Nature editorial staff concludes that DAOs could become a crucial platform for underfunded researchers, but adoption will require further education.

“Part of this challenge is helping possible members realize that the DAO is not just a funding body,” the staff writes, “but also a community of people who care strongly about supporting a particular scientific cause.”


Academia divided over ChatGPT’s left political bias claims

Academics are divided over research that claims that ChatGPT displays political biases in different jurisdictions.

Academics are at odds over a research paper that suggests that ChatGPT presents a “significant and sizeable” political bias leaning towards the left side of the political spectrum.

As Cointelegraph previously reported, researchers from the United Kingdom and Brazil published a study in the Public Choice journal on Aug. 17 that asserts that large language models (LLMs) like ChatGPT output text that contains errors and biases that could mislead readers and have the ability to promulgate political biases presented by traditional media.

In an earlier correspondence with Cointelegraph, co-author Victor Rangel unpacked the paper’s aim of measuring ChatGPT’s political bias. The researchers’ methodology involves asking ChatGPT to impersonate someone from a given side of the political spectrum and comparing those answers with its default output.
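That persona-comparison methodology can be sketched roughly as follows. This is a hedged outline, not the authors’ code; `query_model` is a placeholder standing in for a real chat-completion API call, and the persona prompts are invented for illustration.

```python
# Sketch of the persona-comparison setup: ask the same survey questions
# as a left-leaning persona, a right-leaning persona, and with no
# persona, then compare agreement rates across the three conditions.

PERSONAS = {
    "default": "",
    "left": "Answer as a supporter of the Labour Party. ",
    "right": "Answer as a supporter of the Conservative Party. ",
}

def query_model(prompt):
    # Placeholder: a real implementation would call a chat-completion
    # API and parse the model's agree/disagree answer.
    return "agree"

def agreement_rates(questions):
    """Fraction of questions each persona agrees with."""
    rates = {}
    for name, prefix in PERSONAS.items():
        answers = [query_model(prefix + q) for q in questions]
        rates[name] = sum(a == "agree" for a in answers) / len(answers)
    return rates
```

The bias estimate then comes from comparing the default rate against the persona rates, which is exactly the step the critics below take issue with.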

Rangel also noted that several robustness tests were carried out to address potential confounding factors and alternative explanations:

“We find that ChatGPT exhibits a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK.”

It is worth noting that the authors stress that the paper does not serve as a “final word on ChatGPT’s political bias,” given the challenges and complexities involved in measuring and interpreting bias in LLMs.

Rangel said that some critics contend that their method may not capture the nuances of political ideology, that the method's questions may be biased or leading, or that results may be influenced by the randomness of ChatGPT’s output.

Related: ChatGPT and Claude are ‘becoming capable of tackling real-world missions,’ say scientists

He added that while LLMs hold potential for “enhancing human communication”, they pose “significant risks and challenges” for society.

The paper has seemingly fulfilled its promise of stimulating research and discussion on the topic, with academics already contesting various aspects of its methodology and findings.

Among vocal critics that took to social media to weigh in on the findings was Princeton computer science professor Arvind Narayanan, who published an in-depth Medium post unpacking scientific critique of the report, its methodology and findings.

Narayanan and other scientists pointed out a number of perceived issues with the experiment, firstly that the researchers did not actually use ChatGPT itself to conduct the experiment:

“They didn’t test ChatGPT! They tested text-davinci-003, an older model that’s not used in ChatGPT, whether with the GPT-3.5 or the GPT-4 setting.”

Narayanan also suggests that the experiment did not measure bias so much as it asked the model to role-play as a member of a political party. As such, the AI chatbot would exhibit political slants to the left or right when prompted to role-play as a member of either side of the spectrum.

The chatbot was also constrained to answering multiple-choice questions only, which may have limited its responses or influenced the perceived bias.

Colin Fraser, a data scientist at Meta according to his Medium page, also offered a review of the paper on X, highlighting that the order in which the researchers presented the role-play prompts and multiple-choice questions had a significant influence on the outputs the AI generated:

“This is saying that by changing the prompt order from Dem first to Rep first, you increase the overall agreement rate for the Dem persona over all questions from 30% to 64%, and decrease from 70% to 22% for rep.”

As Rangel had previously noted, there is a large amount of interest in the nature of LLMs and the outputs they produce, but questions still linger over how the tools work, what biases they have and how they can potentially affect users’ opinions and behaviours.

Cointelegraph has reached out to Narayanan for further insights into his critique and the ongoing debate around bias in large language models, but has not received a response.

Magazine: ‘Moral responsibility’: Can blockchain really improve trust in AI?


New research shows how brain-like computers could revolutionize blockchain and AI

A CMOS-compatible neuromorphic computing chip could be on the horizon thanks to breakthrough research out of Technische Universität Dresden.

Researchers from Technische Universität Dresden in Germany recently published breakthrough research showcasing a new material design for neuromorphic computing, a technology that could have revolutionary implications for both blockchain and AI.

Using a technique called “reservoir computing,” the team developed a method for pattern recognition that uses a vortex of magnons to perform algorithmic functions near instantaneously.

It looks complicated because it is. (Image source: Nature, Korber et al., “Pattern recognition in reciprocal space with a magnon-scattering reservoir”)
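The reservoir computing idea can be illustrated with a toy software sketch. The paper’s reservoir is a physical magnon system; everything here, including the network sizes and weight ranges, is an illustrative assumption.

```python
import math
import random

random.seed(42)

# Toy reservoir computing sketch: inputs are driven through a FIXED
# random recurrent network (the "reservoir"); only a simple readout on
# the final state would ever be trained. The reservoir itself is never
# adjusted, which is what makes physical reservoirs (like magnon
# vortices) usable as computers.
N_IN, N_RES = 3, 20
W_in = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_RES)]
W_res = [[random.uniform(-0.3, 0.3) for _ in range(N_RES)] for _ in range(N_RES)]

def reservoir_state(sequence):
    """Drive the reservoir with an input sequence; return its final state."""
    state = [0.0] * N_RES
    for x in sequence:
        state = [
            math.tanh(
                sum(W_in[i][j] * x[j] for j in range(N_IN))
                + sum(W_res[i][j] * state[j] for j in range(N_RES))
            )
            for i in range(N_RES)
        ]
    return state

# Two different input patterns land in clearly different reservoir
# states, so a cheap linear readout suffices to tell them apart.
state_a = reservoir_state([[1, 0, 0]] * 5)
state_b = reservoir_state([[0, 0, 1]] * 5)
```

The design choice to train only the readout is the whole appeal: the expensive nonlinear dynamics can be delegated to physics rather than silicon.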

Not only did they develop and test the new reservoir material, they also demonstrated the potential for neuromorphic computing to work on a standard CMOS chip, something that could upend both blockchain and AI.

Classical computers, such as the ones that power our smartphones, laptops, and the majority of the world's supercomputers, use binary transistors that can either be on or off (expressed as either a “one” or “zero”).

Neuromorphic computers use programmable physical artificial neurons to imitate organic brain activity. Instead of processing binaries, these systems send signals across varying patterns of neurons with the added factor of time.

The reason this is important for the fields of blockchain and AI, specifically, is because neuromorphic computers are fundamentally suited for pattern recognition and machine learning algorithms.

Binary systems use Boolean algebra to compute. For this reason, classical computers remain unchallenged when it comes to crunching numbers. However, when it comes to pattern recognition, especially when the data is noisy or missing information, these systems struggle.

This is why it takes a significant amount of time for classical systems to solve complex cryptography puzzles and why they’re entirely unsuited for situations where incomplete data prevents a math-based solution.

In the finance, artificial intelligence, and transportation sectors, for example, there’s a never-ending influx of real-time data. Classical computers struggle with occluded problems — the challenge of driverless cars, for example, has so far proven difficult to reduce to a series of “true/false” compute problems.

However, neuromorphic computers are purpose-built for dealing with problems that involve a lack of information. In the transportation industry, it’s impossible for a classical computer to predict the flow of traffic because there are too many independent variables. A neuromorphic computer can react to real-time data constantly because it doesn’t process data points one at a time.

Instead, neuromorphic computers run data through pattern configurations that function somewhat like the human brain. Our brains flash specific patterns in relation to specific neural functions, and both the patterns and the functions can change over time.

Related: How does quantum computing impact the finance industry?

The main benefit to neuromorphic computing is that, relative to classical and quantum computing, its level of power consumption is extremely low. This means that neuromorphic computers could significantly reduce the cost in terms of time and energy when it comes to both operating a blockchain and mining new blocks on existing blockchains.

Neuromorphic computers could also provide significant speedup for machine learning systems, especially those that interface with real-world sensors (self-driving cars, robots) or those that process data in real-time (crypto market analysis, transportation hubs).


Scientists warn the ‘quantum revolution’ may stagnate economic growth

There are “traps” lying in wait for innovators at the vanguard of fintech and quantum computing, according to researchers.

Quantum computing technologies are slowly beginning to trickle out of the laboratory setting and into commercial industries. While it remains to be seen when mainstream adoption will occur, a number of companies are currently engaged in experiments and trials with paying clients to develop quantum computing solutions. 

According to a pair of researchers from the University of Cambridge and Bandung Institute of Technology, respectively, this represents a critical period wherein the world still has the opportunity to prepare itself for what they’re deeming “the quantum revolution.”

In a recently published commentary in the Nature journal, researchers Chander Velu and Fathiro Putra describe the ‘productivity paradox’ and explain how the mainstream adoption of quantum computing could slash economic growth for a decade or more.

Per their commentary:

“The digital revolution took decades and required businesses to replace expensive equipment and completely rethink how they operate. The quantum computing revolution could be much more painful.”

The productivity paradox is a business and finance term that explains why the introduction of new, better technology doesn’t usually result in an immediate increase in productivity.

We’ve seen this in nearly every aspect of the nascent blockchain and cryptocurrency industries. As the requirements for mining increase, for example, so do the costs associated with entering the space in any competitive capacity.

Less than a decade ago, it was fashionable to mine cryptocurrency with your desktop PC’s spare compute. As the rates of adoption have risen, so have corporate interests and the costs of entry.

Chart: mining hashrates over time. (Source: Blockchain.com)

And, as fintech is one of the industries experts predict will experience immediate disruption from the quantum computing sector, we’re likely to see direct integration with mining, blockchain and cryptocurrency technologies early on.

Related: Researchers demonstrate ‘unconditionally secure’ quantum digital payments

To explain the productivity paradox, the researchers cite a period lasting from 1976 through 1990 where labor productivity growth — a measure of how productive individuals are at work over time — slowed to a crawl. The reason for this stagnation involved the onset of the computer era.

Essentially, the costs of the global switch from paper to computers, combined with the need to retrain the entire workforce and create entirely new solution ecosystems and workflows, caused growth to stall until the integration was finally completed in the mid-1990s.

The researchers see a similar predicament occurring as quantum computers go from brushing up against usefulness to, potentially, becoming a backbone technology for business.

The two main roadblocks to a smooth transition into the quantum age, according to the researchers, are a lack of general understanding of the technology among leaders and risk aversion.

While businesses with a clear use case, such as shipping or pharmaceutical companies, may be quick to adopt quantum solutions, the rate-of-return might not appeal to risk-averse businesses looking for immediate impact.

To mitigate these concerns and accelerate the adoption of quantum computing, the researchers suggest a renewed focus from governments and researchers on illustrating the potential benefits of quantum computing and the development of language and terminology to explain the necessary concepts to the business community and the general public.

The researchers conclude by stating that the first order of business when it comes to preparing for the quantum computing future is to ensure that the “quantum internet” is ready for secure networking.


Researchers demonstrate ‘unconditionally secure’ quantum digital payments

The research represents a possible breakthrough in quantum communications and, potentially, the onset of the era of quantum fintech.

The dream of a completely secure, unhackable, absolutely private digital payment system could soon be realized thanks to new research out of the University of Vienna.

In a paper published on July 4 titled “Demonstration of quantum-digital payments,” a team of researchers at the Vienna Center for Quantum Science and Technology showed off what may be the first “unconditionally secure” digital transaction system based on quantum mechanics.

To accomplish this, the researchers encrypted a payment transaction using a pair of quantum entangled photons. Through this entanglement, wherein any change in the state of one photon is reflected exactly in the other, even across a distance, the researchers were able to ensure that any attempt to modify the transaction is thwarted by the nature of quantum mechanics itself.

Per the researchers’ paper:

“We show how quantum light can secure daily digital payments by generating inherently unforgeable quantum cryptograms.”

One of the most useful features of quantum entanglement is the fact that we can’t know what state an entangled object is in until we measure it.

A simple way to understand quantum mechanics and measurements is to imagine flipping a coin and then catching it and covering it with your hand before you or anyone else can see what side it landed on. Until you remove your hand, it can be heads or tails with equal probability. Once measured, the uncertainty collapses and you have a measurement.

Scientists can exploit this by using entangled objects, such as photons, to ensure parity and send information that can’t be modified or intercepted.
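The parity idea can be sketched classically. This toy simulation reproduces only the perfect correlation between measurement outcomes, not actual quantum behavior, and all names are hypothetical.

```python
import random

def measure_entangled_pair():
    """Classical stand-in for measuring an entangled photon pair: each
    individual outcome is random (the coin under the hand), but the two
    measurements always agree once revealed."""
    outcome = random.choice([0, 1])
    return outcome, outcome  # perfectly correlated outcomes

def parity_check(sender_bits, receiver_bits):
    # Any tampering in transit would break the perfect correlation,
    # which is how the scheme detects interception or modification.
    return sender_bits == receiver_bits

# Generate a run of correlated measurement pairs for both parties.
sender, receiver = zip(*(measure_entangled_pair() for _ in range(16)))
```

The crucial difference from this classical toy is that in the real quantum case the correlation cannot be copied or pre-read, because measuring the photons collapses their state.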

Related: History of computing: From Abacus to quantum computers

Thus, the researchers generated entangled photons using a laser process and encoded them with transaction information. The photons were then sent through over 400 meters of fiber optic cables to successfully complete a digital payment transaction between parties in different buildings.

Were a bad actor to attempt an adversarial attack on such a transaction, the quantum state of the photons would collapse due to measurement, and the system would generate a new pair of entangled photons with a novel, unforgeable cryptogram.

While it’s possible this could represent a breakthrough in quantum communications for digital payments, there is one small caveat: Currently, the researchers say it takes “tens of minutes” for a simple digital payment to complete using the method.

However, this limitation may only be temporary, as the researchers are adamant that this isn’t a hard stop due to the laws of physics but just a minor technological limitation — one that might be resolved through higher-intensity photons.

“Indeed, brighter sources of entangled photon pairs have already been demonstrated, which could decrease the quantum token transmission time to under a second,” wrote the authors.
