
Machine Learning

AI21 Labs debuts anti-hallucination feature for GPT chatbots

Contextual Answers is designed for enterprise but could have far-reaching implications for the generative AI sector.

AI21 Labs recently launched “Contextual Answers,” a question-answering engine for large language models (LLMs). 

When connected to an LLM, the new engine allows users to upload their own data libraries in order to restrict the model’s outputs to specific information.

The launch of ChatGPT and similar artificial intelligence (AI) products has been paradigm-shifting for the AI industry, but a lack of trustworthiness makes adoption a difficult prospect for many businesses.

According to research, employees spend nearly half of their workdays searching for information. This presents a huge opportunity for chatbots capable of performing search functions; however, most chatbots aren’t geared toward enterprise.

AI21 developed Contextual Answers to address the gap between chatbots designed for general use and enterprise-level question-answering services by giving users the ability to pipeline their own data and document libraries.

According to a blog post from AI21, Contextual Answers allows users to steer AI answers without retraining models, thus mitigating some of the biggest impediments to adoption:

“Most businesses struggle to adopt [AI], citing cost, complexity and lack of the models’ specialization in their organizational data, leading to responses that are incorrect, ‘hallucinated’ or inappropriate for the context.”

One of the outstanding challenges related to the development of useful LLMs, such as OpenAI’s ChatGPT or Google’s Bard, is teaching them to express a lack of confidence.

Typically, when a user queries a chatbot, it’ll output a response even if there isn’t enough information in its data set to answer factually. In these cases, rather than express low confidence with an answer such as “I don’t know,” LLMs will often make up information without any factual basis.

Researchers dub these outputs “hallucinations” because the machines generate information that seemingly doesn’t exist in their data sets, like humans who see things that aren’t really there.

According to AI21, Contextual Answers should mitigate the hallucination problem entirely by either outputting information only when it’s relevant to user-provided documentation or outputting nothing at all.
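
AI21 has not published Contextual Answers’ internals, but the behavior it describes maps onto a well-known retrieve-then-answer pattern with an explicit “abstain” path. Below is a minimal, hypothetical Python sketch of that pattern; the placeholder embedding function, the similarity threshold and all names are assumptions for illustration, not AI21’s implementation.

```python
# Minimal sketch of a retrieve-then-answer pattern with an "abstain" path.
# The embedding function is a placeholder; AI21's actual system is not public.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; swap in a real embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def answer(question: str, documents: list[str], threshold: float = 0.35) -> str:
    q = embed(question)
    # Rank the user's own documents by cosine similarity to the question.
    scored = sorted(documents, key=lambda d: float(q @ embed(d)), reverse=True)
    best = scored[0]
    if float(q @ embed(best)) < threshold:
        # Abstain instead of hallucinating when nothing relevant is found.
        return "Answer not in documents."
    # Otherwise, ground the model's answer in the retrieved context only.
    return f"Answer ONLY from this context:\n{best}\n\nQ: {question}"
```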

In sectors where accuracy is more important than automation, such as finance and law, the onset of generative pretrained transformer (GPT) systems has had varying results.

Experts continue to recommend caution when using GPT systems in finance due to their tendency to hallucinate or conflate information, even when connected to the internet and capable of linking to sources. And in the legal sector, a lawyer now faces fines and sanctions after relying on outputs generated by ChatGPT during a case.

By front-loading AI systems with relevant data and intervening before the system can hallucinate non-factual information, AI21 appears to have demonstrated a mitigation for the hallucination problem.

This could result in mass adoption, especially in the fintech arena, where traditional financial institutions have been reluctant to embrace GPT tech, and the cryptocurrency and blockchain communities have had mixed success at best employing chatbots.

Related: OpenAI launches ‘custom instructions’ for ChatGPT so users don’t have to repeat themselves in every prompt


Apple has its own GPT AI system but no stated plans for public release: Report

The Cupertino, California-based company reportedly developed an internal GPT system on Google infrastructure for employees to tinker with.

Apple’s reportedly working on its own generative pre-trained transformer (GPT) artificial intelligence (AI) model. However, there’s no indication that the company has any plans to launch it to the public.

Per a July 19 report from Bloomberg, Apple’s internal GPT system is called “Ajax.” It’s purportedly similar to OpenAI’s ChatGPT and Google’s Bard.

Apple has a longstanding reputation for developing its products inside a walled garden. While “Ajax,” which Bloomberg reports some engineers have referred to as “Apple GPT,” could eventually turn into a consumer-facing product, its current infrastructure could pose a problem.

Apple codenamed its GPT system “Ajax” because it was developed on top of JAX, Google’s machine learning framework. It’s also reportedly running on Google Cloud, which could limit Apple’s ability to affordably scale Ajax beyond internal testing.

Google’s Bard AI system is one of the primary competitors in the consumer-facing generative AI space, facing direct opposition from Microsoft and OpenAI in the form of Bing AI and ChatGPT. However, Apple has so far not indicated that it intends to compete in this arena.

Related: Meta and Microsoft launch open-source AI model Llama 2

Apple’s track record demonstrates that the company takes a privacy-focused approach to machine learning technology. For this reason, most of its efforts focus on AI that can run on onboard processors instead of cloud-based services.

Chatbot technology such as ChatGPT typically requires internet connectivity to work. While it’s possible to run a chatbot entirely on-device, such as on the AI chip in an iPhone, the model’s size and capabilities are constrained by the device’s hardware.
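
The hardware constraint is easy to see with rough arithmetic. The sketch below estimates weight-memory for a few hypothetical model sizes and quantization levels; the figures are illustrative assumptions, not Apple specifications.

```python
# Back-of-the-envelope check of whether a language model fits on a phone.
# Parameter counts and bit widths are illustrative assumptions.
def model_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for params, bits in [(7, 16), (7, 4), (70, 4)]:
    print(f"{params}B params @ {bits}-bit: {model_memory_gb(params, bits):.1f} GB")
# A 7B-parameter model quantized to 4 bits needs ~3.5 GB for weights alone,
# near the limit of a phone with 6-8 GB of RAM; a 70B model does not fit.
```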

However, if Apple were to develop a comparably useful GPT model capable of running entirely on iPhone hardware, it could be a boon to users who value privacy over the conversational features embedded in the larger cloud-based models. Consumers who require privacy by default would stand to benefit the most.

This could also solve or mitigate some of the outstanding problems with GPT-based chatbots. Cryptocurrency trading bots built on GPT tech, for example, currently suffer from hallucinations. In the technical sense, this means that sometimes they make things up when they can’t come up with a factual answer. 

A pretrained chatbot capable of referencing user-tuned data sets and running entirely on an iPhone could eliminate noisy data for trading bots and keep users’ financial data — such as wallet keys, personally identifiable information and transaction records — completely private.

Despite its lack of impact in the chatbot space, the Cupertino, California-based company is one of the most influential players in AI. The AI powering the iPhone’s camera and photo-editing suite remains cutting-edge, and Apple’s research division publishes a steady stream of significant papers in the machine learning space.

A veritable “who’s who” of AI luminaries and renowned experts have also recently filtered through the company’s secretive AI labs, including the “GANfather,” Ian Goodfellow, who recently left the company to join Google DeepMind, and its current head of AI, John Giannandrea, who previously led Search at Google.


Elon Musk’s new AI startup is as ambitious as it is doomed

The public perception surrounding AI’s abilities is no match for the laws of physics.

Almost nothing is known about Elon Musk’s latest endeavor, an artificial intelligence startup named xAI. But “almost nothing” is still something. And we can glean a lot from what little we do know.

As Cointelegraph recently reported, Musk announced xAI on July 12 in a statement comprising three sentences: “Today we announce the formation of xAI. The goal of xAI is to understand the true nature of the universe. You can meet the team and ask us questions during a Twitter Spaces chat on Friday, July 14th.”

Based on this information we can deduce that xAI exists, it is doomed, and more information about how it will fail will be revealed on Twitter. The reason it is doomed is simple: The laws of physics prevent it.

According to a report from Reuters, Musk’s motivation for xAI is based on a desire to develop safe artificial intelligence (AI). In a recent Twitter Spaces event, he said:

“If it tried to understand the true nature of the universe, that’s actually the best thing that I can come up with from an AI safety standpoint.”

This is a laudable goal, but any attempt to understand the “true” nature of the universe is doomed because there is no ground-truth reference against which we can verify our theories.

It’s not that humans aren’t smart enough to understand the nature of the universe — the problem is that the universe is really, really big, and we’re stuck inside of it.

Heisenberg’s uncertainty principle tells us that certain pairs of physical properties cannot both be measured precisely at the same time. This is why we can’t just measure the distance between Earth and Uranus, wait a year, measure it again and determine the exact rate of the universe’s expansion.

The scientific method requires observation, and, as the anthropic principle teaches us, all observers are limited.

In the case of the observable universe, we’re further limited by the nature of physics. The universe is expanding at such a rapid pace that it prohibits us from measuring anything beyond a certain point, no matter what tools we use.

The universe’s expansion doesn’t just make it bigger. It gives it a distinct, definable “cosmological horizon” beyond which the laws of physics prevent us from measuring. If we were to send out a probe at the maximum speed the laws of physics allow, the speed of light, every part of the universe beyond the farthest point that probe could ever reach would be forever inaccessible.
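
To put a rough number on that horizon, a common first approximation is the Hubble radius, c/H0: the distance beyond which space recedes faster than light. The snippet below computes it with an assumed round value for the Hubble constant; the true event-horizon calculation involves more cosmology, so treat this as illustrative.

```python
# The "cosmological horizon" intuition in one number: beyond the Hubble
# radius c/H0, space recedes faster than light can close the gap.
# H0 ~ 70 km/s/Mpc is an assumed round value.
c = 299_792.458          # speed of light, km/s
H0 = 70.0                # Hubble constant, km/s per megaparsec
hubble_radius_mpc = c / H0
hubble_radius_gly = hubble_radius_mpc * 3.2616e-3  # Mpc -> billion light-years
print(f"Hubble radius ≈ {hubble_radius_gly:.1f} billion light-years")
# ≈ 14 billion light-years: a probe launched today at light speed can never
# reach regions receding beyond what this boundary allows.
```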

This means even a hypothetical superintelligence capable of processing all of the data that’s ever been generated still could not determine any ground truths about the universe.

A slight twist on Schrödinger’s Cat thought experiment, called Wigner’s Friend, demonstrates why this is the case. In the original, Erwin Schrödinger imagined a cat trapped in a box with a vial of poison and a hammer that would strike the vial, and thus kill the cat, upon the completion of a quantum process: the decay of a radioactive atom.

One of the fundamental differences between quantum and classical processes is that quantum processes can be affected by observation. In quantum mechanics, this means that the hypothetical cat is both alive and dead until someone observes it.

Physicist Eugene Wigner was reportedly “irked” by this and decided to put his own spin on the thought experiment to challenge Schrödinger’s assertions. His version added two scientists: one inside the lab who opens the box to observe whether the cat is alive or dead, and another outside who opens the door to the lab to see whether the scientist inside knows whether the cat is alive or dead.

What xAI appears to be proposing is a reversal of Wigner’s thought experiment. The company seemingly wants to remove the cat from the box and replace it with a generative pre-trained transformer (GPT) AI system — i.e., a chatbot like ChatGPT, Bard or Claude 2.

Related: Elon Musk to launch truth-seeking artificial intelligence platform TruthGPT

Instead of asking an observer to determine whether the AI is alive or dead, their plan is to ask the AI to discern ground truths about the lab outside of the box, the world outside of the lab and the universe beyond the cosmological horizon without making any observations.

The reality of what xAI seems to be proposing would mean the development of an oracle: a machine capable of knowing things it doesn’t have evidence for. 

There is no scientific basis for the idea of an oracle; its origins are rooted in mythology and religion. Scientifically speaking, the best we can hope for is that xAI develops a machine capable of parsing all of the data that’s ever been generated.

There’s no conceivable reason to believe this would turn the machine into an oracle, but maybe it’ll allow it to help scientists see something they missed and lead to further insight. Perhaps the secret to cold fusion is lying around in a Reddit data set somewhere that nobody’s managed to use to train a GPT model yet.

But, unless the AI system can defy the laws of physics, any answers it gives us regarding the “true” nature of the universe will have to be taken on faith until confirmed by observations made from beyond the box — and the cosmological horizon.

For these reasons, and many others related to how GPT systems actually interpret queries, there’s no scientifically viable method by which xAI, or any other AI company, can develop a binary machine running classical algorithms capable of observing the truth about our quantum universe.

Tristan Greene is a deputy news editor for Cointelegraph. Aside from writing and researching, he enjoys gaming with his wife and studying military history.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.


Anthropic launches Claude 2 amid continuing AI hullabaloo

The new model demonstrates measurable improvements across numerous categories, including near-instant query response times and the ability to parse inputs up to 100K tokens in size.

Anthropic, an artificial intelligence (AI) and “public benefit” company, launched Claude 2 on July 11, marking another milestone in a year full of seemingly nonstop progress from the burgeoning generative AI sector. 

According to a company blog post, Claude 2 shows improvements across nearly every measurable category. Perhaps most noteworthy among the differences between it and its predecessor is how the researchers discuss their work.

There’s no mention of traditional machine learning benchmarks or computational scores against similar models in the blog post announcing Claude 2. Instead, Anthropic tested Claude and Claude 2 head-to-head on numerous exams meant to represent real-world knowledge, skills and problem-solving.

Claude 2 beat its predecessor across the board on knowledge, coding and other exams and, according to Anthropic, even scores well against human averages:

“When compared to college students applying to graduate school, Claude 2 scores above the 90th percentile on the GRE reading and writing exams, and similarly to the median applicant on quantitative reasoning.”

It is worth noting that many experts believe comparisons between human and AI test takers are inefficacious due to the nature of human cognitive reasoning and the likelihood that a large language model’s training data set contains test information. Essentially, tests designed for humans may not actually “test” an AI’s ability to reason or provide a proper demonstration of actual knowledge or skill.

Along with the launch of Claude 2, Anthropic debuted a beta version of a web-based “Talk to Claude” interface providing general access to the chatbot for users in the United States and the United Kingdom.

Related: How to land a high-paying job as an AI prompt engineer

Cointelegraph conducted brief testing of the new version and, anecdotally speaking, the improvements were immediately noticeable. Claude 2 responded to Cointelegraph prompts near instantly with clear, concise answers.

Chat with Claude 2. Source: Anthropic

According to Anthropic, the new model’s prompt limit is 100,000 tokens, or about the equivalent of 75,000 words. The site’s user interface indicates that users can upload PDF, TXT, CSV and similar documents for parsing; however, this functionality did not work in Cointelegraph's limited testing prior to publishing this article.
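
That tokens-to-words conversion is only a rule of thumb, but it makes for a quick feasibility check. The sketch below estimates whether a document fits in a 100,000-token window using the roughly 0.75 words-per-token ratio implied above; the real count depends on Anthropic’s tokenizer and the text itself.

```python
# Rough check of whether a document fits in a 100,000-token context window,
# using the ~0.75 words-per-token rule of thumb (an approximation only).
WORDS_PER_TOKEN = 0.75
CONTEXT_TOKENS = 100_000

def fits(document: str) -> bool:
    estimated_tokens = len(document.split()) / WORDS_PER_TOKEN
    return estimated_tokens <= CONTEXT_TOKENS

print(fits("word " * 74_000))  # True: ~98,667 estimated tokens
print(fits("word " * 80_000))  # False: ~106,667 estimated tokens
```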



Sarah Silverman sues Meta and OpenAI for copyright violations

Author Sarah Silverman and two others opened a lawsuit against OpenAI and Meta for using copyrighted work without permission to train their AI systems.

American comedian and author Sarah Silverman, along with two other authors, Richard Kadrey and Christopher Golden, has filed lawsuits against Meta Platforms and OpenAI, alleging that the companies’ LLaMA and ChatGPT models infringe their copyrights.

Meta and OpenAI are alleged to have used the plaintiffs’ content for training their respective artificial intelligence (AI) systems without obtaining any prior permission.

According to the court documents against Meta, many of the plaintiffs’ books under copyright appear in the dataset that “Meta has admitted to using to train LLaMA.”

Similarly, the lawsuit against OpenAI alleges that ChatGPT’s ability to generate summaries of the plaintiffs’ works indicates the model was trained on their copyrighted content.

“The summaries get some details wrong. This is expected since a large language model mixes together expressive material derived from many sources. Still, the rest of the summaries are accurate…”

The suits claim that the companies obtained this data from what are known as “shadow libraries,” such as Bibliotik, Library Genesis and Z-Library.

Related: Japanese AI experts raise concern over bots trained on copyrighted material

These shadow libraries are websites that use torrent systems to make books “available in bulk,” says the lawsuit. Such sites are illegal, unlike open-source databases such as Project Gutenberg, which collects books whose copyrights have expired.

“These shadow libraries have long been of interest to the AI-training community because of the large quantity of copyrighted material they host.”

Along with claims of copyright infringement of their own work, the authors filed the complaint on behalf of a class of copyright owners across the United States whose works were also allegedly infringed.

Cointelegraph reached out to OpenAI and Meta for comment on the case, though neither responded prior to publication.

In May, writers across the U.S. who are part of the Writers Guild of America took to the streets in an authorized strike, the first in 15 years, highlighting many issues facing the industry, including the use of AI.

Magazine: Super Mario: Crypto Thief, Sega blockchain game, AI games rights fight — Web3 Gamer


How to land a high-paying job as an AI prompt engineer

Discover the essential steps, skills and strategies needed to land a lucrative career in the rapidly growing field of AI customization and fine-tuning.

The field of AI is rapidly expanding, and one niche area that has gained significant attention is prompt engineering. As the demand for artificial intelligence (AI) applications and customization grows, the need for skilled AI prompt engineers is on the rise. This article will explore the steps and strategies to land a high-paying job as an AI prompt engineer, including the necessary skills, educational background and job market context.

Understanding the role of an AI prompt engineer

An AI prompt engineer specializes in designing effective prompts to guide the behavior and output of AI models. They deeply understand natural language processing (NLP), machine learning and AI systems.

The AI prompt engineer’s primary goal is to fine-tune and customize AI models by crafting precise prompts that align with specific use cases, ensuring desired outputs and enhanced control.
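
As a concrete, hypothetical illustration of that work product, a prompt engineer might ship a reusable template that pins down the model’s role, constraints and output format, leaving slots for runtime input. The wording below is invented for illustration, not a known production prompt.

```python
# A minimal sketch of a structured prompt template a prompt engineer might
# design: fixed role, constraints and output format, with slots for input.
TEMPLATE = """You are a support assistant for an online store.
Answer ONLY using the product catalog below. If the answer is not in the
catalog, reply exactly: "I don't have that information."

Catalog:
{catalog}

Customer question: {question}
Answer (max 2 sentences):"""

def build_prompt(catalog: str, question: str) -> str:
    return TEMPLATE.format(catalog=catalog, question=question)

print(build_prompt("SKU-1: red mug, $9", "How much is the red mug?"))
```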

Developing the necessary skills

To excel as an AI prompt engineer, some skills are crucial:

NLP and language modeling

A strong understanding of transformer-based architectures, language models and NLP approaches is required. Effective prompt engineering also requires an understanding of the pre-training and fine-tuning procedures used by language models like ChatGPT.

Programming and machine learning

Expertise in programming languages like Python and familiarity with frameworks for machine learning, such as TensorFlow or PyTorch, is crucial. Success depends on having a solid understanding of data preprocessing, model training and evaluation.

Related: How to write effective ChatGPT prompts for better results

Collaboration and communication

Prompt engineers frequently work with other teams. Excellent written and verbal communication skills are required to work with stakeholders effectively, explain prompt requirements and comprehend project goals.

Educational background and learning resources

A strong educational foundation is beneficial for pursuing a career as an AI prompt engineer. The knowledge required in fields like NLP, machine learning, and programming can be acquired with a bachelor’s or master’s degree in computer science, data science, or a similar discipline.

Additionally, one can supplement their education and keep up-to-date on the most recent advancements in AI and prompt engineering by using online tutorials, classes, and self-study materials.

Getting practical experience

Getting real-world experience is essential to proving one’s abilities as an AI prompt engineer. Look for projects, internships or research opportunities where one can apply prompt engineering methods.

Starting one’s own prompt engineering projects or contributing to open-source projects can demonstrate these abilities and provide concrete proof of expertise.

Networking and job market context

As an AI prompt engineer, networking is essential for seeking employment prospects. Attend AI conferences, get involved in online forums, go to AI-related events and network with industry experts. Keep abreast of employment listings, AI research facilities, and organizations that focus on NLP and AI customization.

Related: How to use ChatGPT like a pro

Continuous learning and skill enhancement

As AI becomes increasingly ubiquitous, the demand for skilled AI prompt engineers continues to grow. Landing a high-paying job in this field requires a strong foundation in NLP, machine learning, and programming, along with practical experience and networking.

Aspiring prompt engineers can position themselves for success and secure a high-paying job in this exciting and evolving field by continuously enhancing skills, staying connected with the AI community, and demonstrating expertise.


Scientists created a crypto portfolio management AI trained with on-chain data

According to the researchers, CryptoRLPM is the first reinforcement learning-based AI system using on-chain metrics for portfolio management.

A pair of researchers from the University of Tsukuba in Japan recently built an AI-powered cryptocurrency portfolio management system that uses on-chain data for training, the first of its kind, according to the scientists.

Called CryptoRLPM, short for “Cryptocurrency reinforcement learning portfolio manager,” the AI system uses a training technique called “reinforcement learning” to incorporate on-chain data into its model.

Reinforcement learning (RL) is an optimization paradigm wherein an AI system interacts with its environment — in this case, a cryptocurrency portfolio — and updates its training based on reward signals.
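
As a minimal illustration of that loop, the sketch below runs tabular Q-learning on a toy two-asset “portfolio” with made-up reward dynamics. It shows the reward-driven update that RL relies on, not CryptoRLPM’s actual architecture or data.

```python
# Minimal reinforcement learning loop: tabular Q-learning on a toy
# environment with hypothetical reward dynamics (illustration only).
import random

n_states, n_actions = 4, 2            # toy market regimes x {hold, rebalance}
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.1, 0.95, 0.1    # learning rate, discount, exploration

def step(state, action):
    # Hypothetical environment: rebalancing pays off in "volatile" states.
    reward = 1.0 if (state >= 2 and action == 1) else 0.1
    return random.randrange(n_states), reward

state = 0
for _ in range(5_000):
    action = random.randrange(n_actions) if random.random() < eps \
             else max(range(n_actions), key=lambda a: Q[state][a])
    nxt, reward = step(state, action)
    # Core RL update: move Q toward reward + discounted future value.
    Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
    state = nxt

print(Q)  # states 2-3 should come to prefer action 1 (rebalance)
```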

CryptoRLPM applies feedback from RL throughout its architecture. The system is structured into five primary modules that work together to process information and manage structured portfolios.

These modules are a Data Feed Unit, a Data Refinement Unit, a Portfolio Agent Unit, a Live Trading Unit and an Agent Updating Unit.

Screenshot of preprint research. Source: Huang and Tanaka, 2023, “A Scalable Reinforcement Learning-based System Using On-Chain Data for Cryptocurrency Portfolio Management”

Once developed, the scientists tested CryptoRLPM by assigning it three portfolios. The first contained only Bitcoin (BTC) and Storj (STORJ), the second kept BTC and STORJ while adding Bluzelle (BLZ), and the third kept all three alongside Chainlink (LINK).

The experiments were conducted from October 2020 to September 2022 in three distinct phases: training, validation and backtesting.

The researchers measured the success of CryptoRLPM against a baseline evaluation of standard market performance through three metrics: “accumulated rate of return” (AAR), “daily rate of return” (DRR), and “Sortino ratio” (SR).

AAR and DRR are at-a-glance measures of how much an asset has lost or gained in a given time period, while the SR measures an asset’s risk-adjusted return.
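
For the SR, a common formulation divides mean excess return by downside deviation, the volatility of negative returns only. The sketch below implements that generic formula; the paper’s exact target rate and any annualization are not specified here, so treat those details as assumptions.

```python
# Sortino ratio from daily returns: mean excess return over downside
# deviation (volatility of returns below the target only).
import numpy as np

def sortino(returns: np.ndarray, target: float = 0.0) -> float:
    excess = returns - target
    downside = np.minimum(excess, 0.0)           # keep only shortfalls
    downside_dev = np.sqrt(np.mean(downside ** 2))
    return float(np.mean(excess) / downside_dev)

daily = np.array([0.01, -0.02, 0.015, 0.005, -0.01, 0.02])
print(f"Sortino ratio: {sortino(daily):.3f}")
```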

Screenshot of preprint research. Source: Huang and Tanaka, 2023, “A Scalable Reinforcement Learning-based System Using On-Chain Data for Cryptocurrency Portfolio Management”

According to the scientists’ pre-print research paper, CryptoRLPM demonstrates significant improvements over baseline performance:

“Specifically, CryptoRLPM shows at least a 83.14% improvement in ARR, at least a 0.5603% improvement in DRR, and at least a 2.1767 improvement in SR, compared to the baseline Bitcoin.”

Related: DeFi meets AI: Can this synergy be the new focus of tech acquisitions?


What is generative AI?

Generative AI leverages large data sets and sophisticated models to mimic human creativity and produce new images, music, text and more.

Generative artificial intelligence (AI), fueled by advanced algorithms and massive data sets, empowers machines to create original content, revolutionizing fields such as art, music and storytelling. By learning from patterns in data, generative AI models unlock the potential for machines to generate realistic images, compose music and even develop entire virtual worlds, pushing the boundaries of human creativity.

Generative AI, explained

Generative AI is a cutting-edge field that explores machine learning’s potential to replicate human-like creativity and produce original material. It is a subset of artificial intelligence concerned with creating algorithms that can generate new content or replicate the patterns of existing data.

It uses methods like deep learning and neural networks to simulate human creative processes and produce unique results. Generative AI has paved the way for applications ranging from image and audio generation to storytelling and game development by utilizing algorithms and training models on enormous amounts of data.

Both OpenAI’s ChatGPT and Google’s Bard show the capability of generative AI to comprehend and produce human-like writing. They have a variety of uses, including chatbots, content creation, language translation and creative writing. The ideas and methods underlying these models advance the broader field of generative AI and its potential to improve human-machine interaction and artistic expression.

Related: 5 AI tools for translation

This article will explain generative AI, its guiding principles, its effects on businesses and the ethical issues raised by this rapidly developing technology.

Evolution of generative AI

Here’s a summarized evolution of generative AI:

  • 1932: The concept of generative AI emerges with early work on rule-based systems and random number generators, laying the foundation for future developments.
  • 1950s–1960s: Researchers explore early techniques in pattern recognition and generative models, including developing early artificial neural networks.
  • 1980s: The field of artificial intelligence experiences a surge of interest, leading to advancements in generative models, such as the development of probabilistic graphical models.
  • 1990s: Hidden Markov models become widely used in speech recognition and natural language processing tasks, representing an early example of generative modeling.
  • Early 2000s: Bayesian networks and graphical models gain popularity, enabling probabilistic inference and generative modeling in various domains.
  • 2012: Deep learning, specifically deep neural networks, starts gaining attention, revolutionizing the field of generative AI and paving the way for significant advancements.
  • 2014: The introduction of generative adversarial networks (GANs) by Ian Goodfellow propels the field of generative AI forward. GANs demonstrate the ability to generate realistic images and become a fundamental framework for generative modeling.
  • 2015–2017: Researchers refine and improve GANs, introducing variations such as conditional GANs and deep convolutional GANs, enabling high-quality image synthesis.
  • 2018: StyleGAN, a specific implementation of GANs, allows for fine-grained control over image generation, including factors like style, pose and lighting.
  • 2019–2020: Transformers — originally developed for natural language processing tasks — show promise in generative modeling and become influential in text generation, language translation and summarization.
  • Present: Generative AI continues to advance rapidly, with ongoing research focused on improving model capabilities, addressing ethical concerns and exploring cross-domain generative models capable of producing multimodal content.

How does generative AI work?

By running algorithms and training models on enormous volumes of data, generative AI creates new material that closely reflects the patterns and traits of the training data. The procedure involves several crucial elements and steps:

Data collection

The first stage is to compile a sizable data set representing the subject matter or category of content the generative AI model is intended to produce. For instance, if the objective were to create realistic representations of animals, a data set of tagged animal photos would be gathered.

Model architecture

The next step is to select an appropriate generative model architecture. Popular models include transformers, variational autoencoders (VAEs) and GANs. The architecture of the model dictates how the data will be altered and processed to produce new content.

Training

Using the gathered data set, the model is trained. By modifying its internal parameters, the model learns the underlying patterns and properties of the data during training. Iterative optimization is used during the training process to gradually increase the model’s capacity to produce content that closely resembles the training data.

Generation process

After training, the model can produce new content by sampling from the observed distribution of the training set. For instance, while creating photos, the model might use a random noise vector as input to create a picture that looks like an actual animal.
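
That noise-to-sample step can be shown in a few lines. The PyTorch sketch below builds a toy, untrained GAN-style generator and draws one sample from a random noise vector; a real system would train the network first (as in the training step above), and all shapes here are illustrative.

```python
# Sketch of the "random noise in, sample out" generation step using an
# untrained toy GAN-style generator in PyTorch (illustrative shapes only).
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),       # noise vector -> hidden features
    nn.Linear(256, 28 * 28), nn.Tanh(),  # features -> 28x28 "image" in [-1, 1]
)

z = torch.randn(1, 64)                   # random noise vector as input
image = generator(z).reshape(28, 28)     # one generated sample
print(image.shape)                       # torch.Size([28, 28])
```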

Evaluation and refinement

The generated material is examined to determine its quality and its conformity to the intended attributes. Depending on the application, evaluation metrics and human input may be used to improve the generated output and refine the model. Iterative feedback loops contribute to the improvement of the content’s diversity and quality.

Fine-tuning and transfer learning

Pre-trained models may occasionally serve as a starting point for transfer learning and fine-tuning on specific data sets or tasks. Transfer learning is a strategy that enables models to apply knowledge from one domain to another and perform better with less training data.
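
A minimal PyTorch sketch of that idea: load a pretrained image model, freeze its learned features and attach a new head for an assumed five-class task, so only the head is fine-tuned.

```python
# Transfer-learning sketch: reuse pretrained features, train only a new head.
# The five-class task is an assumption for illustration.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False          # keep the pretrained features fixed
model.fc = nn.Linear(model.fc.in_features, 5)  # new head, trained from scratch

# Only the new head's parameters would be passed to the optimizer.
trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable))  # a tiny fraction of the full model
```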

It’s crucial to remember that the precise operation of generative AI models can change based on the chosen architecture and methods. The fundamental idea is the same, though: the models discover patterns in training data and produce new content based on those discovered patterns.

Applications of generative AI

Generative AI has transformed how we generate and interact with content by finding multiple applications in a variety of industries. Realistic visuals and animations may now be produced in the visual arts thanks to generative AI.

The ability of artists to create complete landscapes, characters and scenarios with astounding depth and complexity has opened up new opportunities for digital art and design. Generative AI algorithms can create unique melodies, harmonies and rhythms, assisting musicians in their creative processes and providing fresh inspiration.

Beyond the creative arts, generative AI has significantly impacted fields like gaming and healthcare. It has been used in healthcare to generate artificial data for medical research, enabling researchers to train models and investigate new treatments without jeopardizing patient privacy. Gamers can experience more immersive gameplay by creating dynamic landscapes and nonplayer characters (NPCs) using generative AI.

Ethical considerations

The development of generative AI has enormous potential, but it also raises significant ethical questions. One major cause for concern is deepfakes: AI-generated content designed to deceive and influence people. Deepfakes have the power to undermine public confidence in visual media and spread false information.

Additionally, generative AI may unintentionally reinforce biases present in the training data. If the data used to train the models is biased, the AI system may produce material that reflects and reinforces those prejudices. This can have serious societal repercussions, such as reinforcing stereotypes or marginalizing particular communities.

Related: What is explainable AI (XAI)?

Researchers and developers must prioritize responsible AI development to address these ethical issues. This entails integrating systems for openness and explainability, carefully selecting and diversifying training data sets, and creating explicit rules for the responsible application of generative AI technologies.


5 AI tools for translation

Explore AI translation tools, their features, benefits and pricing models to find the right solution for your translation needs.

Translation is the process of converting written or spoken content from one language to another while preserving its meaning. By automating and enhancing the translation process, artificial intelligence (AI) has significantly contributed to changing the translation industry.

To evaluate and comprehend the structure, syntax and context of the source language and produce correct translations in the target language, AI-powered translation systems use machine learning algorithms and natural language processing techniques.

Types of AI-powered translation systems

AI-powered translation systems have traditionally been categorized into two main approaches:

Rule-based machine translation (RBMT)

To translate text, RBMT systems use dictionaries and pre-established linguistic rules. Linguists and other experts create these guidelines and dictionaries that specify how to translate words, phrases and grammatical structures.

While RBMT systems can produce accurate translations for some language pairs, they frequently face limitations due to the complexity and diversity of linguistic systems, making them less useful for more complex translations.
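
A toy example makes both the mechanism and the limitation concrete: a hand-written dictionary plus one reordering rule translates the sentences the rules anticipate and fails on everything else. The lexicon and rule below are invented for illustration.

```python
# Toy rule-based translation: a hand-written lexicon plus one reordering rule.
# It handles only what the rules anticipate, illustrating RBMT's limits.
LEXICON = {"the": "le", "cat": "chat", "black": "noir", "sleeps": "dort"}

def rbmt_en_to_fr(sentence: str) -> str:
    words = sentence.lower().rstrip(".").split()
    # Rule: French places the adjective after the noun ("black cat" -> "chat noir").
    for i in range(len(words) - 1):
        if words[i] == "black" and words[i + 1] == "cat":
            words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(LEXICON.get(w, f"<{w}?>") for w in words) + "."

print(rbmt_en_to_fr("The black cat sleeps."))  # "le chat noir dort."
print(rbmt_en_to_fr("The cat dreams."))        # fails: "le chat <dreams?>."
```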

Statistical machine translation (SMT)

SMT systems employ statistical models that have been developed using sizable bilingual corpora. These algorithms analyze the words and phrases in the source and target languages to find patterns and correlations.

SMT systems are able to make educated guesses about the best translation for a particular input by examining enormous volumes of data. SMT systems become more accurate with more training data, although they may have trouble with unusual or rare phrases.

Neural machine translation (NMT)

NMT has recently gained prominence in the translation industry. To produce translations, NMT systems use deep learning methods, notably neural networks. Compared with earlier approaches, these models are better able to represent the context, semantics and complexities of languages. NMT systems have proven to perform better than other technologies and are widely employed in many well-known translation services and applications.
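
For comparison with the rule-based sketch above, a publicly available pretrained NMT model can be run in a few lines with the Hugging Face transformers library (assuming `transformers` and `sentencepiece` are installed); the model choice here is just one example.

```python
# Neural machine translation with a public pretrained model via Hugging Face.
from transformers import pipeline

translator = pipeline("translation_en_to_fr", model="Helsinki-NLP/opus-mt-en-fr")
result = translator("The black cat sleeps.")
print(result[0]["translation_text"])  # e.g., "Le chat noir dort."
```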

Advantages of AI in translation

The use of AI in translation offers several advantages:

  • Speed and efficiency: AI-powered translation systems can process large volumes of text quickly, accelerating the translation process and improving productivity.
  • Consistency: AI ensures consistent translations by adhering to predefined rules and learned patterns, reducing errors and discrepancies.
  • Customization and adaptability: AI models can be fine-tuned and customized for specific domains, terminologies or writing styles, resulting in more accurate and contextually appropriate translations.
  • Continuous improvement: AI systems can learn from user feedback and update their translation models over time, gradually improving translation quality.

AI tools for translation

There are several AI tools available for translation that leverage machine learning and natural language processing techniques. Here are five popular AI tools for translation:

Google Translate

Google Translate is a widely used AI-powered translation tool. To offer translations for different language pairs, it combines rule-based and neural machine translation models. It offers functionalities for text translation, website translation and even speech-to-text and text-to-speech.

Google Translate offers both free and paid versions. The basic translation services, including text translation, website translation and basic speech-to-text features, are accessible to users for free. However, Google also offers a paid service called Google Translate API for developers and businesses with more extensive translation needs. API usage is subject to pricing based on the number of characters translated.

Microsoft Translator

Another capable AI translation tool is Microsoft Translator. It offers translation services for many different languages and makes use of neural machine translation models. It also provides developers with APIs and SDKs to incorporate translation functionality into their applications.

Microsoft Translator offers a tiered pricing model. It has a free tier that allows users to access basic translation services with certain limitations. Microsoft also provides paid plans for higher volume and advanced features. The pricing is typically based on the number of characters translated or the number of API requests made.

DeepL

DeepL is an AI-driven translation tool known for its high-quality translations. It utilizes neural machine translation models and claims to outperform other popular translation tools in terms of accuracy. DeepL supports multiple language pairs and offers a user-friendly interface.

DeepL offers both free and paid versions. The free version of DeepL allows users to access its translation services with certain usage restrictions. DeepL also offers a subscription-based premium plan called DeepL Pro, which provides additional benefits, such as faster translation speeds, unlimited usage and the ability to integrate the service into other applications.

Systran

Systran is a language technology company that provides AI-powered translation solutions. It offers a range of products and services, including neural machine translation engines, translation APIs and specialized industry solutions. Systran focuses on customization and domain-specific translations.

Pricing for Systran’s offerings is typically based on the specific requirements and level of customization desired by the client.

Trados Enterprise

RWS is a global leader in translation and localization services, and it provides various language technology solutions to support translation and multilingual content management. 

One of its language technology offerings is Trados Enterprise (previously RWS Language Cloud). This cloud-based platform is designed to streamline the translation process, enhance collaboration and improve translation quality. It provides a range of features and tools to manage translation projects, such as translation memory, terminology management, project management and linguistic assets.

Trados Enterprise offers different versions tailored to specific needs. The Studio version is priced at $125 per month and provides an industry-leading computer-assisted translation (CAT) tool for professional linguists. The Team version, priced at $185 per user per month, focuses on cloud-based collaboration for translation projects.

The Accelerate version starts at $365 per user per month and offers end-to-end translation management for organizations with custom requirements. RWS also provides a free trial for interested users and encourages potential customers to request a demo to explore their offerings in detail.


Google launches ‘Anti Money Laundering AI’ after successful HSBC trial

Google claims the new tools are far more efficient than traditional rules-based approaches at detecting money laundering at scale.

Google Cloud recently announced the launch of its “Anti Money Laundering AI” (AMLAI) service after a successful trial with London-based financial services group HSBC.

AMLAI uses machine learning to create risk profiles, monitor transactions, and analyze data. Per a blog post from Google Cloud:

“AI transaction monitoring replaces the manually defined, rules-based approach and harnesses the power of financial institutions’ own data to train advanced machine learning (ML) models to provide a comprehensive view of risk scores.”

In practice, Google Cloud claims its trial partner, HSBC, saw two to four times as many positive alerts and a 60% reduction in false positives.
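
Google has not published AMLAI’s internals, but the rules-versus-ML contrast it describes can be sketched generically: a single threshold rule versus a classifier trained on labeled transactions that emits risk scores. Everything below (features, labels, data) is synthetic and illustrative.

```python
# Generic sketch of rules-based vs. ML transaction monitoring.
# Synthetic data only; Google's actual AMLAI pipeline is not public.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic transactions: [amount, tx_per_day, share_to_new_counterparties]
X = rng.random((5_000, 3)) * [10_000, 50, 1.0]
# Synthetic "suspicious" label driven by a combination of features.
y = ((X[:, 0] > 7_000) & (X[:, 2] > 0.6)) | (X[:, 1] > 45)

def rule_based(x):           # classic approach: flag every large transfer,
    return x[:, 0] > 7_000   # which generates many false positives

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
risk_scores = model.predict_proba(X)[:, 1]   # per-transaction risk score
print(f"rule flags: {rule_based(X).sum()}, model flags: {(risk_scores > 0.5).sum()}")
```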

The service’s cost will vary depending on how many customers are screened daily by the AML and risk-scoring systems and how many customers are included in the training data set used to spin up the model.

AMLAI’s launch furthers Google and Google Cloud’s ambitions in the fintech space. While the current AI zeitgeist centers on generative AI products such as Google’s Bard chatbot, the company has quietly been making its presence felt as both a fintech developer and a banking services vendor.

Related: Google Cloud launches free courses to help users build their own GPT-style AI

During the COVID-19 pandemic, Google rapidly deployed a Paycheck Protection Program loan-processing tool. Over the years, the company has also dabbled in alternative payment solutions, such as its widely adopted Google Pay service and Google-sponsored debit cards featuring NFC connectivity.

Google’s further involvement in the anti-money laundering sector could be a positive sign for the growing industry. According to an analysis from BlueWeave Consulting, the global AML market was estimated at roughly $3 billion in 2022 and is expected to reach nearly $8 billion by the end of the decade.

Factors driving the projected growth include the rise of nontraditional payments, an ever-changing regulatory landscape and a steady increase in the number of money laundering cases globally.
