

How AI is changing the way humans interact with machines

Exploring how artificial intelligence and natural language processing are redefining everyday interactions with different technologies.

The past 12 months have seen the global digital paradigm evolve tremendously, especially regarding how humans interact with machines. In fact, the space has undergone such a radical transformation that people of all ages are now fast becoming conversant with artificial intelligence (AI) models, most popularly OpenAI’s ChatGPT. 

The primary driving force behind this revolution has been the advancements made in natural language processing (NLP) and conversational AI. NLP is a subfield of AI that focuses on the interaction between computers and humans using everyday language and speech patterns. The ultimate objective of NLP is to read, decipher and make sense of human language in a way that is valuable and easy to digest for users.

To elaborate, it combines computational linguistics — i.e., rule-based modeling of human language — with other fields, such as machine learning, statistics and deep learning. As a result, NLP systems allow machines to understand, interpret, generate, and respond to human language in a meaningful and contextually appropriate way.

Moreover, NLP involves several key tasks and techniques, including part-of-speech tagging, named entity recognition, sentiment analysis, machine translation and topic extraction. These tasks help machines understand human language and generate human-like responses. For example, part-of-speech tagging involves identifying the grammatical group of a given word, while named entity recognition involves identifying individuals, companies or locations in a text.
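
To make those two examples concrete, here is a minimal sketch of part-of-speech tagging and named entity recognition using the open-source spaCy library (the choice of library is an assumption made for illustration; the article does not name a specific toolkit):

    # Part-of-speech tagging and named entity recognition with spaCy.
    # Assumes `pip install spacy` and `python -m spacy download en_core_web_sm`.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Apple is opening a new office in London next year.")

    # Part-of-speech tagging: the grammatical group of each word.
    for token in doc:
        print(token.text, token.pos_)  # e.g., "Apple PROPN", "opening VERB"

    # Named entity recognition: individuals, companies and locations.
    for ent in doc.ents:
        print(ent.text, ent.label_)  # e.g., "Apple ORG", "London GPE"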

NLP redefining communication frontiers

Even though AI-enabled tech has only recently started becoming part of the digital mainstream, it has profoundly influenced many people for the better part of the last decade. Voice assistants like Amazon’s Alexa, Google Assistant and Apple’s Siri have woven themselves into the fabric of our everyday lives, assisting us with everything from jotting down reminders to orchestrating our smart homes.

The magic behind these helpers is a potent mix of NLP and AI, enabling them to comprehend and react to human speech. That said, the scope of NLP and AI has now expanded into several other sectors. For example, within customer service, chatbots now enable companies to provide automated customer service with immediate responses to customer inquiries.

With the ability to juggle multiple customer interactions simultaneously, these automated chatbots have already slashed wait times.

Language translation is another frontier where NLP and AI have made remarkable progress. Translation apps can now interpret text and speech in real time, dismantling language barriers and fostering cross-cultural communication.

A paper in The Lancet notes that these translation capabilities have the potential to redefine the health sector. Researchers believe these systems can be deployed in countries with insufficient health providers, allowing doctors and medical professionals from abroad to deliver live clinical risk assessments.
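
As a small illustration of the kind of system involved, here is machine translation with an off-the-shelf open-source model (the Hugging Face Transformers library and the Helsinki-NLP model are choices made for this sketch, not tools named in the article):

    # A minimal machine-translation sketch using Hugging Face Transformers.
    # Assumes `pip install transformers sentencepiece torch`.
    from transformers import pipeline

    # Helsinki-NLP's OPUS-MT models are small, freely available translators.
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")
    result = translator("Where is the nearest hospital?")
    print(result[0]["translation_text"])  # Spanish output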

Sentiment analysis, another application of NLP, is also being employed to decipher the emotional undertones behind words, making responses from platforms like Google Bard, ChatGPT and Jasper.ai even more human-like.

Recent: Bitcoin adoption in Mexico boosted by Lightning partnership with retail giant

Thanks to their growing prowess, these technologies can be integrated into social media monitoring systems, market research analysis and customer service delivery. By scrutinizing customer feedback, reviews and social media chatter, businesses can glean valuable insights into how their customers feel about their products or services.
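
A minimal sketch of that kind of feedback mining, assuming the Transformers sentiment-analysis pipeline with its default model (the article does not say which systems businesses actually use):

    # Scoring customer feedback with an off-the-shelf sentiment model.
    # Assumes `pip install transformers torch`.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # downloads a default model
    reviews = [
        "The checkout process was fast and painless.",
        "Support took three days to answer a simple question.",
    ]
    for review, result in zip(reviews, classifier(reviews)):
        # Each result is a dict like {"label": "NEGATIVE", "score": 0.99}.
        print(f"{result['label']:<8} ({result['score']:.2f})  {review}")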

Lastly, AI and NLP have ventured into the realm of content generation. AI-powered systems can now craft human-like text, churning out everything from news articles to poetry, helping create website content, generating personalized emails and whipping up marketing copy.

The future of AI and NLP 

Looking toward the horizon, many experts believe the future of AI and NLP to be quite exciting. Dimitry Mihaylov, co-founder and chief science officer for AI-based medical diagnosis platform Acoustery, told Cointelegraph that the integration of multimodal input, including images, audio, and video data, will be the next significant step in AI and NLP, adding:

“This will enable more comprehensive and accurate translations, considering visual and auditory cues alongside textual information. Sentiment analysis is another focus of AI experts, and that would allow a more precise and nuanced understanding of emotions and opinions expressed in text. Of course, all companies and researchers will work on enabling real-time capabilities, so most human interpreters, I am afraid, will start losing their jobs.”

Similarly, Alex Newman, protocol designer at Human Protocol, a platform offering decentralized data labeling services for AI projects, believes that NLP and AI are on the verge of significantly increasing individual productivity, which is crucial given the anticipated shrinkage of the workforce due to AI automation. 

Newman sees sentiment analysis as a key driver, with a more sophisticated interpretation of data taking place through neural networks and deep learning systems. He also envisions the open-sourcing of data platforms to better cater to those languages that have traditionally been under-served by translation services.

Megan Skye, a technical content editor for Astar Network — an AI-based multichain decentralized application layer on Polkadot — sees the sky as the limit for innovation in AI and NLP, particularly with AI’s ability to self-assemble new iterations of itself and extend its own functionality, adding:

“AI and NLP-based sentiment analysis is likely already happening on platforms like YouTube and Facebook that use a knowledge graph, and could be extended to the blockchain. For example, if a new domain-specific AI is configured to accept freshly indexed blocks as a stream of source input data, and we had access to or developed an algorithm for blockchain-based sentiment analysis.”

Scott Dykstra, chief technical officer for AI-based data repository Space and Time, sees the future of NLP at the intersection of edge and cloud computing. He told Cointelegraph that in the near to mid-term, most smartphones would likely come with an embedded large language model that will work in conjunction with a massive foundational model in the cloud. “This setup will allow for a lightweight AI assistant in your pocket and heavyweight AI in the data center,” he added.
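
The setup Dykstra describes can be sketched as a simple router: answer locally when the pocket model is confident, and escalate to the data center otherwise. Everything below is an illustrative stand-in; none of these classes or functions are real APIs.

    # Hypothetical edge/cloud split: a lightweight on-device model backed by
    # a heavyweight foundational model in the cloud. All names are invented.
    from dataclasses import dataclass

    @dataclass
    class Answer:
        text: str
        confidence: float

    class OnDeviceLLM:
        """Stand-in for an embedded, quantized model running on the phone."""
        def generate(self, prompt: str) -> Answer:
            # A real implementation would run local inference here.
            return Answer(text="(local draft answer)", confidence=0.42)

    def cloud_complete(prompt: str) -> str:
        """Stand-in for a call to a massive foundational model in the cloud."""
        return "(cloud answer)"

    def assistant(prompt: str, threshold: float = 0.7) -> str:
        local = OnDeviceLLM().generate(prompt)
        # Escalate to the data center only when the pocket model is unsure.
        return local.text if local.confidence >= threshold else cloud_complete(prompt)

    print(assistant("Summarize my unread messages."))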

The road ahead is paved with challenges

While the future of AI and NLP is promising, it is not without its challenges. For example, Mihaylov points out that AI and NLP models rely heavily on large volumes of high-quality data for training and performance.

However, due to various data privacy laws, acquiring labeled or domain-specific data can be challenging in some industries. Furthermore, different industries have unique vocabularies, terminologies and contextual variations that require very specific models. “The shortage of qualified professionals to develop these models presents a significant barrier,” he opined.

Skye echoes this sentiment, noting that while AI systems can potentially operate autonomously in almost any industry, the logistics of integration, modification of workflows, and education present significant challenges. Furthermore, AI and NLP systems require regular maintenance, especially when the quality of answers and a low probability of error are important.

Magazine: Bitcoin 2023 in Miami comes to grips with ‘shitcoins on Bitcoin’

Lastly, Newman believes that the problem of access to new data sources pertinent to each industry looking to use these technologies will become more and more apparent with each passing year, adding:

“There’s plenty of data out there; it’s just not always accessible, fresh or sufficiently prepared for machine training. Without data that reflects the particulars of an industry, its language, rules, systems, and specifics, AI won’t be able to appreciate any context and operate effectively.”

Therefore, as more and more people continue to gravitate toward the use of the aforementioned technologies, it will be interesting to see how the existing digital paradigm continues to evolve and mature, especially given the rapid rate at which the use of AI seems to be seeping into various industries.


Biden to discuss dangers of AI in San Francisco meeting with experts

The U.S. president will meet with AI experts to discuss safety, policy and opportunities while he’s in San Francisco.

United States President Joe Biden will discuss artificial intelligence (AI) with a group of Silicon Valley experts on June 20 between campaign fundraising stops in California. 

The president will meet with at least eight experts, including renowned researchers and experts in AI safety. According to the White House, the topic of discussion will be the Biden administration’s “commitment to seizing the opportunities and managing the risks of Artificial Intelligence.”

Per a report from The Associated Press, the attendee list includes Jim Steyer, founder of Common Sense Media; Tristan Harris, co-founder of the Center for Humane Technology; Fei-Fei Li, co-director of Stanford’s Human-Centered AI Institute; Joy Buolamwini, founder of the Algorithmic Justice League; and Sal Khan, founder of Khan Academy.

Related: US vice president gathers top tech CEOs to discuss dangers of AI

This group is noteworthy for its individual members’ efforts in education, policy, safety and harm mitigation. Previous meetings with White House officials have included CEOs from some of the largest companies in the global AI sector — including meetings with representatives from Google, Microsoft and Anthropic.

Biden will meet with the experts at 4:00 pm Pacific Time on June 20, during a series of discussions the president is participating in at the Fairmont Hotel in San Francisco. The event will be streamed on the official White House YouTube channel.

The U.S. Senate recently met with OpenAI CEO Sam Altman, IBM chief privacy and trust officer Christina Montgomery and New York University’s Gary Marcus in a hearing to discuss AI policy.

During his testimony, Altman expressed his belief that the U.S. government should establish a federal regulatory body to provide oversight, licensing and accountability for the burgeoning AI sector. While Marcus agreed with the notion, IBM’s Montgomery dissented, stating that it was her company’s view that Congress should take a more surgical approach to AI governance.

The discussions surrounding AI come at a time when the U.S. government has yet to set a comprehensive strategy for legislating AI development and production.

While Europe, China and the United Kingdom have either passed or are currently weighing bills featuring overarching legislation packages for the AI sector, the U.S. still lags behind in both comprehensive cryptocurrency and AI regulations.

Untangling the two sectors is becoming increasingly difficult, as AI now underpins many services across the cryptocurrency, blockchain and Web3 industries.

Related: AI has a ‘symbiotic relationship’ with blockchain: Animoca Brands CEO


5 AI tools for learning and research

Supercharge your learning and research capabilities with AI tools, enabling you to gain a competitive edge and reach new levels of understanding.

AI tools are revolutionizing learning and research in today’s digital age by providing sophisticated capabilities and effective solutions. These tools use artificial intelligence to speed up various tasks, increase output and surface useful insights.

Consensus, QuillBot, Gradescope, Elicit and Semantic Scholar are five well-known AI tools that are frequently used in the learning and research fields.

Consensus

The goal of the Consensus AI search engine is to democratize expert knowledge by making study findings on a range of subjects easily accessible. This cutting-edge engine, which runs on GPT-4, uses machine learning and natural language processing (NLP) to analyze and evaluate web content.

When you pose the “right questions,” an additional AI model examines publications and gathers pertinent data to respond to your inquiry. The phrase “right questions” refers to inquiries that lead to findings that are well-supported, as shown by a confidence level based on the quantity and caliber of sources used to support the hypothesis.

QuillBot

QuillBot is an artificial intelligence (AI) writing assistant that helps people create high-quality content. It uses NLP algorithms to improve grammar and style, rewrite and paraphrase sentences, and increase the coherence of the work as a whole.

QuillBot’s capacity to paraphrase and restate text is one of its main strengths. This might be especially useful if you wish to keep your research work original and free of plagiarism while using data from previous sources.

QuillBot can also summarize a research paper and offer alternate wording and phrase constructions to assist you in putting your thoughts into your own words. By recommending different sentence constructions, it can add variety to your writing, improving your research paper’s readability and flow and engaging readers more.

Additionally, ChatGPT and QuillBot can be used together: start with the output from ChatGPT, then transfer it to QuillBot for further refinement, as sketched below.
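
A minimal sketch of the first half of that workflow, using the OpenAI Python library as it looked at the time of writing (the model choice, prompt and key are placeholders; QuillBot exposes no public API referenced here, so the refinement step stays manual):

    # Step 1: generate a first draft with ChatGPT via the OpenAI API.
    # Assumes `pip install openai` (the 0.x-era interface).
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; never hardcode real keys

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": "Draft a 150-word summary of photosynthesis."}],
    )
    draft = response.choices[0].message.content

    # Step 2: paste `draft` into QuillBot to paraphrase and polish it.
    print(draft)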

Gradescope

Widely used in educational institutions, Gradescope is an AI-powered grading and feedback tool. By automating the process, it greatly reduces the time and effort instructors need to grade assignments, exams and coding projects. Its machine-learning algorithms can decipher code, recognize handwriting and provide students with in-depth feedback.

Related: How to use ChatGPT to learn a language

Elicit

Elicit is an AI-driven research platform that makes it simpler to gather and analyze data. It uses NLP approaches to glean insightful information from unstructured data, including polls, interviews and social media posts. Researchers can quickly analyze huge amounts of text with Elicit to find trends, patterns and sentiment.

Using the user-friendly Elicit interface, researchers can simply design personalized surveys and distribute them to specific participants. To ensure correct and pertinent data collection, the tool includes sophisticated features, including branching, answer validation and skip logic.

To help academics properly analyze and interpret data, Elicit also offers real-time analytics and visualizations. Thanks to its user-friendly design and powerful capabilities, the tool streamlines the research process, saves time and improves data collection for researchers across a variety of subjects.

Semantic Scholar

Semantic Scholar is an AI-powered academic search engine that prioritizes scientific content. It analyzes research papers, extracts crucial information, and generates recommendations that are pertinent to the context using machine learning and NLP techniques.

Researchers can use Semantic Scholar to find related works, spot emerging research trends and keep up with the most recent advancements in their fields.

Related: 5 free artificial intelligence courses and certifications

Striking a balance: Harnessing AI in research responsibly

It’s crucial to keep ethical standards in mind and prevent plagiarism when employing AI research tools. Plagiarism is the use of another person’s words, ideas or works without giving due credit or permission. While using AI research tools, one may follow the guidelines below to prevent plagiarism and uphold ethical standards:

  • Understand the purpose of the AI research tool.
  • Attribute sources properly.
  • Paraphrase and synthesize information.
  • Cross-verify information from multiple sources.
  • Check for copyright restrictions.
  • Review and edit AI-generated content.
  • Seek ethical AI tools.

Though AI research tools might be beneficial for improving your research and writing processes, it is important to uphold academic integrity and observe ethical standards. Always make an effort to give fair credit to others and make sure that your work accurately reflects your own thoughts and understanding.


Meta’s new ‘Voicebox’ AI is a text-to-speech tool that learns like ChatGPT

Meta claims Voicebox is the first AI that can generalize text-to-speech tasks it wasn’t trained to accomplish and describes it as a “breakthrough.”

Meta AI recently unveiled a “breakthrough” text-to-speech (TTS) generator it claims produces results up to 20 times faster than state-of-the-art artificial intelligence models with comparable performance. 

The new system, dubbed Voicebox, eschews traditional TTS architecture in favor of a model more akin to OpenAI’s ChatGPT or Google’s Bard.

Among the main differences between Voicebox and similar TTS models, such as ElevenLabs’ Prime Voice AI, is that Meta’s offering can generalize through in-context learning.

Much like ChatGPT or other transformer models, Voicebox uses large-scale training datasets. Previous efforts to use massive troves of audio data have resulted in severely degraded audio outputs. For this reason, most TTS systems use small, highly curated, labeled datasets.

Meta overcomes this limitation through a novel training scheme that ditches labels and curation for an architecture capable of “in-filling” audio information.

As Meta AI put it in a June 16 blog post, Voicebox is the “first model that can generalize to speech-generation tasks it was not specifically trained to accomplish with state-of-the-art performance.”

This makes it possible for Voicebox to translate text to speech, remove unwanted noise by synthesizing replacement speech, and even apply a speaker’s voice to different language outputs.

According to an accompanying research paper published by Meta, its pre-trained Voicebox system can accomplish all of this using only the desired output text and a three-second audio clip.
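
Voicebox itself has not been released publicly, so the following is a purely hypothetical sketch of the zero-shot interface the paper describes: desired output text plus a roughly three-second reference clip. The function name and signature are invented, and the stub emits a second of silence where a real model would render speech.

    # Hypothetical zero-shot TTS interface; illustrative only.
    import io
    import wave

    def synthesize(text: str, reference_clip: bytes, sample_rate: int = 16000) -> bytes:
        """Stand-in for a zero-shot TTS call: clone the speaker in
        `reference_clip` and render `text` in that voice (WAV bytes)."""
        buf = io.BytesIO()
        with wave.open(buf, "wb") as wav:
            wav.setnchannels(1)       # mono
            wav.setsampwidth(2)       # 16-bit samples
            wav.setframerate(sample_rate)
            wav.writeframes(b"\x00\x00" * sample_rate)  # placeholder: 1s of silence
        return buf.getvalue()

    # with open("speaker_3s.wav", "rb") as f:
    #     audio = synthesize("Welcome back!", f.read())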

The arrival of robust speech generation comes at a particularly sensitive time, as social media companies continue to struggle with moderation and, in the U.S., a looming presidential election threatens to once again test the limits of online misinformation detection.

Former U.S. president Donald Trump, for example, currently faces allegations that he mishandled confidential government materials after leaving office. Among the purported evidence cited in the case against him are audio recordings wherein he allegedly admitted to potential wrongdoing.

While there’s currently no indication that the former president intends to deny the content described in the audio files, his case illustrates that data integrity resides at the core of the U.S. legal system and, by extension, its democracy.

Voicebox isn’t the first tool of its kind, but it appears to be among the most robust. As such, Meta has developed a tool for determining whether a given audio clip was generated by Voicebox, which the company claims can “trivially detect” the difference between real and fake audio. Per the blog post:

“As with other powerful new AI innovations, we recognize that this technology brings the potential for misuse and unintended harm. In our paper, we detail how we built a highly effective classifier that can distinguish between authentic speech and audio generated with Voicebox to mitigate these possible future risks.”

In the cryptocurrency world, AI has become as integral to day-to-day operations for most businesses as the internet or electricity. The largest exchanges rely on AI chatbots for customer interactions and sentiment analysis, and trading bots have become commonplace.

Related: Bybit plugs into ChatGPT for AI-powered trading tools

The advent of robust text-to-speech systems such as Voicebox, combined with automated trading, could help bridge a gap for would-be cryptocurrency traders who rely on TTS systems that, currently, may struggle with crypto jargon or multi-lingual support.


US senators propose bill to eliminate Section 230 protection for AI companies

The bipartisan bill purportedly seeks to hold companies accountable for harm, but it’s unclear whether section 230 even applies to AI.

U.S. Senators Josh Hawley, a Republican from Missouri, and Richard Blumenthal, a Democrat from Connecticut, introduced a Senate bill on June 14 that would eliminate special protections afforded to online computer services providers under the Communications Decency Act of 1996.

Section 230 refers to text found in Title 47, Section 230 of the United States Code resulting from the Decency Act’s passage. It specifically grants online service providers protection from liability for content posted by users. It also gives providers immunity from prosecution for illegal content, provided good-faith efforts are made to take down such content upon discovery.

Opponents of Section 230 have argued that it absolves social media platforms and other online service providers of responsibility for the content they host. The U.S. Supreme Court recently ruled against changing Section 230 in light of a lawsuit in which plaintiffs sought to hold social media companies accountable for damages sustained through the platforms’ alleged hosting and promotion of terrorist-related content.

Per the high court’s opinion, a social media site can’t be held accountable for the suggestions made by the algorithms it uses to surface content any more than an email or cellular service provider can for the content transmitted via their services.

It’s unclear at this time, however, whether Section 230 actually applies to generative AI companies such as OpenAI and Google, makers of ChatGPT and Bard, respectively.

During a recent Senate hearing, OpenAI CEO Sam Altman told U.S. Senator Lindsey Graham that it was his impression that Section 230 didn’t apply to his company. When Hawley, during the same hearing, asked Altman what he thought about a hypothetical situation where Congress “opened the courthouse doors” and allowed people who were harmed by AI to testify in court, the CEO responded, “Please forgive my ignorance, can’t people sue us?”

While there’s no specific language covering generative AI in Section 230, it’s possible that further discussions about its relevance to generative AI technologies could come down to the definition of an “online service.”

Related: AI-related crypto returns rose up to 41% after ChatGPT launched

The GPT API, for example, underpins countless AI services throughout the cryptocurrency and blockchain industries. If Section 230 applies to generative AI technologies, it might prove difficult to hold businesses or individuals accountable for harms resulting from misinformation or bad advice generated via AI.

Magazine: Musk’s alleged price manipulation, the Satoshi AI chatbot and more


European Union AI Act passes in parliament

Member states will have the opportunity to negotiate final details before the act becomes law.

The European Parliament has passed the EU AI Act, a sweeping legislative framework for governance and oversight of artificial intelligence technologies in the European Union.

The measure passed in Parliament during a June 14 vote that saw majority support for the act in the form of 499 votes for, 28 against and 93 abstaining. The next step before the bill becomes law will involve negotiations with individual EU member states to smooth out the details. Initially proposed by the European Commission on April 21, 2021, the EU AI Act is a comprehensive set of rules for AI development in the EU.

Per a press release from the European Parliament:

“The rules aim to promote the uptake of human-centric and trustworthy AI and protect the health, safety, fundamental rights and democracy from its harmful effects.”

Once implemented, the act would prohibit certain types of artificial intelligence services and products while limiting or placing restrictions on others. Among the technologies outright banned are biometric surveillance, social scoring systems, predictive policing, so-called “emotion recognition” and untargeted facial recognition systems. Generative AI models, such as OpenAI’s ChatGPT and Google’s Bard, would be allowed to operate under the condition that their outputs be clearly labeled as AI-generated.

Related: Irish data watchdog blocks Google from launching Bard in the EU

Once the act becomes law, any AI system that could “pose significant harm to people’s health, safety, fundamental rights or the environment” or “influence voters and the outcome of elections” will be classified as high risk and subject to further governance.

Parliament’s passing of the EU AI Act comes just two weeks after the supranational entity’s Markets in Crypto-Assets (MiCA) bill became law on May 31. In both cases, industry leaders were among those leading the charge for regulation.

OpenAI CEO Sam Altman has been among the most vocal supporters of government oversight of the AI industry. He recently testified before Congress during a hearing in which he made explicit his belief that regulation is necessary. However, Altman also recently warned European regulators against overregulation.

On the cryptocurrency front, Ripple’s managing director for Europe and the United Kingdom, Sendi Young, recently told Cointelegraph that she believes MiCA will help facilitate a “level playing field” for companies operating in the crypto sector in Europe.

Magazine: Musk’s alleged price manipulation, the Satoshi AI chatbot and more


AMD reveals new AI chip challenging Nvidia’s dominance

With its CDNA architecture and 192GB memory capacity, the MI300X by AMD accommodates larger AI models.

Advanced Micro Devices Inc (AMD) on Tuesday, June 13, gave new details about an artificial intelligence (AI) chip that will challenge market leader Nvidia Corp.

California-based AMD said its most advanced graphics processing unit (GPU) for AI, the MI300X, will start trickling out in the third quarter, followed by mass production beginning in the fourth quarter.

AMD’s announcement represents the most substantial challenge to Nvidia, which currently dominates the market for AI chips with over 80% market share. GPUs are chips used by firms like OpenAI to build cutting-edge AI programs such as ChatGPT. They have parallel processing capabilities and are optimized for handling large amounts of data simultaneously, making them well-suited for tasks that require high-speed, efficient computation at scale.

AMD announced that its latest MI300X chip and CDNA architecture were specifically developed to cater to the demands of large language models and other advanced AI models. With a maximum memory capacity of 192GB, the MI300X can accommodate even larger AI models than chips like Nvidia’s H100, which supports a maximum of 120GB of memory.
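
As a rough back-of-the-envelope illustration of why that headroom matters (the assumptions here are illustrative, not AMD’s figures): weights stored in 16-bit precision occupy two bytes per parameter, so memory capacity directly caps the model size a single accelerator can hold.

    # Illustrative arithmetic only: assumes weights in 16-bit precision
    # (2 bytes per parameter) and ignores activation and KV-cache overhead.
    BYTES_PER_PARAM = 2  # fp16/bf16

    def max_params_billions(memory_gb: float) -> float:
        """Largest parameter count (in billions) whose weights fit in memory."""
        return memory_gb * 1e9 / BYTES_PER_PARAM / 1e9

    for name, gb in [("MI300X", 192), ("H100 (as cited above)", 120)]:
        print(f"{name}: ~{max_params_billions(gb):.0f}B parameters")
    # MI300X: ~96B parameters; H100 (as cited above): ~60B parameters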

AMD Chief Executive Lisa Su speaking at an event in San Francisco outlining the company’s AI accelerator chips.

AMD announced the Infinity Architecture, which combines eight MI300X accelerators into a single system, mirroring similar systems by Nvidia and Google that integrate eight or more GPUs for AI applications.

During the presentation to investors and analysts in San Francisco, AMD Chief Executive Officer Lisa Su highlighted that AI represents the company’s “most significant and strategically important long-term growth opportunity,” adding:

“We think about the data center AI accelerator [market] growing from something like $30 billion this year, at over 50% compound annual growth rate, to over $150 billion in 2027.”

If developers and server manufacturers adopt AMD's "accelerator" AI chips as alternatives to Nvidia's products, it could open up a significant untapped market for the chipmaker. AMD, renowned for its conventional computer processors, stands to benefit from this potential shift in demand.

Related: AI startup by ex-Meta and Google researchers raises $113M in seed funding

Although AMD did not reveal specific pricing details, this action could potentially exert downward price pressure on Nvidia's GPUs, including models like the H100, which can carry price tags of $30,000 or higher. Reduced GPU prices have the potential to contribute to lowering the overall expenses associated with running resource-intensive generative AI applications.

Magazine: Is AI a nuke-level threat? Why AI fields all advance at once, dumb pic puns


Irish data watchdog blocks Google from launching Bard in the EU: Report

Google’s been forced to postpone the launch of its Bard AI service in the EU after Irish regulators accused it of failing to file the proper paperwork.

The Irish Data Protection Commission (DPC) has reportedly blocked the launch of Google’s generative artificial intelligence (AI) service, Bard, in the European Union over privacy concerns. 

Google launched Bard in the United States, United Kingdom and 178 other countries earlier this year. However, it’s so far been unable to crack the EU. The Mountain View, California-based company reportedly intended to remedy that during the week of June 13, but as Politico reports, those plans have come to a halt.

Per the report, DPC deputy commissioner Graham Doyle stated that Google only recently informed the commission of its intention to launch Bard in the EU this week.

He went on to explain that Google hadn’t provided the commission with “any detailed briefing nor sight of a data protection impact assessment or any supporting documentation.” As a result, said Doyle, “Bard will not now launch this week.”

Related: UK to get ‘early or priority access’ to AI models from Google and OpenAI

The EU’s approach to AI regulation has been described as far stricter than the corresponding efforts in the U.K. and the United States.

European Data Protection Supervisor Wojciech Wiewiórowski previously quipped that “the definition of hell is European legislation with American enforcement” after OpenAI’s ChatGPT was temporarily banned in Italy over privacy concerns.

It appears that Google finds itself in a similar situation with EU regulators. It’s worth noting that ChatGPT was eventually approved for use in Italy after OpenAI addressed regulators’ privacy concerns.

The push for greater regulatory focus on AI technologies in the EU stems from the EU AI Act, a proposed framework for regulating artificial intelligence in the European Union first put forward by the European Commission in April 2021.

Its drafters seek to align governance of AI technologies with the General Data Protection Regulation, a sweeping set of rules meant to protect citizens’ privacy.

Much like the Markets in Crypto-Assets legislation, the EU’s AI Act appears to have vastly different requirements for companies operating in the EU than in the U.K. or U.S., including a greater emphasis on security, privacy and accountability.


UK to get ‘early or priority access’ to AI models from Google and OpenAI

It’s unclear at this time what access the U.K. will receive, but the reported commitment could be the first of its kind.

British Prime Minister Rishi Sunak recently announced that Google DeepMind, OpenAI, and Anthropic — three tech outfits widely considered the global industry leaders in generative AI technologies — have agreed to provide the United Kingdom with early access to their AI models.

Sunak made the announcement during a speech opening London Tech Week, an event described by organizers as “a global celebration of tech, uniting the most innovative thinkers and talent of tomorrow in a week-long festival.”

He made the comment while explaining a three-part plan to ensure AI systems in the U.K. are deployed in a safe and secure manner. The first step, per a transcript of the speech, is to perform cutting-edge safety research:

“We’re working with the frontier labs - Google DeepMind, OpenAI and Anthropic. And I’m pleased to announce they’ve committed to give early or priority access to models for research and safety purposes to help build better evaluations and help us better understand the opportunities and risks of these systems.”

The prime minister went on to explain that the second step of the U.K.’s plan is the recognition that AI, as a technology, doesn’t “respect traditional national borders,” thus necessitating the formation of a global task force.

Finally, the third step, per Sunak, is to invest in both AI and quantum to “seize the extraordinary potential of AI to improve people’s lives.” He cited recent investments of $1.125 billion and $2.75 billion for compute and quantum technologies, respectively, as steps the U.K. had already taken toward accomplishing this goal.

Related: Crypto ads face stricter rules, referral bonus ban by UK FCA

It remains unclear at this time exactly what form of “early or priority” access the U.K. government will obtain or when such access will be afforded.

Google DeepMind, OpenAI, and Anthropic have historically offered betas and limited preview versions of their large language models (such as Google’s Bard, OpenAI’s ChatGPT, and Anthropic’s Claude). All three companies have also invested in both internal testing with company scientists and external testing with contracted experts.

The prime minister didn’t make it clear whether the U.K. would obtain earlier access to production models than the general public or contractors, or if the commitment was simply to offer access to the government and other priority researchers.

These comments come at an active time for the U.K.’s regulatory efforts. Not only is parliament racing to come up with comprehensive protections for citizens relative to the recent generative AI boom, but it's also facing increasing pressure to regulate cryptocurrency, blockchain, and Web3 technologies.


Intuit introduces proprietary large language models for fintech with GenOS

Intuit is launching an operating system for generative artificial intelligence that will feature AI models trained on the company’s financial data.

Fintech giant Intuit, whose product offerings include TurboTax, Mint, Credit Karma, Mailchimp and QuickBooks, recently expanded its software services platform to include GenOS, an operating system for generative artificial intelligence (AI) technologies. 

According to Intuit, the new operating system will come with a suite of tools, including a developer studio, UX library, runtime layer and several pre-trained large language models (LLMs).

Several high-profile businesses have recently begun adapting third-party LLM solutions such as OpenAI’s ChatGPT for their specific needs. However, Intuit’s taken a different approach by creating proprietary tools and its own development and deployment platform.

Intuit isn’t necessarily known for its AI products, but its place as an industry leader leaves it well-positioned to leverage internal data to train models similar to ChatGPT. The primary benefit of doing so is that the company can cherry-pick what data gets included, allowing it to fine-tune its models for fintech.

Where ChatGPT and similar LLMs, such as Google’s Bard, have typically been positioned as general chatbots — meaning they’re designed to discuss virtually any subject — a model trained specifically on financial data would be considered a “narrow,” or targeted, system.

And Intuit reportedly has a lot of data to work with. Per an announcement published on June 6:

“The company has 400,000 customer and financial attributes per small business, as well as 55,000 tax and financial attributes per consumer, and connects with over 24,000 financial institutions. With more than 730 million AI-driven customer interactions per year, Intuit is generating 58 billion machine learning predictions per day.”

It remains to be seen exactly how Intuit intends to implement GenOS, as the company hasn’t so far disclosed any specific information about the LLMs it’s currently developing through the new platform. However, some of the primary use cases for similar models have been in consumer education and customer service.

Related: AI-related crypto returns rose up to 41% after ChatGPT launched

The launch of GenOS comes at a tumultuous time for United States taxpayers but could represent some relief for users of its flagship TurboTax product.

The Internal Revenue Service (IRS) is currently under fire from U.S. conservative Republicans who’ve proposed as much as $21 billion in budget cuts to the agency over the next two years.

Such cuts stand to diminish IRS efforts to modernize citizen tax services, potentially worsening an already complex tax filing system. Combined with recent uncertainty surrounding the legal status of digital assets in the wake of the Securities and Exchange Commission’s actions against cryptocurrency exchanges Binance and Coinbase, this could pose significant challenges for the 43 million U.S. taxpayers who hold crypto assets.
