Machine Learning

AI experts sign doc comparing risk of ‘extinction from AI’ to pandemics, nuclear war

The “Godfather of AI” and the CEOs of OpenAI, Google DeepMind and Anthropic are among the hundreds of signatories.

Dozens of artificial intelligence (AI) experts, including the CEOs of OpenAI, Google DeepMind and Anthropic, recently signed an open statement published by the Center for AI Safety (CAIS). 

The statement contains a single sentence:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Among the document’s signatories are a veritable “who’s who” of AI luminaries, including the “Godfather” of AI, Geoffrey Hinton; University of California, Berkeley’s Stuart Russell; and Massachusetts Institute of Technology’s Lex Fridman. Musician Grimes is also a signatory, listed under the “other notable figures” category.

Related: Musician Grimes willing to ‘split 50% royalties’ with AI-generated music

While the statement may appear innocuous on the surface, the underlying message is a somewhat controversial one in the AI community.

A seemingly growing number of experts believe that current technologies may, or inevitably will, lead to the development of an AI system capable of posing an existential threat to the human species.

Their views, however, are countered by a contingent of experts with diametrically opposed opinions. Meta chief AI scientist Yann LeCun, for example, has noted on numerous occasions that he doesn’t necessarily believe that AI will become uncontrollable.

To him and others who disagree with the “extinction” rhetoric, such as Andrew Ng, co-founder of Google Brain and former chief scientist at Baidu, AI isn’t the problem, it’s the answer.

On the other side of the argument, experts such as Hinton and Conjecture CEO Connor Leahy believe that human-level AI is inevitable and, as such, the time to act is now.

It is, however, unclear what actions the statement’s signatories are calling for. The CEOs and heads of AI at nearly every major AI company, as well as renowned scientists from across academia, are among those who signed, making it clear the intent isn’t to halt the development of these potentially dangerous systems.

Earlier this month, OpenAI CEO Sam Altman, one of the above-mentioned statement’s signatories, made his first appearance before Congress during a Senate hearing to discuss AI regulation. His testimony made headlines after he spent the majority of it urging lawmakers to regulate his industry.

Altman’s Worldcoin, a project combining cryptocurrency and proof-of-personhood, has also recently made the media rounds after raising $115 million in Series C funding, bringing its total funding after three rounds to $240 million.

SEC’s enforcement case against Ripple may be wrapping up

Amazon is hiring AI engineers to build a ChatGPT-like search interface

A pair of postings on Amazon’s jobs website indicates the company is planning to implement advanced AI search features in its online web store.

Amazon is preparing to develop and implement a new “search” functionality for its online web store featuring a ChatGPT-like interface. 

A pair of job postings, first spotted by Bloomberg, spells out the company’s plans, stating in unambiguous language that Amazon intends to reinvent its long-standing search feature.

In a job listing for a “Sr Technical Program Manager,” the company states:

“We are working on a new AI-first initiative to re-architect and reinvent the way we do search through the use of extremely large scale next-generation deep learning techniques.”

The pay for this position, which requires at least seven years of experience working directly with engineering teams, ranges from $119,000 to $231,400 per year, depending on the applicant’s location.

A second job listing, this one for a “Sr SDE, Machine Learning (ML), Amazon Search” position paying between $134,500 and $261,500 per year, adds further detail. It explains that the initiative will be “a once in a generation transformation for Search,” and that the company intends to deliver to customers right away:

“We are reimagining Amazon Search with an interactive conversational experience that helps you find answers to product questions, perform product comparisons, receive personalized product suggestions, and so much more, to easily find the perfect product for your needs.”

Combined, the postings make it clear the company plans to implement high-level changes to the way its search feature works.

Related: Amazon implements AI to enhance logistics and delivery speeds

Also of note, Amazon recently debuted its own “Bedrock” artificial intelligence (AI) foundation models. Bedrock was designed as a “serverless” AI service allowing customers to build out their own ChatGPT-like models.

Amazon’s own “Titan” chatbot service was announced along with Bedrock. Company vice president Bratin Saha told reporters that Amazon has been using “a fine-tuned version” of Titan to surface search results on the company’s homepage; it’s unclear if this is related to the recent job postings. Amazon did not immediately respond to a request for more information.

The timing of these announcements comes as no surprise. Movement in the generative AI space has been rapid since OpenAI launched its ChatGPT service in November 2022.

So far, the aggressive growth seems to be paying off. Generative AI tech appears to be impacting nearly every sector, from journalism, where several media outlets have experimented with AI-generated reporting, to cryptocurrency and blockchain. In the latter, nearly every segment of development, trading and community interaction has been affected by generative AI.

Related: Irish newspaper apologizes for misleading AI-generated article

OpenAI CEO to testify before Congress alongside ‘AI pause’ advocate and IBM exec

AI experts from IBM, NYU and OpenAI will testify before the U.S. Senate on May 16 in a hearing entitled “Oversight of A.I.: Rules for Artificial Intelligence.”

OpenAI CEO Sam Altman will make his first appearance before Congress on May 16 to discuss artificial intelligence (AI) regulation in the United States during a hearing on oversight. Also testifying will be IBM’s chief privacy and trust officer, Christina Montgomery — who is a member of the U.S. National Artificial Intelligence Advisory Committee — and New York University emeritus professor Gary Marcus.

Details remain scarce concerning the hearing’s agenda. Its title, “Oversight of A.I.: Rules for Artificial Intelligence,” implies the discussion will center on safety and privacy, as does the roster of scheduled attendees.

The hearing will mark Altman’s first on-the-record testimony before Congress, though he recently attended a roundtable discussion with Vice President Kamala Harris at the White House alongside the CEOs of Alphabet, Microsoft and Anthropic.

NYU’s Marcus recently made waves in the AI community with his full-throated support for a community-driven “pause” on AI development for six months.

Related: Elon Musk and tech execs call for pause on AI development

The idea of an AI pause was defined in an open letter published on the Future of Life Institute website on March 22. As of this article’s publishing, it has more than 27,500 signatures.

The letter’s stated goal is to “call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

Altman and Montgomery are among those opposed to the pause.

Montgomery explained her position in an in-depth IBM company blog post she authored, titled “Don’t pause AI development, prioritize ethics instead,” in which she made the case for a more precise approach to AI regulation:

“A blanket pause on AI’s training, together with existing trends that seem to be de-prioritizing investment in industry AI ethics efforts, will only lead to additional harm and setbacks.”

According to another IBM blog post penned in part by Montgomery, the company believes AI should be regulated based on risk — it’s worth noting that, to the best of Cointelegraph’s knowledge, IBM doesn’t currently have any public-facing generative AI models.

OpenAI, on the other hand, is responsible for ChatGPT, arguably the most popular public-facing AI technology in existence.

Per an interview with Lex Fridman at a Massachusetts Institute of Technology event, Altman supports the safe and ethical development of AI systems but believes in “engaging everyone in the discussion” and “putting these systems out into the world.”

That leaves Marcus as the lone outlier, one who’s been a vocal supporter of the pause since it was first floated. Though Marcus admittedly had “no hand in drafting” the pause letter, he did pen a blog post titled, “Is it time to hit the pause button on AI?” nearly a month before the open letter was published.

While the upcoming Senate hearing will likely function as little more than a forum for members of Congress to ask questions, the discussion could have disruptive ramifications — depending on which experts you believe.

If Congress determines that AI regulation deserves a heavy hand, experts such as Montgomery fear such efforts could have a chilling effect on innovation without necessarily addressing safety concerns.

This harm could trickle into sectors where GPT technology underpins a plethora of bots and services. In the world of fintech, for example, cryptocurrency exchanges are adapting chatbot technology to serve their customers, conduct trades and analyze the market.

However, experts such as Marcus and Elon Musk worry that failure to enact what they deem as common sense policy related to AI oversight could result in an existential crisis for humankind.

State-sponsored Chinese AI firm launches bot service to ‘surpass’ ChatGPT

Chinese tech company iFlytek recently made waves with the launch of its “Spark Model,” an AI system it says will surpass ChatGPT by the end of the year.

A state-subsidized Chinese artificial intelligence (AI) company recently announced the launch of “Spark Model,” an AI system designed to compete directly with OpenAI’s ChatGPT.

iFlytek launched the system at a tech event in Hefei called “Spark Desk,” featuring a full demonstration of its capabilities.

Per a translation provided by Bing, iFlytek founder and president Liu Qingfeng told event attendees that the Spark Model — also referred to as the “cognitive big model” — represented the “dawn of general artificial intelligence.”

While there’s no scientific consensus on whether general artificial intelligence (also referred to as artificial general intelligence, or AGI) is even possible using current machine learning techniques, the billionaire tech mogul did offer a comparison to ChatGPT and a timeline for updates:

“This year, we will continue to upgrade the big model, and on October 10, we will surpass ChatGPT in the Chinese [language] and reach the same level as it in English.”

Details about the underlying technology powering Spark Model are scarce as of the time of this article’s publishing, but Liu described the AI’s capabilities as “far ahead of the existing system that can be measured in China.”

Direct comparisons between ChatGPT and similar models can be difficult to make without side-by-side benchmarking. Not only does OpenAI keep training details and other proprietary information under wraps, but ChatGPT is also banned in China — a fact that limits OpenAI’s ability to train its models on Chinese languages and culture.

The current ban on ChatGPT in China has been described as potentially stifling, especially when compared to Hong Kong — a city designated as a special administrative region of the People’s Republic of China — where there’s no populace-wide ban on the use of technologies such as ChatGPT and cryptocurrency.

Related: China’s crypto stance unchanged by moves in Hong Kong, says exec

In Hong Kong and throughout the West, ChatGPT has become increasingly popular among cryptocurrency users and companies for its ability to generate code and as an underlying technology for the development of advanced trading bots and portfolio analysis. 

If the proposed upgrades to the Spark Model do manage to give iFlytek a leg up on OpenAI and ChatGPT, it would represent not only a monumental moment in tech (ChatGPT is widely considered among the most powerful of today’s generative AI systems) but also one achieved in a relatively short amount of time.

According to Liu, the company’s research arm began work on Spark Model just six months ago, on Dec. 22, 2022. By way of comparison, OpenAI began developing the precursor to its GPT products in 2015. ChatGPT wasn’t launched until Nov. 30, 2022.

Tim Cook says Apple will weave AI into products as researchers work on solving bias

The Apple CEO said the company would “continue weaving” AI into its products as internal research indicates an emphasis on building unbiased AI systems.

CEO Tim Cook gave a rare, if guarded, glimpse into Apple’s walled garden during the Q&A portion of a recent earnings call when asked his thoughts on generative artificial intelligence (AI) and where he “sees it going.” 

Cook refrained from revealing Apple’s plans, stating upfront, “We don’t comment on product roadmaps.” However, he did intimate that the company was interested in the space:

“I do think it’s very important to be deliberate and thoughtful in how you approach these things. And there’s a number of issues that need to be sorted. … But the potential is certainly very interesting.”

The CEO later added the company views “AI as huge” and would “continue weaving it in our products on a very thoughtful basis.”

Cook’s comments on taking a “deliberate and thoughtful” approach could explain the company’s absence in the generative AI space. However, there are some indications that Apple is conducting its own research into related models.

A research paper scheduled to be published at the Interaction Design and Children conference this June details a novel system for combating bias in the development of machine learning datasets.

Bias — the tendency for an AI model to make unfair or inaccurate predictions based on incorrect or incomplete data — is oft-cited as one of the most pressing concerns for the safe and ethical development of generative AI models.
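To make the concept concrete, here is a minimal, hypothetical sketch (not related to the Apple paper) showing how a training dataset skewed toward one class yields a model that systematically under-predicts the other:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two overlapping classes, but the training data is heavily skewed
# toward class 0 -- an incomplete dataset
X_train = np.vstack([rng.normal(0.0, 1.0, size=(950, 1)),
                     rng.normal(1.0, 1.0, size=(50, 1))])
y_train = np.array([0] * 950 + [1] * 50)

model = LogisticRegression().fit(X_train, y_train)

# Evaluated on a balanced test set, the skewed model almost never
# predicts the under-represented class
X_test = np.vstack([rng.normal(0.0, 1.0, size=(500, 1)),
                    rng.normal(1.0, 1.0, size=(500, 1))])
share_class_1 = model.predict(X_test).mean()
print(f"Share predicted as class 1: {share_class_1:.2%}")
```

Balancing the dataset before training, rather than correcting the model afterward, is the kind of intervention the Apple researchers are targeting.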

The paper, which can currently be read in preprint, details a system by which multiple users would contribute to developing an AI system’s dataset with equal input.

Status quo generative AI development doesn’t incorporate human feedback until later stages, by which point models have typically already acquired training bias.

The new Apple research integrates human feedback at the very early stages of model development in order to essentially democratize the data selection process. The result, according to the researchers, is a system that employs a “hands-on, collaborative approach to introducing strategies for creating balanced datasets.”

Related: AI’s black box problem: Challenges and solutions for a transparent future

It bears mention that this research study was designed as an educational paradigm to encourage novice interest in machine learning development.

It could prove difficult to scale the techniques described in the paper for use in training large language models (LLMs) such as ChatGPT and Google Bard. However, the research demonstrates an alternative approach to combating bias.

Ultimately, the creation of an LLM without unwanted bias could represent a landmark moment on the path to developing human-level AI systems.

Such systems stand to disrupt every aspect of the technology sector, especially the worlds of fintech, cryptocurrency trading and blockchain. Unbiased stock and crypto trading bots capable of human-level reasoning, for example, could shake up the global financial market by democratizing high-level trading knowledge.

Furthermore, demonstrating an unbiased LLM could go a long way toward satisfying government safety and ethical concerns for the generative AI industry.

This is especially noteworthy for Apple, as any generative AI product it develops or chooses to support would stand to benefit from the iPhone’s integrated AI chipset and its 1.5 billion user footprint.

Microsoft axes Bing wait list, giving users free access to GPT-4

Bing is set to receive several much-requested AI-powered features that could put it in competition with OpenAI’s ChatGPT Plus subscription service.

Microsoft recently announced a slew of new artificial intelligence (AI)-powered features for its Bing chatbot and Edge web browser. Chief among the changes, Bing users now have full access to the GPT-4 model — the same underlying engine that powers ChatGPT’s “Plus” subscription service.

Previously, Microsoft restricted access to the GPT-4 version of the Bing chatbot to a “limited preview.” It’s now announcing open availability through the Bing app, web access and the Edge browser.

Aside from giving Bing, Edge and Windows users free, unfettered access to the GPT-4 model, Microsoft also announced upcoming support for multimodal outputs, chat history and plug-ins.

Multimodal support will allow the Bing chatbot to generate responses that include a combination of text, images and videos. It will also be able to generate charts and graphs, something that could give it a leg up over ChatGPT.

Users will also have access to their full chat history and, for those using the Edge browser, the ability to move chats to the sidebar to continue surfing in the same tab. Microsoft says this feature will be implemented “starting shortly.”

In the future, according to the blog post, Bing may even be able to reference previous sessions when interacting with users:

“Over time, we’re exploring making your chats more personalized by bringing context from a previous chat into new conversations.”

Perhaps the most ambitious addition announced is “Edge Actions,” also referred to as “Bing Actions,” Microsoft’s term for upcoming integrations featuring third-party plug-ins for Bing chat.

The only plug-ins specifically mentioned in the announcement are OpenTable, which would allow users to reserve seats at restaurants directly within the chat interface, and Wolfram|Alpha, an integration that would allow users to create complex visualizations for math and science queries. Microsoft says more integrations will be revealed as they’re implemented.

The new features won’t require any purchases or subscriptions, though users will need a free Microsoft account to take advantage of the Bing chatbot’s full suite of functions.

By contrast, OpenAI’s ChatGPT Plus service costs $20 per month for access to the same GPT-4 model (the freely available ChatGPT service relies on GPT-3.5). Furthermore, ChatGPT Plus doesn’t currently offer image generation, web search or third-party plug-in support.

It’s unclear how Microsoft and OpenAI intend to balance their respective offerings. Experts weighing in on social media have expressed confusion over what appears to be competition for users, as the companies essentially partnered up after Microsoft invested $10 billion in OpenAI.

As it currently stands, those paying for ChatGPT Plus do receive certain benefits not available to the general public or Bing chatbot users. These include early access to new features, priority access to the system even during periods of high traffic and faster response times from the model.

The cryptocurrency world has seen an explosion of interest in chatbot technologies throughout 2023. Developers have built advanced autonomous trading bots on the GPT-4 platform, and many individual crypto users have begun employing chatbots for a variety of reasons.

Related: Crypto Twitter uses new AI chatbot to make trading bots, blogs and even songs

It’s unknown at this time if OpenAI intends to adjust its subscription offering in the face of Bing’s ubiquity — Microsoft says the search engine now boasts 100 million users, while the addition of Bing AI to the Windows taskbar gives it a potential global reach of more than half a billion users per month.

Google DeepMind CEO Demis Hassabis says we may have AGI ‘in the next few years’

The CEO of Google DeepMind says human-level AI could emerge before 2033 — an event that could radically alter how crypto trading bots and GPT-based tech functions.

Demis Hassabis, the CEO of Google DeepMind, recently predicted that artificial intelligence systems would reach human-level cognition somewhere between “the next few years” and “maybe within a decade.” 

Hassabis, who got his start in the gaming industry, co-founded Google DeepMind (formerly DeepMind Technologies), the company known for developing the AlphaGo AI system responsible for beating the world’s top human Go players.

In a recent interview conducted during The Wall Street Journal’s Future of Everything festival, Hassabis told interviewer Chris Mims he believes the arrival of machines with human-level cognition is imminent:

“The progress in the last few years has been pretty incredible. I don’t see any reason why that progress is going to slow down. I think it may even accelerate. So I think we could be just a few years, maybe within a decade away.”

These comments come just two weeks after internal restructuring led Google to announce the merging of “Google AI” and “DeepMind” into the aptly named “Google DeepMind.”

When asked to define “AGI” — artificial general intelligence — Hassabis responded: “human-level cognition.”

There currently exists no standardized definition, test, or benchmark for AGI widely accepted by the STEM community. Nor is there a unified scientific consensus on whether AGI is even possible.

Some notable figures, such as Roger Penrose (Stephen Hawking’s long-time research partner), believe AGI can’t be achieved, while others think it could take decades or centuries for scientists and engineers to figure it out.

Among those who are bullish on AGI in the near term, or some similar form of human-level AI, are Elon Musk and OpenAI CEO Sam Altman.

AGI has become a hot topic in the wake of the launch of ChatGPT and myriad similar AI products and services over the past few months. Human-level AI, often cited as a “holy grail” technology, is predicted by experts to disrupt every facet of life on Earth.

If human-level AI is ever achieved, it could disrupt various aspects of the crypto industry. We could see fully autonomous machines acting as entrepreneurs, C-suite executives, advisers and traders, with the intellectual reasoning capacity of a human and a computer system’s ability to retain information and execute code.

Whether AGI agents would serve us as AI-powered tools or compete with us for resources remains to be seen.

For his part, Hassabis didn’t speculate on any scenarios, but he did tell The Wall Street Journal that he “would advocate developing these types of AGI technologies in a cautious manner using the scientific method, where you try and do very careful controlled experiments to understand what the underlying system does.”

This stands in contrast to the current landscape, where products such as his own employer’s Google Bard and OpenAI’s ChatGPT were recently made available for public use.

Related: ‘Godfather of AI’ resigns from Google, warns of the dangers of AI

Industry insiders such as OpenAI CEO Sam Altman and DeepMind’s Nando de Freitas have stated that they believe AGI could emerge by itself if developers continue to scale current models. And one Google researcher recently parted ways with the company after claiming that a model named LaMDA had already become sentient.

Because of the uncertainty surrounding the development of these technologies and their potential impact on humankind, thousands of people, including Elon Musk and Apple Inc. co-founder Steve Wozniak, recently signed an open letter asking companies and individuals building related systems to pause development for six months so scientists can assess the potential for harm.

Student interest in ChatGPT skills on Udemy increased by 4,419% since 2022: Report

The latest topic consumption report from Udemy shows an increase in interest in ChatGPT as well as skills related to cloud computing and blockchain.

Udemy’s Global Workplace Learning Index for Q1 2023 indicates that ChatGPT, financial services, and courses aimed at developing students' business teaching skills have experienced a massive uptick in interest from the site’s reported 49 million users.

This likely comes as no surprise, as we’re currently experiencing what Wired recently described as a “Wet Hot AI Chatbot Summer” on the heels of OpenAI’s launch of ChatGPT.

According to Udemy, topic consumption — the number of users taking courses featuring skills specific to ChatGPT — has risen 4,419%.

Screenshot of Udemy PDF report. Source: Udemy

Other top tech skills receiving increased interest from students included Nutanix, Azure Machine Learning, and Amazon Elastic MapReduce — all cloud-related courses with applications in the field of machine learning. Artificial intelligence (AI) art generation also showed an uptick in interest, as did illustration.

The report also summarizes the top three surging skills by popularity for 15 countries. Despite the increase in topic consumption, ChatGPT only managed to break into the top three for the U.S. market, where it sits at the top spot. Artificial intelligence topped the list in Argentina and came in second in Canada.

Also of note, manufacturing, government, and financial services topped the list of surging industries, with related skills seeing outsized growth.

Beyond the tech industry, the report provides figures for skills in the “professional power skills” category. Leading the list is “teaching.” With a 764% increase in topic attention in Q1 2023, related skills were second only to ChatGPT in consumption. 

While the report doesn’t state any direct conclusions, it does include a quote from instructor Diego Davilla, who says:

“Having a comprehensive understanding of ChatGPT and other emerging AI technologies will be imperative to quickly pivot in today’s era of rapid digital transformation.”

Chatbot technologies are already impacting the cryptocurrency world, with advanced trading bots, capable of interfacing with third-party plugins built on ChatGPT and similar platforms, becoming increasingly popular.

Related: 5 free artificial intelligence courses and certifications

But the Udemy report also indicates that technologies underpinning blockchain development are seeing a rise in interest as well. Python certifications saw an uptick of 272% and FastAPI skills consumption increased by 102% — both are widely used in the development of blockchain tech.
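As a hypothetical illustration of why Python features so heavily in blockchain work, the core idea of a hash-linked chain of blocks can be sketched in a few lines of standard-library code (the block layout here is invented for the example):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

chain: list = []
add_block(chain, "genesis")
add_block(chain, "second block")

# Because each block commits to its predecessor, tampering with an
# earlier block breaks the link
assert chain[1]["prev_hash"] == block_hash(chain[0])
chain[0]["data"] = "tampered"
assert chain[1]["prev_hash"] != block_hash(chain[0])
```

Production blockchains add consensus, signatures and networking on top, but the hash-linking shown here is the data structure they all share.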

Scientists in Texas developed a GPT-like AI system that reads minds

A new study demonstrates how the tech underlying ChatGPT can decode brain scans; recent AI progress indicates this could have implications for blockchain and Web3.

Researchers at the University of Texas at Austin have developed an artificial intelligence (AI) system capable of interpreting and reconstructing human thoughts. 

The scientists recently published a paper in Nature Neuroscience exploring the use of AI to non-invasively translate human thoughts into words in real time.

According to the researchers, current methods for decoding thoughts into words are either invasive — meaning they require surgical implantation — or limited in that they “can only identify stimuli from among a small set of words or phrases.”

The team at Austin circumvented these limitations by training a neural network to decode functional magnetic resonance imaging (fMRI) signals from multiple areas of the human brain simultaneously.

In conducting this experiment, the researchers had several test subjects listen to hours of podcasts while an fMRI machine non-invasively recorded their brain activity. The resulting data was then used to train the system on a specific user’s thought patterns.

After the training, test subjects had their brain activity monitored again while listening to podcasts, watching short films and silently imagining telling a story. During this part of the experiment, the AI system was fed the subjects' fMRI data and decoded the signals into plain language in real time.

According to a press release from the University of Texas at Austin, the AI was able to get things right approximately 50% of the time. The results, however, aren’t exact — the researchers designed the AI to convey the general ideas being thought about, not the exact words being thought.

Fortunately for anyone concerned about having their thoughts infiltrated by AI against their will, the scientists are very clear that this isn’t currently a possibility.

The system only functions if it’s trained on a specific user’s brainwaves. This makes it useless for scanning individuals who haven’t spent hours providing fMRI data. And even if such data was generated without a user’s permission, the team ultimately concludes that both the decoding of the data and the machine’s ability to monitor thoughts in real time require active participation on the part of the person being scanned.

However, the researchers did note that this might not always be the case:

“[O]ur privacy analysis suggests that subject cooperation is currently required both to train and use the decoder. However, future developments might enable decoders to bypass these requirements. Moreover, even if decoder predictions are inaccurate without subject cooperation, they could be intentionally misinterpreted for malicious purposes.”

In related news, a team of researchers in Saudi Arabia recently developed a method for improving precision in diagnosing brain tumors by processing MRI scans through a blockchain-based neural network.

In their paper, the Saudi researchers demonstrate how processing cancer research on a secure, decentralized blockchain can improve precision and reduce human error.

Related: What is Immutable, explained

While both aforementioned experiments are cited as early work in their respective research papers, it’s worth noting that the technology used in each is widely available.

The AI underlying the experiments conducted by the team at the University of Texas at Austin is a generative pre-trained transformer (GPT), the same technology that ChatGPT, Bard and similar large language models are built on.

And the Saudi Arabian team’s cancer research was conducted using AI that was trained on Nvidia GTX 1080s, GPUs that have been available since 2016.

Realistically speaking, there’s nothing stopping a clever developer (with access to an fMRI machine) from combining the two ideas in order to develop an AI system that can read a person's thoughts and record them to the blockchain.

This could lead to a "proof-of-thought" paradigm, wherein perhaps people could mint nonfungible tokens (NFTs) of their thoughts or record immutable ledgers of their feelings and ideas for posterity, legal purposes or just bragging rights.

Thought-to-blockchain NFT minting, for example, could have implications for copyright and patent applications, where the blockchain serves as proof of exactly when a thought or idea was recorded. It could also allow celebrity thinkers such as Nobel laureates or contemporary philosophers to codify their ideas in an immutable record, one that could be commoditized and sold as collectible digital assets.


5 Free artificial intelligence courses and certifications

Discover five free AI courses and certifications to help you expand your knowledge of artificial intelligence and machine learning.

Learning artificial intelligence (AI) is becoming increasingly important for both technical and non-technical professionals, as it has the potential to revolutionize various industries and provide innovative solutions to complex problems. With free AI courses and online certifications, individuals can acquire the necessary knowledge and skills to stay relevant in today’s rapidly evolving job market.

The Machine Learning Specialization by DeepLearning.AI and Stanford Online

The Machine Learning Specialization by DeepLearning.AI and Stanford Online is a foundational online program that provides a broad introduction to modern machine learning. This three-course specialization is taught by Andrew Ng, an AI visionary who has led critical research at Stanford University and groundbreaking work at Google Brain, Baidu, and Landing.AI to advance the AI field.

Other notable instructors include Eddy Shyu, curriculum product manager at DeepLearning.AI; Aarti Bagul, a curriculum engineer; and Geoff Ladwig, another top instructor at DeepLearning.AI.

The first course in the specialization is “Supervised Machine Learning: Regression and Classification.” It covers building machine learning models in Python with the popular libraries NumPy and scikit-learn, and training supervised models for prediction and binary classification tasks, including linear regression and logistic regression.
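To give a flavor of what that first course builds toward, here is a minimal linear regression sketch in plain NumPy (the course also works with scikit-learn); the toy data and the closed-form least-squares fit are this article’s illustration, not course material:

```python
import numpy as np

# Toy data: y = 2x + 1 plus a little noise
rng = np.random.default_rng(0)
X = np.linspace(0, 10, 50)
y = 2 * X + 1 + rng.normal(0, 0.1, size=X.shape)

# Least-squares fit of y ≈ w*x + b via the normal equations
A = np.column_stack([X, np.ones_like(X)])    # design matrix [x, 1]
w, b = np.linalg.lstsq(A, y, rcond=None)[0]

print(round(float(w), 2), round(float(b), 2))  # close to the true slope 2 and intercept 1
```

The same fit can, of course, be obtained with scikit-learn’s `LinearRegression`; the point here is only to show what “training a regression model” means at its simplest.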

The second course is “Advanced Learning Algorithms.” It teaches building and training a neural network with TensorFlow to perform multiclass classification, applying best practices so that models generalize to real-world data and tasks, and building and using decision trees and tree ensemble methods, including random forests and boosted trees.
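TensorFlow is the course’s tool of choice; purely to make “multiclass classification” concrete without that dependency, here is a minimal linear softmax classifier trained with batch gradient descent in plain NumPy. The toy clusters, learning rate and iteration count are invented for this sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

# Three well-separated 2-D clusters, 40 points per class
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.vstack([c + rng.normal(0, 0.5, size=(40, 2)) for c in centers])
y = np.repeat(np.arange(3), 40)

# Linear softmax model trained by batch gradient descent on cross-entropy
Y = np.eye(3)[y]                 # one-hot targets
W = np.zeros((2, 3))
b = np.zeros(3)

for _ in range(500):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)             # softmax probabilities
    grad = (p - Y) / len(X)                       # cross-entropy gradient
    W -= 0.1 * X.T @ grad
    b -= 0.1 * grad.sum(axis=0)

accuracy = (p.argmax(axis=1) == y).mean()
print(accuracy)  # near-perfect on this separable toy data
```

A TensorFlow version would replace the hand-written loop with a `Dense` layer, a softmax activation and `model.fit`, but the underlying computation is the same.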

The third and final course is “Unsupervised Learning, Recommenders, Reinforcement Learning.” It covers unsupervised learning techniques, including clustering and anomaly detection; building recommender systems with both collaborative filtering and a content-based deep learning approach; and building a deep reinforcement learning model.
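To make “clustering” concrete, here is a bare-bones k-means sketch in NumPy; the two-blob toy data, the seed and the fixed iteration count are illustrative assumptions, not course code:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two well-separated 2-D blobs of 50 points each
X = np.vstack([rng.normal(0, 0.5, size=(50, 2)),
               rng.normal(8, 0.5, size=(50, 2))])

# Minimal k-means: alternate nearest-centroid assignment and centroid updates
centroids = X[rng.choice(len(X), size=2, replace=False)]
for _ in range(20):
    dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
    labels = dists.argmin(axis=1)          # assign each point to its nearest centroid
    centroids = np.array([
        X[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
        for k in range(2)
    ])

# The two centroids end up near the blob centers, (0, 0) and (8, 8)
print(np.sort(centroids[:, 0]).round())
```

Production code would use `sklearn.cluster.KMeans`, which adds smarter initialization and a convergence test, but the assign-then-update loop above is the whole algorithm.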

By the end of the specialization, learners will have mastered key concepts and gained the practical know-how to apply machine learning to challenging real-world problems. For those looking to break into AI or build a career in machine learning, the Machine Learning Specialization is a great place to start.

CS50’s Introduction to Artificial Intelligence with Python by Harvard University

CS50’s Introduction to Artificial Intelligence with Python, offered by Harvard University, is an introductory course exploring the concepts and algorithms behind modern artificial intelligence. The course is free on edX, but students can purchase a verified certificate for a fee. The instructors are David J. Malan, Gordon McKay Professor of the Practice of Computer Science at Harvard University, and Brian Yu, senior preceptor in computer science at Harvard University.

Students will dive into the ideas that give rise to technologies like game-playing engines, handwriting recognition and machine translation. This course teaches students how to incorporate machine learning concepts and algorithms into Python programs through a series of hands-on projects.

Related: A brief history of artificial intelligence

Students will gain exposure to the theory behind graph search algorithms, classification, optimization, reinforcement learning and other topics in artificial intelligence and machine learning. By the end of the course, students will have hands-on experience with machine learning libraries and a grasp of artificial intelligence principles that will enable them to design intelligent systems of their own.
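For a taste of the graph search material, here is a minimal breadth-first search in Python that returns a shortest path; the toy graph is invented for illustration and is not taken from the course:

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search returning a shortest path as a list of nodes."""
    frontier = deque([[start]])   # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # goal unreachable

# Small undirected graph as an adjacency list
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
print(bfs_path(graph, "A", "E"))  # → ['A', 'B', 'D', 'E']
```

Swapping the queue for a stack turns this into depth-first search, and adding a cost-based priority queue yields the A* variant also covered in courses like this one.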

AI For Everyone by Coursera in collaboration with DeepLearning.AI

AI for Everyone is an online course offered by Coursera in collaboration with DeepLearning.AI. This course is designed for non-technical learners who want to understand AI concepts and their practical applications. It provides an overview of AI and its impact on the world, covering the key concepts of machine learning, deep learning and neural networks.

The course is taught by Andrew Ng, a renowned AI expert and founder of DeepLearning.AI. He is also a co-founder of Coursera and has previously taught popular online courses on machine learning, neural networks and deep learning. The course consists of four modules, each covering a different aspect of AI. These are:

  • What is AI?
  • Building AI projects
  • Building AI in your company
  • AI and society

The course is self-paced and takes approximately 10 hours to complete. It includes video lectures, quizzes and case studies that help students apply the concepts they have learned.

The course is free to audit on Coursera, and financial aid is available for those who cannot afford the fee. A certificate of completion is also available for a fee.

Machine Learning Crash Course with TensorFlow APIs by Google

The Machine Learning Crash Course with TensorFlow APIs is a free online course offered by Google. It’s designed for beginners who want to learn about machine learning and how to use TensorFlow, a popular open-source library for building and deploying machine learning models.

The course covers the following topics:

  • Introduction to machine learning and TensorFlow
  • Linear regression
  • Classification
  • Neural networks
  • Regularization
  • Training and validation
  • Convolutional neural networks
  • Natural language processing
  • Sequence models

Throughout the course, you’ll learn about different machine learning techniques and how to use TensorFlow application programming interfaces (APIs) to build and train models. The course also includes hands-on exercises and coding assignments that will help you gain practical experience building and deploying machine learning models.

The course is available for free on Google’s website, and is self-paced so that you can learn at your own speed. Upon completion, you’ll receive a certificate of completion from Google.

Related: 5 emerging trends in deep learning and artificial intelligence

Introduction to AI by Intel

The Intel® AI Fundamentals Course is an introductory-level course that teaches the fundamentals of artificial intelligence and its applications. It covers topics such as machine learning, deep learning, computer vision, natural language processing and more. The free and self-paced course includes modules that can be completed in any order.

The eight-week program includes lectures and exercises. Each week, students are expected to spend 90 minutes completing the coursework. The exercises are implemented in Python, so prior knowledge of the language is recommended, but students can also learn it along the way.

The course does not offer a certificate of completion, but students can earn badges for completing each module. The course is designed for software developers, data scientists and others interested in learning about AI.

Ready to join the AI revolution?

By taking advantage of the above resources, individuals can become part of the growing AI industry and contribute to shaping its future. Additionally, the ChatGPT Prompt Engineering for Developers course, developed in collaboration with OpenAI, offers developers the opportunity to learn how to use large language models (LLMs) to build powerful applications in a cost-effective and efficient manner. The course is taught by two renowned experts in the field of AI: Isa Fulford and Andrew Ng. 

Whether a learner is a beginner or an advanced machine learning engineer, this course provides an up-to-date understanding of prompt engineering and best practices for prompting the latest LLMs. Through hands-on practice, learners will use LLM APIs for tasks including summarizing, inferring, transforming and expanding text, as well as building a custom chatbot. The course is free for a limited time, so don’t miss the opportunity to join the AI revolution.
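One practice the course emphasizes is fencing untrusted input off from the instruction with delimiters. Here is a hedged sketch of that idea in plain Python; the function name and delimiter choice are this article’s illustration, and the actual LLM API call the course pairs it with is omitted:

```python
DELIM = "###"

def summarization_prompt(text: str, max_words: int = 30) -> str:
    """Build a summarization prompt that separates the input text from
    the instruction with a delimiter, a common prompt-engineering practice."""
    return (
        f"Summarize the text delimited by {DELIM} "
        f"in at most {max_words} words.\n"
        f"{DELIM}{text}{DELIM}"
    )

prompt = summarization_prompt("Large language models can summarize long documents.", 20)
# In the course, a prompt like this is then sent to an LLM chat API.
print(prompt)
```

The delimiter makes it harder for instructions embedded in the input text to override the task, a simple defense against prompt injection.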
