
Generative AI

Andreessen Horowitz raises $7.2B for new venture funds

The venture firm is putting $600 million of its billions into a new gaming fund — which includes Web3, GameFi and AI-integrated gaming projects.

Venture capital firm Andreessen Horowitz (a16z) said it raised $7.2 billion to invest across several tech sectors, including gaming and artificial intelligence — but isn’t putting any more toward crypto.

The firm’s “Growth” venture strategy — a bundle of funds backing a range of early-stage startups — will receive the largest chunk of the raise at $3.75 billion. Its “Infrastructure” and “Apps” strategies will receive $1.25 billion and $1 billion, respectively, a16z said in an April 16 statement.

Its Infrastructure strategy mostly focuses on funding teams in the AI, computing and data industries, while the Apps funds focus on consumer, enterprise and fintech application builders.



Even the Pope has something to say about artificial intelligence

The explosion of artificial intelligence this year hasn’t escaped the Vatican, with Pope Francis warning of its dangers in a hefty 3,400-word letter ahead of the World Day of Peace on Jan. 1.

Over the past year, there’s been no shortage of scientists, tech CEOs, billionaires and lawmakers sounding the alarm over artificial intelligence — and now, even the Pope wants to talk about it too.

In a hefty 3,412-word letter dated Dec. 8, Pope Francis — the head of the Catholic Church — warned of the potential dangers of AI to humanity and what needs to be done to control it. The letter came as the Roman Catholic Church prepares to celebrate World Day of Peace on Jan. 1, 2024.

Pope Francis wants to see an international treaty to regulate AI to ensure it is developed and used ethically — otherwise, we risk falling into the spiral of a “technological dictatorship.”



Amazon launches ‘Q’ — a ChatGPT competitor purpose-built for business

Employees in HR, legal, product management, design, manufacturing and operations departments will benefit from Q, said AWS CEO Adam Selipsky.

Amazon has launched its own artificial intelligence-powered assistant built for business, “Amazon Q.”

The AI chatbot can be used to have conversations, solve problems, generate content, gain insights and connect with a company’s information repositories, code, data and enterprise systems, Amazon Web Services said in a Nov. 28 announcement.

Q is part of Amazon’s broader strategy to integrate generative AI across its product ecosystem, on both consumer and private sector fronts, and the company hopes the tool will prove handy to employees:

“Amazon Q provides immediate, relevant information and advice to employees to streamline tasks, accelerate decision-making and problem-solving, and help spark creativity and innovation at work.”

Employees in HR, legal, product management, design, manufacturing and operations will benefit from Q, AWS CEO Adam Selipsky said in a Nov. 28 CNBC interview.

He noted that Q is trained on 17 years of AWS data.

Conversation tab on Amazon Q. Source: Amazon Web Services

AWS’s largest customers include investment firm Vanguard, consulting giant Deloitte, electronics maker Samsung, telecommunications company Verizon and entertainment conglomerate Disney — whose employees could leverage the AI chatbot when a more complete version is rolled out.

It is currently offered only in preview in AWS’s Oregon and Northern Virginia regions in the United States.

Related: AI companies commit to safe and transparent AI — White House

Amazon’s Q is unrelated to Q*, an AI project by ChatGPT creator OpenAI — which was at the center of controversy last week when co-founder and CEO Sam Altman was sacked and then reinstated as CEO.

Amazon has been a big investor in the AI space, placing a $4 billion bet on Anthropic — the team behind the Claude 2 chatbot — across several investments. Anthropic draws much of its computational power from AWS.

Two of Amazon’s largest competitors made their own generative AI releases earlier in 2023, with Google launching its Bard chatbot and Meta its LLaMA large language model, while Microsoft has invested about $13 billion into OpenAI.

Magazine: AI Eye: AI travel booking hilariously bad, 3 weird uses for ChatGPT, crypto plugins


US Space Force pauses use of ChatGPT-like tools due to security fears: Report

At least 500 Space Force staff members have been affected, according to the department’s former chief software officer.

The United States Space Force has temporarily banned its staff from using generative artificial intelligence (AI) tools while on duty to protect government data, according to reports.

Space Force members were informed that they “are not authorized” to use web-based generative AI tools — which create text, images and other media — unless specifically approved, according to an Oct. 12 report by Bloomberg, citing a memorandum addressed to the Guardian Workforce (Space Force members) on Sept. 29.

Generative AI “will undoubtedly revolutionize our workforce and enhance Guardian’s ability to operate at speed,” Lisa Costa, the Space Force’s deputy chief of space operations for technology and innovation, reportedly said in the memorandum.

However, Costa cited concerns over current cybersecurity and data handling standards, explaining that AI and large language model (LLM) adoption needs to be more “responsible.”

The United States Space Force is a space service branch of the U.S. Armed Forces tasked with protecting the U.S. and allied interests in space.

The Space Force’s decision has already impacted at least 500 individuals using a generative AI platform called “Ask Sage,” according to Bloomberg, citing comments from Nick Chaillan, former chief software officer for the United States Air Force and Space Force.

Chaillan reportedly criticized the Space Force’s decision. “Clearly, this is going to put us years behind China,” he wrote in a September email to Costa and other senior defense officials.

“It’s a very short-sighted decision,” Chaillan added.

Chaillan noted that the U.S. Central Intelligence Agency and its departments have developed generative AI tools of their own that meet data security standards.

Related: Data protection in AI chatting: Does ChatGPT comply with GDPR standards?

Concerns that LLMs could leak private information to the public have weighed on a number of governments in recent months.

Italy temporarily blocked AI chatbot ChatGPT in March, citing suspected breaches of data privacy rules before reversing its decision about a month later.

Tech giants such as Apple, Amazon, and Samsung are among the firms that have also banned or restricted employees from using ChatGPT-like AI tools at work.

Magazine: Musk’s alleged price manipulation, the Satoshi AI chatbot and more


Iris Energy buys 248 Nvidia GPUs worth $10M for generative AI & Bitcoin mining

Iris Energy has invested $10 million in the latest generation Nvidia GPUs to explore generative AI while it continues to focus on Bitcoin mining.

Nasdaq-listed Iris Energy has bought 248 state-of-the-art Nvidia H100 GPUs for $10 million as it looks to explore opportunities in generative AI in addition to its core business focus on Bitcoin mining.

The firm expects to receive delivery of the 248 GPUs in the coming months and plans to deploy the hardware to serve opportunities in cloud computing.

Iris Energy co-founder and co-CEO Daniel Roberts said the company was looking to leverage its existing data centers to serve generative AI computing requirements.

“We believe demand for sustainable computing is unlikely to go away, and feel we are uniquely positioned to capture ongoing growth in the broader industry; whether that be ASICs for Bitcoin mining, or GPUs for generative AI and beyond.”

Iris Energy operates in regions with an abundance of renewable energy, including wind, solar and hydro power, and has set up its modular data centers near sources of low-cost excess renewable energy, which it monetizes through Bitcoin mining.

Nvidia's H100 Tensor Core GPU.  Source: Nvidia.com.

According to the Iris Energy website, it has four major data center mining facilities: Canal Flats, Mackenzie and Prince George in Canada’s British Columbia, as well as its Childress site in Texas.

Related: Tether CTO stays silent on Bitcoin mining locations

Renewable-powered Bitcoin mining operations continue to attract investment, with Genesis Digital Assets Limited opening a new data center in Sweden in August 2023 that will run on abundant power from the nearby Porjus Hydroelectric Power Station.

Meanwhile, Blockstream recently announced its intent to raise up to $50 million via an official investment note to purchase, store and then sell BTC mining hardware ahead of Bitcoin’s next halving event in 2024.

GPU hardware manufacturer Nvidia has also seen a significant windfall from the rise of AI-powered tools and AI computing, with its total market capitalization eclipsing $1 trillion in May 2023.

Nvidia also recently teased its next-generation GH200 Grace Hopper Superchip, which is touted to be able to process complex generative AI workloads, including large language models, recommender systems and vector databases.

Magazine: Recursive inscriptions: Bitcoin ‘supercomputer’ and BTC DeFi coming soon


High-skill jobs most exposed to AI, impact still unknown: Report

A deep dive into global employment data and trends indicates that AI could have the biggest impact on high-skill jobs.

An employment outlook paper suggests that highly skilled professions are the most exposed to artificial intelligence while its potential impact on employment is yet to be seen.

The Organisation for Economic Co-operation and Development (OECD) released its latest employment report, with a focus on labour demand and widespread shortages amid ongoing high inflation and the resulting fiscal policies around the world.

A key takeaway comes in a chapter dedicated to exploring why there is not yet a significant sign of slowing labour demand due to advancements in AI. Measures of AI exposure indicate that available tools have made the most progress in areas requiring “non-routine, cognitive tasks such as information ordering, memorization and perceptual speed.”

The OECD says these are key qualities of occupations requiring significant training or tertiary education. The research goes on to label “high-skill, white collar jobs” as the most exposed to AI.

Business professionals, managers, chief executives, and science and engineering professionals are listed as the occupations most exposed to AI capabilities. Meanwhile, food preparation assistants, agriculture, forestry and fishery labourers, and cleaners and helpers are named as the occupations least affected by AI.

The publication also takes an in-depth look at evidence of AI’s impact on labour markets, noting that progress in the space has been so fast that it is becoming hard to distinguish AI outputs from those produced by humans.

The report states that the net impact of AI is ambiguous: while AI displaces some jobs, it can also stimulate labour demand by increasing productivity. AI also has the potential to create new tasks, which in part creates new jobs.

“AI will substitute for labour in certain jobs, but it will also create new jobs for which human labour has a competitive advantage.”

Related: AI-related crypto returns rose up to 41% after ChatGPT launched: Study

Meanwhile, negative employment effects from AI advances are hard to find. The OECD cites data showing that high-skill workers have seen employment gains over the past decade compared with low-skill workers.

The chapter also notes that its findings on the impact on specific job levels predate the advent of large language model tools like ChatGPT, and that generative AI could further expand the scope of tasks and jobs that can be automated.

As Cointelegraph previously reported, the AI sector has seen a surge in job seekers, with Google searches for “AI jobs” four times higher than searches for “crypto jobs” during 2021’s peak bull run.

Magazine: ‘Moral responsibility’: Can blockchain really improve trust in AI?


What is DALL-E, and how does it work?

Discover the process of text-to-image synthesis using DALL-E’s autoencoder architecture and learn how it can transform textual prompts into images.

OpenAI created the groundbreaking generative artificial intelligence (AI) model known as DALL-E, which excels at creating distinctive, highly detailed visuals from textual descriptions. In contrast to conventional image generation models, DALL-E can produce original images in response to given text prompts, demonstrating its capacity to comprehend and transform textual concepts into visual representations.

During training, DALL-E makes use of a sizable collection of text-image pairs. It learns to associate visual cues with the semantic meaning of text instructions. DALL-E creates an image from a sample of its learned probability distribution of images in response to a text prompt.

The model creates a visually consistent and contextually relevant image that corresponds with the supplied prompt by fusing the textual input with the latent space representation. As a result, DALL-E is able to produce a wide range of creative pictures from textual descriptions, pushing the limits of generative AI in the area of image synthesis.

How does DALL-E work?

The generative AI model DALL-E can produce incredibly detailed visuals from textual descriptions. To attain this capability, it combines ideas from both language and image processing. Here is a description of how DALL-E works:

Training data

DALL-E is trained on a sizable data set made up of pairs of images and their related text descriptions. These image-text pairs teach the model the link between visual information and its written representation.
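
To make the idea concrete, here is a minimal, hypothetical sketch in PyTorch of how such image-text pairs might be wrapped as a training data set. The class name and structure are illustrative assumptions, not details of DALL-E’s actual pipeline.

```python
# Hypothetical sketch: wrapping image-caption pairs as a PyTorch data set.
from torch.utils.data import Dataset

class TextImageDataset(Dataset):
    def __init__(self, pairs):
        # pairs: a list of (image_tensor, caption_string) tuples
        self.pairs = pairs

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        image, caption = self.pairs[idx]
        # The model consumes both halves of the pair, learning to
        # associate visual content with its written description.
        return image, caption
```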

Autoencoder architecture

DALL-E is built on an autoencoder architecture, which is made up of two primary parts: an encoder and a decoder. The encoder receives an image and compresses it into a lower-dimensional representation called the latent space. The decoder then uses this latent representation to reconstruct an image.
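
As a rough illustration, a bare-bones autoencoder can be written in a few lines of PyTorch. The layer sizes and the 64x64 image resolution below are invented for readability and are not DALL-E’s actual, far larger architecture.

```python
# A minimal, hypothetical convolutional autoencoder: the encoder compresses
# an image into a latent vector; the decoder reconstructs an image from it.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 3x64x64 -> 32x32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # -> 64x16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),        # -> latent vector
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # -> 32x32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # -> 3x64x64
            nn.Sigmoid(),                               # pixel values in [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)           # image -> latent space
        return self.decoder(z), z     # latent space -> reconstructed image
```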

Conditioning on text prompts

DALL-E adds a conditioning mechanism to the conventional autoencoder architecture: while creating images, its decoder is conditioned on text prompts or descriptions, so the prompts shape the appearance and content of the generated image.
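
A hedged sketch of what conditioning can look like in code: a vector embedding of the prompt is fused with the image latent before decoding. The dimensions and the random stand-in “embedding” below are assumptions for illustration only.

```python
# Hypothetical sketch: fusing a text embedding with an image latent so the
# decoder's output reflects the prompt. Real systems use a learned text encoder.
import torch
import torch.nn as nn

latent_dim, text_dim = 256, 128
fuse = nn.Linear(latent_dim + text_dim, latent_dim)  # mixes prompt into latent

z = torch.randn(1, latent_dim)      # image latent from the encoder
prompt = torch.randn(1, text_dim)   # stand-in embedding of a text prompt

# The conditioned latent carries both visual and textual information and
# would be handed to the decoder to produce a prompt-aligned image.
conditioned_z = fuse(torch.cat([z, prompt], dim=1))
```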

Latent space representation

DALL-E learns to map both visual cues and written prompts into a common latent space, which serves as a bridge between the visual and verbal worlds. By conditioning the decoder on particular text prompts, DALL-E can create visuals that correspond with the provided textual descriptions.

Sampling from the latent space

To produce images from text prompts, DALL-E samples points from the learned latent space distribution; these sampled points serve as the decoder’s starting point. By varying the sampled points and decoding them, DALL-E produces visuals that correlate with the given text prompts.
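
The sampling step itself is simple to sketch. The snippet below uses an untrained stand-in decoder and a standard normal distribution purely to show the flow; in a real system, the decoder would be trained and the latent distribution learned.

```python
# Hypothetical sketch: sample a latent point, decode it into pixels.
import torch
import torch.nn as nn

latent_dim = 256
decoder = nn.Sequential(nn.Linear(latent_dim, 3 * 64 * 64), nn.Sigmoid())

z = torch.randn(1, latent_dim)               # sampled latent point
image = decoder(z).reshape(1, 3, 64, 64)     # decoded into an image tensor
print(image.shape)                           # torch.Size([1, 3, 64, 64])
```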

Training and fine-tuning

DALL-E goes through a thorough training procedure utilizing cutting-edge optimization methods. The model is taught to precisely recreate the original images and discover the relationships between visual and textual cues. The model’s performance is improved through fine-tuning, which also makes it possible for it to produce a variety of high-quality images based on various text inputs.

Related: Google’s Bard vs. Open AI’s ChatGPT

Use cases and applications of DALL-E

DALL-E has a wide range of fascinating use cases and applications thanks to its exceptional capacity to produce unique, finely detailed visuals based on text inputs. Some notable examples include:

  • Creative design and art: DALL-E can help designers and artists come up with concepts and ideas visually. It can produce appropriate visuals from textual descriptions of desired visual elements or styles, inspiring and facilitating the creative process.
  • Marketing and advertising: DALL-E can be used to design distinctive visuals for promotional initiatives. Advertisers can provide text descriptions of the desired objects, settings or aesthetics for their brands, and DALL-E can create custom photographs that are consistent with the campaign’s narrative and visual identity.
  • Content creation and media: DALL-E has the capacity to produce visual material for a range of media, including books, periodicals, websites and social media. It can convert text into accompanying images, resulting in aesthetically appealing and engaging multimedia experiences.
  • Product prototyping: By creating visual representations based on verbal descriptions, DALL-E can help in the early stages of product design. The ability of designers and engineers to quickly explore many concepts and variations facilitates the prototyping and iteration processes.
  • Gaming and virtual worlds: DALL-E’s picture production skills can help with game design and virtual world development. It enables the creation of enormous and immersive virtual environments by producing realistically rendered landscapes, characters, objects and textures.
  • Visual aids and accessibility: DALL-E can assist with accessibility initiatives by producing visual representations of text content, such as visualizing textual descriptions for people with visual impairments or developing alternate visual presentations for educational resources.
  • Storytelling and illustration: DALL-E can help in the creation of illustrations and other visual components for a narrative. Authors can provide textual descriptions of objects or people, and DALL-E can produce related images to bolster the narrative and capture the reader’s imagination.

Related: What is Google’s Bard, and how does it work?

ChatGPT vs. DALL-E

ChatGPT is a language model designed for conversational tasks, while DALL-E is an image generation model capable of creating unique images from textual descriptions. The key differences between the two:

  • Purpose: ChatGPT understands and generates text for dialogue and writing tasks; DALL-E synthesizes images from text prompts.
  • Input and output: ChatGPT takes text in and returns text; DALL-E takes text in and returns images.
  • Typical applications: ChatGPT powers chatbots, content writing and summarization; DALL-E supports art, design, marketing visuals and illustration.

Limitations of DALL-E

DALL-E has constraints to take into account despite its capabilities in producing graphics from text prompts. The model might reinforce prejudices present in the training data, potentially perpetuating societal stereotypes or biases. It also lacks contextual awareness beyond the supplied prompt, so it struggles with subtle nuances and abstract descriptions.

The complexity of the model can make interpretation and control difficult. DALL-E often creates very distinctive visuals, but it can have trouble producing alternative versions or covering the full range of potential outcomes, and producing high-quality images can demand considerable effort and processing power.

Additionally, the model might provide absurd but visually appealing results that ignore limitations in the real world. To responsibly manage expectations and ensure the intelligent use of DALL-E’s capabilities, it is imperative to be aware of these restrictions. These restrictions are being addressed in ongoing research in order to enhance generative AI.


What is generative AI?

Generative AI leverages large data sets and sophisticated models to mimic human creativity and produce new images, music, text and more.

Generative artificial intelligence (AI), fueled by advanced algorithms and massive data sets, empowers machines to create original content, revolutionizing fields such as art, music and storytelling. By learning from patterns in data, generative AI models unlock the potential for machines to generate realistic images, compose music and even develop entire virtual worlds, pushing the boundaries of human creativity.

Generative AI, explained

Generative AI is a cutting-edge field that investigates the potential of machine learning to approximate human-like creativity and produce original material. It is the subset of artificial intelligence concerned with creating algorithms that can produce fresh content or replicate the patterns of existing data.

It uses methods like deep learning and neural networks to simulate human creative processes and produce unique results. Generative AI has paved the way for applications ranging from image and audio generation to storytelling and game development by utilizing algorithms and training models on enormous amounts of data.

Both OpenAI’s ChatGPT and Google’s Bard show the capability of generative AI to comprehend and produce human-like writing. They have a variety of uses, including chatbots, content creation, language translation and creative writing. These models’ underlying ideas and methods promote generative AI more broadly and its potential to improve human-machine interactions and artistic expression.

Related: 5 AI tools for translation

This article will explain generative AI, its guiding principles, its effects on businesses and the ethical issues raised by this rapidly developing technology.

Evolution of generative AI

Here’s a summarized evolution of generative AI:

  • 1932: The concept of generative AI emerges with early work on rule-based systems and random number generators, laying the foundation for future developments.
  • 1950s–1960s: Researchers explore early techniques in pattern recognition and generative models, including developing early artificial neural networks.
  • 1980s: The field of artificial intelligence experiences a surge of interest, leading to advancements in generative models, such as the development of probabilistic graphical models.
  • 1990s: Hidden Markov models become widely used in speech recognition and natural language processing tasks, representing an early example of generative modeling.
  • Early 2000s: Bayesian networks and graphical models gain popularity, enabling probabilistic inference and generative modeling in various domains.
  • 2012: Deep learning, specifically deep neural networks, starts gaining attention, revolutionizing the field of generative AI and paving the way for significant advancements.
  • 2014: The introduction of generative adversarial networks (GANs) by Ian Goodfellow propels the field of generative AI forward. GANs demonstrate the ability to generate realistic images and become a fundamental framework for generative modeling.
  • 2015–2017: Researchers refine and improve GANs, introducing variations such as conditional GANs and deep convolutional GANs, enabling high-quality image synthesis.
  • 2018: StyleGAN, a specific implementation of GANs, allows for fine-grained control over image generation, including factors like style, pose and lighting.
  • 2019–2020: Transformers — originally developed for natural language processing tasks — show promise in generative modeling and become influential in text generation, language translation and summarization.
  • Present: Generative AI continues to advance rapidly, with ongoing research focused on improving model capabilities, addressing ethical concerns and exploring cross-domain generative models capable of producing multimodal content.

How does generative AI work?

By training models on enormous volumes of data, generative AI creates new material that closely reflects the patterns and traits of the training data. The procedure involves several crucial elements and steps:

Data collection

The first stage is to compile a sizable data set representing the subject matter or category of content that the generative AI model intends to produce. A data set of tagged animal photos would be gathered, for instance, if the objective was to create realistic representations of animals.

Model architecture

The next step is to select an appropriate generative model architecture. Popular models include transformers, variational autoencoders (VAEs) and GANs. The architecture of the model dictates how the data will be altered and processed to produce new content.
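
To ground the terminology, here is a minimal, hypothetical GAN pair in PyTorch. The layer widths and the flattened 28x28 “image” size are illustrative assumptions, not a production design.

```python
# Hypothetical sketch: a generator maps random noise to samples, and a
# discriminator scores samples as real or fake.
import torch.nn as nn

noise_dim, data_dim = 64, 784  # e.g., flattened 28x28 grayscale images

generator = nn.Sequential(
    nn.Linear(noise_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),      # outputs scaled to [-1, 1]
)

discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),          # probability a sample is real
)
```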

Training

Using the gathered data set, the model is trained. By modifying its internal parameters, the model learns the underlying patterns and properties of the data during training. Iterative optimization is used during the training process to gradually increase the model’s capacity to produce content that closely resembles the training data.
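
For the adversarial case, the whole training loop can be sketched compactly. The toy example below is a hedged illustration, not a production recipe: it trains a generator on one-dimensional synthetic data so the loop fits on screen.

```python
# Hypothetical sketch of GAN training: the discriminator learns to separate
# real from generated samples; the generator learns to fool it.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(200):
    real = torch.randn(64, 1) * 0.5 + 2.0   # toy "training data": N(2, 0.25)
    fake = G(torch.randn(64, 8))

    # Discriminator update: push real scores toward 1, fake scores toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: make the discriminator label fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```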

Generation process

After training, the model can produce new content by sampling from the learned distribution of the training data. For instance, when creating images, the model might use a random noise vector as input to generate a picture that resembles an actual animal.
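
In code, generation reduces to feeding random noise vectors through the trained model. The sketch below uses an untrained stand-in generator purely to show the flow.

```python
# Hypothetical sketch of the generation step: noise in, new samples out.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # stand-in

noise = torch.randn(16, 8)   # 16 random noise vectors
samples = G(noise)           # 16 newly generated samples
print(samples.shape)         # torch.Size([16, 1])
```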

Evaluation and refinement

The created material is examined to assess its quality and how closely it conforms to the intended attributes. Depending on the application, evaluation metrics and human input may be used to improve the generated output and refine the model. Iterative feedback loops contribute to the improvement of the content’s diversity and quality.
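
As one hedged example of an automatic signal, a reconstruction-style model can be scored with mean squared error against reference data; real-world image evaluation typically adds distribution-level metrics (such as FID) and human review. The tensors below are random stand-ins.

```python
# Hypothetical sketch: a crude quality signal via mean squared error.
import torch

generated = torch.rand(8, 3, 64, 64)   # stand-in generated images
reference = torch.rand(8, 3, 64, 64)   # stand-in reference images

mse = torch.mean((generated - reference) ** 2)
print(f"reconstruction MSE: {mse.item():.4f}")   # lower is better
```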

Fine-tuning and transfer learning

Pre-trained models may occasionally serve as a starting point for transfer learning and for fine-tuning on specific data sets or tasks. Transfer learning is a strategy that enables models to apply knowledge from one domain to another and perform better with less training data.
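
A common concrete form of this is freezing a pretrained backbone and training only a small new head. The sketch below assumes torchvision’s ResNet-18 as the pretrained model; the 10-class head is an invented example.

```python
# Hypothetical sketch of transfer learning: freeze the backbone, train the head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():
    param.requires_grad = False               # keep pretrained knowledge fixed

model.fc = nn.Linear(model.fc.in_features, 10)  # new head for 10 target classes

# Only the new head's parameters are handed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```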

It’s crucial to remember that the precise operation of generative AI models can change based on the chosen architecture and methods. The fundamental idea is the same, though: the models discover patterns in training data and produce new content based on those discovered patterns.

Applications of generative AI

Generative AI has transformed how we generate and interact with content, finding applications across a variety of industries. In the visual arts, it now enables the production of realistic visuals and animations.

The ability of artists to create complete landscapes, characters and scenarios with astounding depth and complexity has opened up new opportunities for digital art and design. In music, generative AI algorithms can create unique melodies, harmonies and rhythms, assisting musicians in their creative processes and providing fresh inspiration.

Beyond the creative arts, generative AI has significantly impacted fields like gaming and healthcare. It has been used in healthcare to generate artificial data for medical research, enabling researchers to train models and investigate new treatments without jeopardizing patient privacy. Gamers can experience more immersive gameplay by creating dynamic landscapes and nonplayer characters (NPCs) using generative AI.

Ethical considerations

The development of generative AI has enormous potential, but it also raises significant ethical questions. One major cause for concern is deepfake content, which uses AI-produced content to deceive and influence people. Deepfakes have the power to undermine public confidence in visual media and spread false information.

Additionally, generative AI may unintentionally continue to reinforce biases that are present in the training data. The AI system may produce material that reflects and reinforces prejudices if the data used to train the models is biased. This may have serious societal repercussions, such as reinforcing stereotypes or marginalizing particular communities.

Related: What is explainable AI (XAI)?

Researchers and developers must prioritize responsible AI development to address these ethical issues. This entails integrating systems for openness and explainability, carefully selecting and diversifying training data sets, and creating explicit rules for the responsible application of generative AI technologies.


AI automation could take over 50% of today’s work activity by 2045: McKinsey

Management consulting firm McKinsey & Co believes AI will have the “biggest impact” on high-wage workers.

In just 22 years, generative AI may be able to fully automate half of all work activity conducted today, including tasks related to decision-making, management, and interfacing with stakeholders, according to a new report from McKinsey & Co.

The prediction came in a June 14 report from the management consulting firm, which forecasts that 75% of generative AI value creation will come from customer service operations, marketing and sales, software engineering, and research and development roles.

The firm explained that recent developments in generative AI have “accelerated” its “midpoint” prediction by nearly a decade, from 2053 — its 2016 estimate — to 2045.

McKinsey explained that its broad 2030-2060 range is meant to encompass a spread of outcomes shaped by factors such as the rate at which generative AI is adopted, investment decisions and regulation.

Its previous range for 50% of work being automated was 2035-2070.

McKinsey’s new predicted “midpoint” time at which automation reaches 50% of time on work-related activities has accelerated by eight years to 2045. Source: McKinsey

The consulting firm said, however, the pace of adoption across the globe will vary considerably from country to country:

“Automation adoption is likely to be faster in developed economies, where higher wages will make it economically feasible sooner.”
Early and late scenario midpoint times for the United States, Germany, Japan, France, China, Mexico and India. Source: McKinsey.

Generative AI systems now have the potential to automate work activities that absorb 60-70% of employees’ time today, McKinsey estimated.

Interestingly, the report estimates generative AI will likely have the “biggest impact” on high-wage workers applying a high degree of “expertise” in the form of decision making, management and interfacing with stakeholders.

The report also predicts that the generative AI market will add between $2.6 trillion and $4.4 trillion to the world economy annually and be worth a whopping $15.7 trillion by 2030.

This would provide enormous economic value on top of non-generative AI tools in mainstream use today, the firm said:

“That would add 15 to 40 percent to the $11.0 trillion to $17.7 trillion of economic value that we now estimate nongenerative artificial intelligence and analytics could unlock.”

Generative AI systems are capable of producing text, images, audio and videos in response to prompts by receiving input data and learning its patterns. OpenAI’s ChatGPT is the most commonly used generative AI tool today.

McKinsey’s $15.7 trillion prediction for 2030 is more than three times its $5 trillion prediction for the Metaverse over the same timeframe.

Related: The need for real, viable data in AI

However, the recent growth of generative AI platforms hasn’t come without concerns.

On June 12, the United Nations highlighted “serious and urgent” concerns about generative AI tools producing fake news and misinformation.

Meta CEO Mark Zuckerberg received a grilling from United States senators over the “leaked” release of the firm’s AI model LLaMA, which the senators claimed was potentially “dangerous” and could be used for “criminal tasks.”

Magazine: AI Eye: ‘Biggest ever’ leap in AI, cool new tools, AIs are the real DAOs
