XAI

Elon Musk denies Tesla will pay xAI for AI technology partnership

The clarification comes after a news report stated that Musk’s AI startup had discussed sharing its technology with Tesla in return for revenue from the carmaker.

Elon Musk, owner of the social media platform X, has clarified that his artificial intelligence startup xAI is not looking to earn revenue from Tesla in exchange for sharing its technology. 

The clarification comes after a Wall Street Journal report claimed that xAI has discussed sharing its technology with Tesla and in return would earn revenue from the carmaker. 

“The xAI models are gigantic, containing, in compressed form, most of human knowledge, and couldn’t possibly run on the Tesla vehicle inference computer, nor would we want them to,” Musk wrote in a Sept. 8 X post. 


OpenAI business users top 1M, targets premium ChatGPT subscriptions

OpenAI is looking to introduce more expensive subscription plans for upcoming large language models such as the Strawberry and Orion AI models.

OpenAI’s paid users across its business segment, including ChatGPT Enterprise, Team and Edu, grew nearly 67% since April to cross one million on Sept. 5. The San Francisco-based artificial intelligence firm’s chatbot continues to thrive due to its advanced language model.

According to a Reuters report, OpenAI’s business products have grown to reach one million users, up from 600,000 in April.

OpenAI reportedly plans to introduce higher-priced subscription plans for its upcoming large language models, such as the Strawberry and Orion AI models. The creator of ChatGPT is considering subscription plans that could cost up to $2,000 per month.


Tesla investors sue Elon Musk for diverting resources, talent to xAI

Shareholders accused Tesla boss Elon Musk of “brazen disloyalty” with his xAI startup that created “billions in AI-related value at a company other than Tesla.”

Tesla shareholders sued CEO Elon Musk and the vehicle maker’s board on Thursday, claiming Musk’s xAI startup is a “competing company” taking artificial intelligence talent and resources from the firm.

The lawsuit comes the same day shareholders voted to restore Musk’s $44.9 billion pay package, which a Delaware judge threw out in January.

Cleveland Bakers and Teamsters Pension Fund, Daniel Hazen and Michael Giampietro filed the June 13 stockholder complaint in Delaware’s Chancery Court on behalf of Tesla.


Elon Musk drops lawsuit against OpenAI CEO Sam Altman

Musk’s decision came one day before a federal judge was set to decide whether to dismiss the case or allow it to proceed to the next stage.

Elon Musk has moved to withdraw his lawsuit against OpenAI and its CEO Sam Altman — which accused the artificial intelligence firm of deviating from its original mission to develop AI to benefit humanity, not for profit.

Musk’s attorneys requested to drop the breach of contract lawsuit without prejudice, according to court filings in the San Francisco Superior Court on June 11.

The dismissal without prejudice means the case isn’t dismissed forever and thus allows Musk to file again in the future.


Trader Says Two Ethereum Rivals Could Outperform Crypto Market, Predicts Rally for Low-Cap Altcoin

A popular crypto strategist is naming two Ethereum (ETH) challengers that he thinks will rise faster than the rest of the market once conditions improve. Pseudonymous analyst The Crypto Dog tells his 776,800 followers on the social media platform X that he’s long-term bullish on Near (NEAR). According to the trader, NEAR has been one […]


Trader Issues Warning on Solana Rival That’s Up 247% in One Month, Updates Outlook on LDO and One Other Altcoin

A widely followed crypto strategist warns that a Solana (SOL) competitor that’s up more than 3x in the last 30 days is flashing signals that it may have printed a short-term top. The pseudonymous analyst known as Altcoin Sherpa tells his 205,700 followers on the social media platform X that while layer-1 blockchain Sei (SEI) […]


Top Crypto Exchange Binance Announces Upcoming Support for Soon-To-Be Launched Gaming Altcoin

Top global crypto exchange Binance plans to list an upcoming gaming altcoin via its Launchpool platform. Binance Launchpool allows users to stake coins to farm new assets. The platform’s 43rd project will be the gaming blockchain Xai (XAI), which the exchange plans to list on January 9th. Between January 5th and the 9th, Binance users […]


Elon Musk launches AI chatbot ‘Grok’ — says it can outperform ChatGPT

Grok costs $16 per month on X Premium Plus. But for now it is only offered to a limited number of users in the United States.

Elon Musk and his artificial intelligence startup xAI have released “Grok” — an AI chatbot which can supposedly outperform OpenAI’s first iteration of ChatGPT in several academic tests.

The motivation behind building Grok is to create AI tools equipped to assist humanity by empowering research and innovation, Musk and xAI explained in a Nov. 5 X (formerly Twitter) post.

Musk and the xAI team said a “unique and fundamental advantage” possessed by Grok is that it has real-time knowledge of the world via the X platform.

“It will also answer spicy questions that are rejected by most other AI systems,” Musk and xAI said. “Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!”

The engine powering Grok — Grok-1 — was evaluated in several academic tests in mathematics and coding, performing better than ChatGPT-3.5 in all tests, according to data shared by xAI.

However, it didn’t outperform OpenAI’s most advanced version, GPT-4, in any of the tests.

“It is only surpassed by models that were trained with a significantly larger amount of training data and compute resources like GPT-4,” Musk and xAI said. “This showcases the rapid progress we are making at xAI in training LLMs with exceptional efficiency.”

The AI startup noted that Grok will be accessible on X Premium Plus at $16 per month. But for now, it is only offered to a limited number of users in the United States.

Grok remains a “very early beta product” that should improve rapidly from week to week, xAI noted.

Related: Twitter is now worth half of the $44B Elon Musk paid for it: Report

The xAI team said they will also implement more safety measures over time to ensure Grok isn’t used maliciously.

“We believe that AI holds immense potential for contributing significant scientific and economic value to society, so we will work towards developing reliable safeguards against catastrophic forms of malicious use.”

“We believe in doing our utmost to ensure that AI remains a force for good,” xAI added.

The AI startup's launch of Grok comes eight months after Musk founded the firm in March.

Magazine: Hall of Flame: Peter McCormack’s Twitter regrets — ‘I can feel myself being a dick’

Dogecoin Co-Founder Has High Hopes for Billionaire Elon Musk’s New AI Venture, Calling It ‘Really Interesting’

The co-founder of the popular memecoin Dogecoin (DOGE) has enthusiasm for billionaire Elon Musk’s new artificial intelligence (AI) project. Earlier this week, Musk launched his artificial intelligence startup project, xAI, as a means of competing with chatbot ChatGPT, a prominent AI tool. According to Musk, who co-founded OpenAI in 2015, the firm that created ChatGPT, […]


What is explainable AI (XAI)?

Learn about artificial intelligence, its transparency challenges, and how explainable AI increases accountability and interpretability.

What are the limitations of explainable AI?

XAI has several limitations, some of which relate to its implementation. For instance, engineers tend to focus on functional requirements, and even when they do not, algorithms are often developed over time by large teams of engineers. This complexity makes a holistic understanding of the development process and of the values embedded within AI systems less attainable.

Moreover, “explainable” is an open-ended term, which raises other crucial questions when considering XAI’s implementation. Embedding explainability in, or deducing it from, AI’s code and algorithms may be theoretically preferable but practically problematic, because there is a clash between the prescribed nature of algorithms and code on the one hand and the flexibility of open-ended terminology on the other.

Indeed, when AI’s interpretability is tested by looking at the most critical parameters and factors shaping a decision, questions such as what amounts to “transparent” or “interpretable” AI arise. How high are such thresholds?

Finally, it is widely recognized that AI development advances at an exponential pace. Combining this exponential growth with unsupervised and deep learning systems, AI could, in theory, find ways to become generally intelligent, opening doors to new ideas, innovation and growth.

To illustrate this, one can consider published research on “generative agents” where large language models were combined with computational, interactive agents. This research introduced generative agents in an interactive sandbox environment consisting of a small town of twenty-five agents using natural language. Crucially, the agents produced believable individual and interdependent social behaviors. For example, starting with only a single user-specified notion that one agent wants to throw a party, the agents autonomously spread invitations to the party to one another.

Why is the word “autonomously” important? One might argue that when AI systems exhibit behavior that cannot be adequately traced back to their individual components, black swan risks or other adverse effects may emerge that cannot be accurately predicted or explained.

The concept of XAI is of somewhat limited use in these cases, where AI quickly evolves and improves itself. Hence, XAI appears insufficient to mitigate potential risks, and additional preventive measures in the form of guidelines and laws might be required. 

As AI continues to evolve, the importance of XAI will only continue to grow. AI systems may be applied for the good, the bad and the ugly. The extent to which AI shapes humanity’s future depends partly on who deploys it and for which purposes, how it is combined with other technologies, and which principles and rules it is aligned with.

XAI could prevent or mitigate some of an AI system’s potential adverse effects. Regardless of the possibility of explaining every decision of an AI system, the existence of the notion of XAI implies that, ultimately, humans are responsible for decisions and actions stemming from AI. And that makes AI and XAI subject to all sorts of interests.

How does explainable AI work?

The principles of XAI center on designing AI systems that are transparent, interpretable and able to provide clear justifications for their decisions. In practice, this involves developing AI models that humans can understand, that can be audited and reviewed, and that are free from unintended consequences, such as biases and discriminatory practices.

Explainability lies in making transparent the most critical factors and parameters shaping AI decisions. While it can be argued that full explainability is impossible at all times due to the internal complexity of AI systems, specific parameters and values can be programmed into AI systems. High levels of explainability are achievable, technically valuable and may drive innovation.

The importance of transparency and explainability in AI systems has been recognized worldwide, with efforts to develop XAI underway for several years. As noted, XAI has several benefits: Arguably, it makes it possible to discover how and why a system made a decision or acted (in the case of embodied AI) the way it did. Transparency is therefore essential because it builds trust and understanding for users while simultaneously allowing for scrutiny.

Explainability is a prerequisite for upholding other “ethical” AI principles, such as sustainability, justness and fairness. Theoretically, it allows for the monitoring of AI applications and AI development. This is particularly important for some use cases of AI and XAI, including applications in the justice system, (social) media, healthcare, finance and national security, where AI models are used to make critical decisions that impact people’s lives and societies at large.

Several ML techniques can serve as examples of XAI. Techniques that increase explainability include decision trees, which provide a clear, visual representation of an AI model’s decision-making process; rule-based systems, in which algorithmic rules are defined in a human-understandable format (at the cost of flexibility in rules and interpretation); Bayesian networks, which are probabilistic models representing causalities and uncertainties; linear models, which show how each input contributes to the output; and comparable interpretability techniques applied to neural networks.
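As a rough illustration of two of the techniques named above, the sketch below fits a small decision tree and a linear model with scikit-learn and prints their human-readable structure. The library, dataset and feature names are illustrative assumptions, not taken from this article.

```python
# A minimal interpretability sketch (assumed tooling: scikit-learn, iris toy dataset).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
feature_names = ["sepal length", "sepal width", "petal length", "petal width"]

# Decision tree: the learned rules print as human-readable if/else branches.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# Linear model: each coefficient shows how strongly an input pushes the output
# toward or away from a class, giving a direct, inspectable explanation.
linear = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(feature_names, linear.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```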

AI’s black box problem vs. XAI’s transparency

Various approaches to achieving XAI include visualizations, natural language explanations and interactive interfaces. To start with the latter, interactive interfaces allow users to explore how the model’s predictions change as input parameters are adjusted. 
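A minimal sketch of that what-if idea, assuming a scikit-learn classifier trained on a toy dataset: one input feature is swept while the others are held fixed, and the resulting change in predicted probabilities is printed, which is the core mechanic behind interactive XAI interfaces.

```python
# What-if analysis sketch: vary one input and observe how the prediction shifts.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

sample = X[0].copy()                      # start from one concrete input
for petal_length in np.linspace(1.0, 6.0, 6):
    sample[2] = petal_length              # feature index 2 = petal length (cm)
    proba = model.predict_proba(sample.reshape(1, -1))[0]
    print(f"petal length = {petal_length:.1f} cm -> class probabilities {np.round(proba, 2)}")
```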

Visualizations like heat maps and decision trees can help individuals visualize the model’s decision-making process. Heat maps showcase color gradients and visually indicate the importance of certain input features, which is the information the (explainable) ML model uses to generate its output or decision. 

Decision trees show an ML model’s decision-making process as a series of branching splits, much as the name suggests. Finally, natural language explanations can provide textual justifications for the AI model’s predictions, making it easier for non-technical users to understand.
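For illustration, the sketch below renders a model’s feature importances as a simple one-row heat map and turns the same numbers into a one-sentence, plain-language explanation. The model, plotting choices and wording are assumptions made for this example.

```python
# Heat map + natural-language explanation sketch (assumed tooling: scikit-learn, matplotlib).
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
feature_names = ["sepal length", "sepal width", "petal length", "petal width"]
model = RandomForestClassifier(random_state=0).fit(X, y)

# Heat map: one row of color-coded importances; brighter cells = more influential inputs.
importances = model.feature_importances_
fig, ax = plt.subplots(figsize=(6, 1.5))
ax.imshow(importances.reshape(1, -1), cmap="viridis", aspect="auto")
ax.set_xticks(range(len(feature_names)))
ax.set_xticklabels(feature_names)
ax.set_yticks([])
plt.tight_layout()
plt.savefig("importance_heatmap.png")

# Natural-language explanation: translate the numbers into a plain sentence.
top = feature_names[int(np.argmax(importances))]
print(f"The model's predictions are driven mostly by {top} "
      f"(importance {importances.max():.2f} out of 1.00).")
```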

It is essential to note that, within the subfield of machine learning, explainable machine learning (XML) concentrates specifically on making ML models more transparent and interpretable, whereas the broader field of XAI encompasses all types of AI systems.

Why is explainable AI (XAI) important?

XAI involves designing AI systems that can explain their decision-making process through various techniques. XAI should enable external observers to understand better how the output of an AI system comes about and how reliable it is. This is important because AI may bring about direct and indirect adverse effects that can impact individuals and societies. 

Just as defining what constitutes AI can be daunting, so can explaining its results and functioning, especially where deep learning systems come into play. To help non-engineers envision how AI learns and discovers new information, one can think of these systems as relying on complex circuits at their core, shaped similarly to the neural networks in the human brain.

The neural networks that facilitate AI’s decision-making are often called “deep learning” systems. It is debated to what extent decisions reached by deep learning systems are opaque or inscrutable, and to what extent AI and its “thinking” can and should be explainable to ordinary humans.

There is debate among scholars regarding whether deep learning systems are truly black boxes or completely transparent. However, the general consensus is that most decisions should be explainable to some degree. This is significant because the deployment of AI systems by state or commercial entities can negatively affect individuals, making it crucial to ensure that these systems are accountable and transparent.

For instance, the Dutch Systeem Risico Indicatie (SyRI) case is a prominent example illustrating the need for explainable AI in government decision-making. SyRI was an AI-based automated decision-making system developed by Dutch semi-governmental organizations that used personal data and other tools to identify potential fraud via opaque processes later classified as black boxes.

The system came under scrutiny for its lack of transparency and accountability, with national courts and international bodies finding that it violated privacy and various human rights. The SyRI case illustrates how governmental AI applications can affect humans by replicating and amplifying biases and discrimination. SyRI unfairly targeted vulnerable individuals and communities, such as low-income and minority populations.

SyRI aimed to find potential social welfare fraudsters by labeling certain people as high risk. As a fraud detection system, SyRI was deployed only to analyze people in low-income neighborhoods, since such areas were considered “problem” zones. Because the state applied SyRI’s risk analysis only in communities already deemed high risk, it is no wonder that more high-risk citizens were found there relative to neighborhoods not considered “high-risk.”

This label, in turn, encouraged stereotyping and reinforced a negative image of the residents of those neighborhoods (even if they were not mentioned in a risk report or qualified as a “no-hit”), because such data entered comprehensive cross-organizational databases and was recycled across public institutions. The case illustrates that where AI systems produce unwanted adverse outcomes such as biases, these may go unnoticed if transparency and external control are lacking.

Besides states, private companies develop or deploy many AI systems in which transparency and explainability are outweighed by other interests. Although it can be argued that the present-day structures enabling AI would not exist in their current forms if it were not for past government funding, a significant and steadily increasing proportion of today’s progress in AI is privately funded. In fact, private investment in AI in 2022 was 18 times higher than in 2013.

Commercial AI “producers” are primarily accountable to their shareholders and thus may be heavily focused on generating economic profits, protecting patent rights and preventing regulation. Hence, when commercial AI systems do not function transparently and enormous amounts of data are privately hoarded to train and improve them, it becomes all the more essential to understand how such systems work.

Ultimately, the importance of XAI lies in its ability to provide insights into the decision-making process of its models, enabling users, producers, and monitoring agencies to understand how and why a particular outcome was created. 

This arguably helps to build trust in governmental and private AI systems. It increases accountability and ensures that AI models are not biased or discriminatory. It also helps prevent low-quality or illegally obtained data from being recycled across public institutions through comprehensive cross-organizational databases that feed algorithmic fraud-detection systems.

What are the basics of artificial intelligence (AI) and explainable AI (XAI)?

In light of the idea that artificial intelligence (AI) systems may function as a black box and are therefore not transparent, explainable AI (XAI) has emerged as a subfield focused on developing systems humans can understand and explain. 

To understand XAI’s basics and goal, one must grasp what AI is. While artificial intelligence as a field of science has a long history and embraces an expanding set of technological applications, there is no globally accepted definition of AI. Europe is at the forefront of developing legal frameworks and ethical guidelines for developing and deploying AI: a groundbreaking 2021 proposal from the European Commission (EC) set out the first legally binding definition of AI.

Per this proposal, AI can be defined as a system that generates outputs such as content, predictions, recommendations or decisions influencing the environments they interact with. Such AI systems are developed in line with one or more techniques and approaches, as discussed below. 

First, they may work with machine learning (ML) models, which fall into categories such as supervised, unsupervised, reinforcement and deep learning. It is important to note that ML is a core component of AI, but not all AI systems work with advanced ML techniques such as deep learning. ML systems can learn and adapt without following explicit instructions. Indeed, not all ML models work toward a preset external goal; some systems are engineered to “reason” toward abstract objectives and thus function without constant human input.

Moreover, AI systems may work with or combine logic and knowledge-based approaches such as knowledge representation or inductive (logic) programming. The former refers to encoding information in a way that an AI system can use (for instance, by defining rules and relationships between concepts). The latter refers to ML models that learn rules or hypotheses from a set of examples. 
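As a toy sketch of the knowledge-representation idea, the snippet below encodes hypothetical facts and rules explicitly so that every conclusion can be traced back to the conditions that produced it. The domain and rules are invented for illustration.

```python
# Rule-based knowledge representation sketch: concepts and relationships as explicit rules.
facts = {"has_feathers": True, "lays_eggs": True, "can_fly": False}

rules = [
    ({"has_feathers": True, "lays_eggs": True}, "is a bird"),
    ({"has_feathers": True, "can_fly": False}, "is a flightless bird"),
]

# Because every rule is stated in a human-readable form, each conclusion can be
# traced back to the exact conditions that triggered it.
for conditions, conclusion in rules:
    if all(facts.get(key) == value for key, value in conditions.items()):
        print(f"Conclusion: the animal {conclusion} (because {conditions})")
```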

An AI system may deploy other methods, such as statistical approaches (techniques deployed to learn patterns or relationships in data) and search and optimization methods, which seek the best solution to a particular problem by searching a large space of possibilities. 

In addition, AI has also been described as “the ability of a non-natural entity to make choices by an evaluative process,” as defined by Jacob Turner, a lawyer, AI lecturer and author. Taking both the definitions of Turner and of the EC, one can deduce that AI systems can often “learn” and, in this matter, influence their environment. Beyond software, AI may also be captured in different forms or be embodied, such as in robotics. 

So what are the other basics of AI? Since AI systems are data-driven, software code and data are two crucial components of AI. In this context, it can be argued that progress in AI is taking place in an era shaped by phenomena such as “software eating the world” (meaning that societies and the economy as a whole have seen an immense and ongoing digital transformation) and the “datafication” of the world, meaning that this digital transformation has been accompanied by an ever-increasing amount of data being generated and collected.

But why should one care? Crucially, how data is captured and processed correlates with how an AI’s set of algorithms is designed. Put simply, algorithms are guidelines that determine how to perform a task through a sequence of rules.

Why is all of this important? AI makes “choices” or generates output based on the data (input) and the algorithms. Moreover, AI may move its decisions away from human input due to its learning nature and the abovementioned techniques and approaches. Those two features contribute to the idea that AI often functions as a black box. 

The term “black box” refers to the challenge of comprehending and controlling the decisions and actions of AI systems and algorithms, potentially making control and governance over these systems difficult. Indeed, it brings about various transparency and accountability issues with different corresponding legal and regulatory implications. 

Black box problem: the internal behavior of the code is unknown

This is where explainable AI (XAI) comes into play. XAI aims to provide human-understandable explanations of how an AI system arrives at a particular output. It is specifically aimed at providing transparency in the decision-making process of AI systems.
