

OpenAI and Microsoft accused of stealing data to train ChatGPT in new class-action suit

The lawsuit alleges that OpenAI’s profits came as a result of using illegally scraped data to train its models.

OpenAI and Microsoft have been named as defendants in yet another class-action lawsuit over their alleged use of web scraping techniques to obtain supposedly private data for use in training ChatGPT and other associated artificial intelligence (AI) models.

The most recent class-action suit was filed on Sept. 5 in San Francisco by a law firm representing a pair of unnamed engineers.

According to a filing registered with the United States District Court for the Northern District of California:

“This class action lawsuit arises from Defendants’ unlawful and harmful conduct in developing, marketing, and operating their AI products, including ChatGPT-3.5, ChatGPT-4.0, Dall-E, and Vall-E (the 'Products'), which use stolen private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge.”

The lawsuit goes on to complain that OpenAI “doubled down on a strategy to secretly harvest massive amounts of personal data from the internet” after restructuring in 2019.

“Without this unprecedented theft of private and copyrighted information belonging to real people,” write the plaintiffs, “the products,” referring to ChatGPT, DALL-E and OpenAI’s other models, “would not be the multi-billion-dollar business they are today.”

According to the filing, the plaintiffs are asking the court to award damages to themselves and to any members of the proposed classes — which could conceivably include anyone whose information was allegedly scraped.

The suit also asks the courts to order the defendants to conduct “nonrestitutionary disgorgement” of profits made as a result of the alleged illegal scraping of data.

Scraping is the practice of using an automated bot, often called a “crawler,” to collect data from the internet. This most recent suit alleges that OpenAI and Microsoft knowingly engaged in “illegal” scraping activity.
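At its simplest, a crawler is a short program that fetches a page, harvests what it finds and then repeats the process on any links it collected. The Python sketch below, using only the standard library and a placeholder URL, illustrates the mechanic; real crawlers also honor robots.txt, rate limits and site terms:

```python
# Minimal illustration of what a scraper/crawler does. The URL is a
# placeholder; a production crawler would also check robots.txt and
# throttle its requests.
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collects every href found in a page's anchor tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = urlopen("https://example.com").read().decode("utf-8", errors="replace")
collector = LinkCollector()
collector.feed(html)
print(collector.links)  # a crawler would queue these URLs and repeat
```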

A previous class-action lawsuit making nearly identical claims against OpenAI and Microsoft was filed in the same court district on June 28. It’s unclear at this time if the court or defendants in the separate cases would consider combining the suits.

Related: US Copyright Office issues notice of inquiry on artificial intelligence

This isn’t the first time Microsoft has been involved in a lawsuit over alleged scraping. The Redmond, Washington-based company sent a cease-and-desist letter on behalf of its LinkedIn brand to data analytics company HiQ in 2019 over its admitted data scraping practices.

In that case, Microsoft and LinkedIn alleged that HiQ had violated the terms-of-service agreement users must accept to log in to the LinkedIn website and access user data. The circuit court initially ruled in favor of HiQ, but after Microsoft appealed, the Supreme Court vacated the judgment.

The case was then remanded to the circuit court, where Microsoft found itself on the winning side. HiQ agreed to a settlement with Microsoft for an undisclosed amount and was ordered to cease its scraping activities.

Microsoft and OpenAI did not immediately respond to requests for comment.


The UK releases key ambitions for global AI summit

Officials in the U.K. have released their priorities for the upcoming global AI summit, which will focus on risk and policy.

The United Kingdom released its five “ambitions” for its global artificial intelligence (AI) safety summit on Sept. 4, with a big focus on risks and on policy to support the technology.

The summit, which will take place on Nov. 1-2, is anticipated to unite thought leaders from around the world, including academics, politicians and major tech companies developing AI, in order to create a common understanding of how to regulate the technology.

According to the announcement, it will primarily focus on “risks created or significantly exacerbated by the most powerful AI systems” and the need for action. It will also focus on how safe AI development can be used for public good and overall quality of life improvement.

Additionally, the summit will touch on a way forward for international collaboration on AI safety and how to support international laws, AI safety measures for individual organizations and areas for “potential collaboration on AI safety research.” 

Related: US, UK intel agencies warn against new crypto malware: Report

The summit will be spearheaded by Jonathan Black and Matt Clifford, U.K. Prime Minister Rishi Sunak’s representatives for the AI Safety Summit.

Sunak called the U.K. a “global leader” in AI regulation and highlighted that his government wants to accelerate AI investment to improve productivity. Earlier this year, it was announced that the U.K. would receive “early or priority access” to Google’s and OpenAI’s newest AI models.

On Aug. 31, the U.K.’s Science, Innovation and Technology Committee (SITC) released a report recommending that Britain align itself with countries holding similar democratic values to safeguard against the misuse of AI by malicious actors.

Prior to that announcement, on Aug. 21, the U.K. government said it would spend $130 million on AI semiconductor chips as part of its effort to create an “AI Research Resource” by mid-2024.

Magazine: AI Eye: Get better results being nice to ChatGPT, AI fake child porn debate, Amazon’s AI reviews


China’s Baidu and other tech companies release ChatGPT-like AI chatbots

Multiple Chinese tech companies launched their own ChatGPT-like AI chatbots for mass-market use two weeks after China’s AI regulations, which require prior government approval, came into force.

Four China-based tech companies launched their own artificial intelligence (AI) chatbots on Aug. 30 for public use after receiving approval from the Chinese government.

Baidu, Baichuan Intelligent Technology, SenseTime and Zhipu AI all launched their chatbots less than two weeks after the government’s official AI rules took effect on Aug. 15, requiring government approval before AI-based products can be launched to the mass market.

In order to receive approval, companies must submit security assessments and other proof of meeting set standards. There are 24 guidelines, which include mandatory labels for artificially created content and holding service providers accountable for anything created through their platform.

According to local Chinese media reports, 11 additional firms have received government approval for AI products, including TikTok owner ByteDance and Tencent Holdings.

Related: Germany proposes screening Chinese investment in AI and related sectors: Report

Baidu likened its new chatbot, Ernie Bot, to the popular ChatGPT application created by Microsoft-backed OpenAI.

According to a local media report, Baidu CEO Robin Li said that by making Ernie Bot available to hundreds of millions of internet users:

“Baidu will collect massive valuable real-world human feedback.”

OpenAI’s chatbot is unavailable in China, where it is geo-blocked. The government reportedly forced local social media platforms, such as WeChat and Weibo, to prevent access to the platform.

After major anticipation of a publicly available AI chatbot like ChatGPT, Baidu posted on social media that less than 12 hours after its release, the app had risen to the No. 1 spot in the Apple App Store’s free app rankings in China.

Prior to the regulations set in place by the government, companies could only conduct public tests of their AI products on a small scale. Under the new rules, companies have widened their tests with more features enabled.

On Aug. 3, Chinese tech and e-commerce giant Alibaba released two open-source AI models to rival Meta’s Llama 2.

Its two large language models (LLMs), Qwen-7B and Qwen-7B-Chat, each have 7 billion parameters and are said to be smaller versions of the Tongyi Qianwen model released in April.

Although these are not chatbots like Ernie Bot or ChatGPT, the releases are a further sign of China’s intention to rival AI developments coming out of the United States.

Magazine: AI Eye: Get better results being nice to ChatGPT, AI fake child porn debate, Amazon’s AI reviews


US Copyright Office issues notice of inquiry on artificial intelligence

The inquiry seeks information and comment on issues related to the content AI produces and how policymakers should treat AI that imitates or mimics human artists.

The United States Copyright Office issued an official request for comments and notice of inquiry on copyright and artificial intelligence (AI) in the Federal Register on Aug. 30. 

According to the filing, the Copyright Office is seeking “factual information and views” on copyright issues raised by recent advances in generative AI models such as OpenAI’s ChatGPT and Google’s Bard.

In a press release sent via email from the Library of Congress and viewed by Cointelegraph, the U.S. Copyright Office stated:

“These issues include the use of copyrighted works to train AI models, the appropriate levels of transparency and disclosure with respect to the use of copyrighted works, the legal status of AI-generated outputs, and the appropriate treatment of AI-generated outputs that mimic personal attributes of human artists.”

Those interested in commenting during the official inquiry period will have until Oct. 18 to do so.

The request comes during a tumultuous time for the AI industry with regard to regulation in the U.S. and around the world. While the EU and other territories have enacted policies to protect citizen privacy and limit how corporations can use, share and sell data, there has been little in the way of regulation concerning the use of copyrighted material to train or prompt AI systems.

Related: British MPs call on government to scrap AI exemptions that hurt artists

As Cointelegraph reported previously, the media industry is grappling with how to deal with the emergence of AI systems capable of imitating the work of creators and artists. The New York Times and other news agencies have taken steps to block web crawlers from AI companies seeking to train their models on their data.

Artists such as comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey have sued OpenAI for allegedly training AI models on copyrighted work without the consent of the owners or creators.

Beyond copyright issues, there are also concerns related to AI involving misalignment (the idea that machines could pursue objectives that clash with the well-being of humanity) and the mass proliferation of misinformation.

The U.S. government has held a series of meetings with stakeholders in the AI community, with the next, a closed-door meeting between Senator Chuck Schumer and Tesla CEO Elon Musk, Alphabet CEO Sundar Pichai, OpenAI CEO Sam Altman and Microsoft CEO Satya Nadella, slated for Sept. 13.


Meta launches community-licensed AI coding tool to the public

Code Llama is available for both personal and business use under the Llama2 community license agreement.

Meta AI announced the launch of “Code Llama,” a community-licensed artificial intelligence (AI) coding tool built on the Llama 2 large language model (LLM), on Aug. 24.

The new tool is a fine-tuned version of Llama 2 that’s been trained specifically for the purpose of generating and discussing computer code.

According to a blog post from Meta, Code Llama is split into several variants, with one model fine-tuned for general coding in a number of languages (including Python, C++, Java, PHP, TypeScript, C#, Bash and more).

Other models include Code Llama Python and Code Llama Instruct. The former is fine-tuned for Python applications. As Meta puts it, this is to further support the AI community:

“Because Python is the most benchmarked language for code generation — and because Python and PyTorch play an important role in the AI community — we believe a specialized model provides additional utility.”

According to Meta, the emphasis in these first two model variants is on understanding, explaining, and discussing code.

Code Llama Instruct, however, is the fine-tuned version of Code Llama that Meta recommends for actually generating code. According to the blog post, it’s been engineered specifically to generate “helpful and safe answers in natural language.”

The models are also available in different parameter sizes in order to operate in different environments. Code Llama comes in 7-billion, 13-billion and 34-billion parameter sizes, each with different capabilities.

Meta says the 7B model, for example, can run on a single GPU. The 13B and 34B models require more substantial hardware but can handle more complex tasks, while the smaller models are faster and better suited to jobs requiring low-latency feedback, such as real-time code completion.
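For developers who want to experiment, the weights Meta published on the Hugging Face Hub can be loaded with the transformers library. The snippet below is a minimal sketch assuming the 7B base checkpoint ("codellama/CodeLlama-7b-hf") and a machine with a suitable GPU:

```python
# A sketch of loading the smallest Code Llama variant with Hugging Face
# transformers. The model ID is the checkpoint Meta published on the Hub;
# device_map="auto" additionally requires the accelerate package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-hf"  # 7B base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```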

Code Llama is generally available under the same community license agreement as Llama 2, meaning it can be used for personal or business purposes with proper attribution.

This could be a massive boon for businesses and individuals with a pressing use case for coding-focused LLMs, such as fintech institutions that are traditionally underserved by the AI and big tech communities.

Web3 innovators, trading bot developers and cryptocurrency exchanges all operate in a constantly shifting environment that, to date, has seen relatively little in the way of dedicated B2B or B2C solutions from big tech for day-to-day crypto and blockchain coding problems.

Related: Naver Corp unveils South Korea’s answer to ChatGPT and generative AI

Dedicated coding tools, such as GitHub’s Copilot (built on OpenAI technology), can go a long way toward aiding developers in these underserved areas, but the cost of use can be prohibitive for some users, and the lack of open-source options can pose problems for proprietary software developers.

The existence of a free-to-use, community-licensed alternative based on Meta’s highly touted Llama 2 LLM could help level the playing field for blockchain and crypto projects with small development teams.


What is OpenAI code interpreter, and how does it work?

Discover the OpenAI code interpreter, an AI tool that translates human language into code. Learn about its functions, benefits and drawbacks in this guide.

Key considerations before using OpenAI code interpreter

When utilizing the OpenAI code interpreter, it is important to understand its capabilities, limitations and potential use cases to maximize its effectiveness. 

Here are some key considerations to bear in mind:

Understanding the model’s limitations

While the OpenAI code interpreter is advanced and capable of comprehending a wide range of programming languages, it is not infallible. It doesn’t “understand” code in the human sense. 

Instead, it recognizes patterns and extrapolates from them, which means it can sometimes make mistakes or give unexpected outputs. Knowing this can help users approach its suggestions with a critical mind.

Data security and privacy 

Given that the model can process and generate code, it’s crucial to consider data security and privacy. Any sensitive or proprietary code should be handled with care. OpenAI retains API data for roughly 30 days but doesn’t use it to improve models. Users should stay up to date with OpenAI’s latest privacy policies.

Oversight and review 

AI tools like the code interpreter can be incredibly helpful, but humans should always review their output. An AI model can generate syntactically correct code that does something harmful or unintended. Therefore, human oversight is essential to ensure the code’s accuracy and safety.

Understanding the training process

The OpenAI code interpreter is trained with reinforcement learning from human feedback (RLHF) on a vast corpus of public text, including programming code. Recognizing the implications of this training process can provide insights into how the model generates its outputs and why it might sometimes produce unexpected results.

Exploration and experimentation 

Like any tool, the more you use the OpenAI code interpreter, the more you’ll understand its strengths and weaknesses. Use it for various tasks to see how it handles different prompts, and experiment with refining your prompts to get the desired results.

Complementing, not replacing, human coders

While the OpenAI code interpreter can automate some coding tasks, it’s not a replacement for human coders. It’s a tool that can augment human abilities, speed up development processes, and aid learning and teaching. However, the creativity, problem-solving abilities and nuanced understanding of a human coder are currently irreplaceable by AI.

Benefits and drawbacks of OpenAI code interpreter

OpenAI code interpreter is a powerful tool, but like any technology, it must be used responsibly and with a clear understanding of its limitations.

Benefits of OpenAI code interpreter 

Code understanding and generation

It can interpret and generate code from natural language descriptions, making it easier for non-programmers to leverage coding solutions.

Versatility

It can handle many tasks, from bug identification to code translation and optimization, and it supports multiple programming languages.

Time efficiency

It can speed up tasks like code review, bug identification and generation of test cases, freeing up time for developers to focus on more complex tasks.

Accessibility

The model bridges the gap between coding and natural language, making programming more accessible to a wider audience.

Continuous learning

The model learns iteratively from human feedback, enabling it to improve its performance over time.

Drawbacks of OpenAI code interpreter 

Limited understanding

The model lacks the depth of understanding a human coder has. It operates based on patterns learned during training rather than an intrinsic understanding of the code.

Dependence on training data 

The quality of the model’s outputs depends on the quality and diversity of its training data. If it encounters code constructs it hasn’t been trained on, it might fail to interpret them accurately.

Error propagation 

If the model makes a mistake in its interpretation or generation of code, it can propagate and lead to more significant issues down the line.

Over-reliance risk

Relying too heavily on the model might lead to complacency among developers, who could skip the crucial step of thoroughly checking the code themselves.

Ethical and security concerns 

The automated generation and interpretation of code can potentially be misused, raising ethical and security questions.

Types of tasks OpenAI code interpreter can handle

The OpenAI code interpreter is a versatile tool capable of handling various tasks related to code interpretation and generation.

Here are some types of tasks that the OpenAI code interpreter can handle:

Code generation

Given a description in natural language, the code interpreter can generate appropriate programming code. This ability benefits those who might not have extensive programming knowledge but need to implement a specific function or feature.
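As a rough illustration of what such a request looks like in practice, the sketch below uses the openai Python package’s pre-1.0 ChatCompletion interface; the key, model choice and prompt are illustrative examples, not a prescribed workflow:

```python
# Hedged sketch using the openai package's pre-1.0 ChatCompletion interface.
# The API key, model name and prompt are placeholders for illustration.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Write a Python function that returns the n-th Fibonacci number.",
    }],
)
print(response["choices"][0]["message"]["content"])
```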

Code review and optimization

The model can review existing code and suggest improvements, offering more efficient or streamlined alternatives. This can be a helpful tool for developers looking to optimize their code.

Bug identification

The code interpreter can analyze a code snippet and identify potential bugs or errors. It can highlight the specific part of the code causing the problem and often suggest ways to fix it.
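A toy illustration of the kind of defect such a tool can catch: the function below is syntactically valid but hides a classic Python pitfall, and the comments show the usual suggested fix:

```python
# A classic Python pitfall the interpreter can flag: a mutable default
# argument is created once and shared across every call.
def append_item(item, items=[]):  # BUG: default list persists between calls
    items.append(item)
    return items

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2]  <- state leaked from the first call

# The usual suggested fix: default to None and create a fresh list inside.
def append_item_fixed(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items
```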

Explaining code

The model can take a piece of code as input and provide a natural language explanation of what the code does. This feature can be invaluable for learning new programming concepts, understanding complex code structures or documenting code.

Code translation

The code interpreter can translate code from one programming language to another. For instance, if you have a Python function that you want to replicate in JavaScript, the model could help with that translation.
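A small, hypothetical illustration of such a request: the Python original, with the kind of JavaScript equivalent the model might return shown in the comment:

```python
# A hypothetical translation request. The Python function is the input;
# the comment shows a plausible JavaScript rendering the model could return.
def celsius_to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# Plausible model output for "translate this to JavaScript":
#   function celsiusToFahrenheit(celsius) {
#       return celsius * 9 / 5 + 32;
#   }

print(celsius_to_fahrenheit(100))  # 212.0
```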

Predicting code outputs

Given a code snippet, the model can predict the output when the code is run. This is useful for understanding unfamiliar code’s functionality or debugging purposes.
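For example, handed the snippet below with the question “what does this print?”, the model must trace the loop rather than execute it; the prediction is noted in the comments:

```python
# "What does this print?" Predicting the answer means tracing the loop:
# only the odd values (3, 1, 1, 5) are added, so the total is 10.
values = [3, 1, 4, 1, 5]
total = 0
for v in values:
    if v % 2 == 1:  # keep odd values only
        total += v
print(total)  # predicted (and actual) output: 10
```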

Generating test cases

The model can also generate test cases for a particular function or feature. This can be handy in software testing and quality assurance processes.
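As an illustration, given a small utility function, the model can draft unit tests covering typical edge cases. The function and test cases below are invented for this example:

```python
# An invented utility function and the style of unit tests the model can
# draft for it, covering the in-range, below-range and above-range cases.
import unittest

def clamp(value, low, high):
    """Restrict value to the inclusive range [low, high]."""
    return max(low, min(value, high))

class TestClamp(unittest.TestCase):
    def test_inside_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_below_range(self):
        self.assertEqual(clamp(-3, 0, 10), 0)

    def test_above_range(self):
        self.assertEqual(clamp(42, 0, 10), 10)

if __name__ == "__main__":
    unittest.main()
```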

Example task request to code interpreter

Although the OpenAI code interpreter is highly capable, its performance is based on the data it was trained on. It’s not infallible and, in some situations, might produce inaccurate or unexpected outputs. However, as machine learning models evolve and improve, we can expect the OpenAI code interpreter to become even more versatile and reliable in handling different code-related tasks.
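To make the idea concrete, suppose a user uploads a spreadsheet and asks: “What was the average monthly revenue, and which months were outliers?” Behind the scenes, the interpreter writes and runs Python along these lines (the file name and column names here are hypothetical):

```python
# Hypothetical code of the kind the interpreter writes and runs in its
# sandbox for the request above; the file and column names are made up.
import csv
from statistics import mean, stdev

with open("monthly_revenue.csv", newline="") as f:
    rows = list(csv.DictReader(f))

revenues = [float(r["revenue"]) for r in rows]
avg, sd = mean(revenues), stdev(revenues)

# Flag months more than two standard deviations from the mean.
outliers = [r["month"] for r in rows if abs(float(r["revenue"]) - avg) > 2 * sd]
print(f"Average monthly revenue: {avg:.2f}")
print(f"Outlier months: {outliers or 'none'}")
```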

How OpenAI code interpreter works

OpenAI code interpreter operates using a technology that harnesses the power of artificial intelligence (AI) to understand and generate programming code. 

It’s built upon machine learning principles, with an iterative training methodology that refines its capabilities over time. Let’s delve into the workings of this AI model and its no-code interpretation prowess.

The OpenAI code interpreter primarily uses an RLHF (reinforcement learning from human feedback) approach: the underlying model is first pre-trained on a large corpus of publicly available text spanning a diverse range of programming languages and code contexts. This unsupervised learning phase allows the model to develop a general understanding of language and of code syntax, semantics and conventions.

Once the pre-training is complete, the model undergoes a second phase known as fine-tuning. This process uses a smaller, carefully curated data set and incorporates human feedback to align the model’s responses with human-like interpretations. 

During this stage, model outputs are compared, and rewards are assigned based on how accurately they align with the desired responses. The model then uses these rewards to improve its future outputs, learning from each interaction to make better predictions over time.
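A deliberately tiny sketch of that comparison step: two candidate outputs are scored and the higher-scoring one is kept as a preference signal. Real RLHF uses a learned reward model and a policy-gradient update; the toy reward function below is purely a stand-in:

```python
# Toy illustration of the comparison step: score two candidate outputs and
# keep the higher-scoring one as the preferred answer. Real RLHF uses a
# learned reward model and a policy-gradient update (e.g., PPO); the token
# overlap "reward" below is purely illustrative.
def reward(candidate: str, reference: str) -> float:
    cand, ref = set(candidate.split()), set(reference.split())
    return len(cand & ref) / max(len(ref), 1)

reference = "def add(a, b): return a + b"
candidates = ["def add(a, b): return a + b", "lambda a, b: a - b"]

best_score, best = max((reward(c, reference), c) for c in candidates)
print(best_score, best)  # the winner becomes the preference label for training
```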

It’s important to clarify that while the code interpreter can generate and comprehend code, it doesn’t “understand” code in the human sense. The model doesn’t have consciousness or a conceptual understanding of what it’s doing. Instead, it identifies patterns and structures within the data it was trained on and uses that knowledge to generate or interpret code.

For instance, if the model is given a piece of code to interpret, it doesn’t comprehend the code’s purpose or function as a human would. Instead, it analyzes the code’s patterns, syntax and structure based on the massive amount of programming data it has processed during training. It then generates an output that mirrors what it has learned, providing a human-like interpretation of the code.

The no-code promise of the OpenAI code interpreter lies in its ability to take natural language inputs and generate appropriate programming code. This feature makes the tool accessible to users without coding expertise, allowing them to leverage the power of programming by merely expressing their needs in plain English.

The basics of OpenAI code interpreter

OpenAI, a leading entity in the field of artificial intelligence, has developed OpenAI code interpreter, a specialized model trained on extensive data sets to process and generate programming code. 

The OpenAI code interpreter is a tool that attempts to bridge the gap between human language and computer code, offering myriad applications and benefits. It represents a significant step forward in AI capabilities, grounded in advanced machine learning techniques that combine the strengths of both unsupervised and supervised learning. The result is a model that can understand complex programming concepts, interpret various coding languages, and generate human-like responses that align with coding practices.

At its core, the code interpreter uses a technique known as reinforcement learning from human feedback (RLHF). RLHF is an iterative process that refines the model’s performance over time by integrating human feedback into the learning cycle. During the training phase, the model processes vast amounts of data, including multiple programming languages and coding concepts. When encountering a new situation, it uses this background knowledge to make the best possible decision.

The code interpreter is not limited to any specific coding language or style, which is a testament to the diversity and depth of the training data it has processed. From popular languages like Python, JavaScript and C to more specialized ones like Rust or Go, the model can handle a wide array of languages and their associated syntax, semantics and best practices.

Furthermore, the tool’s ability to interpret code extends beyond simply understanding what a piece of code does. It can identify bugs, suggest code improvements, provide alternatives and even help design software structures. This ability to provide insightful, contextually relevant responses based on input is a defining feature of the OpenAI code interpreter.


Argentina’s Data Privacy Agency Investigating Controversial Crypto Project Worldcoin (WLD)


Argentina’s data privacy government agency is following in France and the United Kingdom’s footsteps by investigating the eye-scanning crypto project Worldcoin (WLD). The Agency of Access to Public Information (AAIP) says in a new press release it’s looking into how the Worldcoin Foundation collects, stores and uses personal data in Argentina. The agency’s new investigation […]



Dear crypto writers: No one wants to read your ChatGPT-generated trash

Opinion: If you used ChatGPT to write an article, do us all a favor and delete it. None of us want to read it.

There was a time when anything generated by ChatGPT was fascinating. 

To fathom that a super-intelligent robot was writing poetry, drafting legal documents or even acting as a personal life coach was mind-boggling.

As if each prompt was a window into a kind of shared human consciousness.

That was the novelty then, at least. But it’s been nine months since ChatGPT launched and almost five months since the launch of GPT-4.

ChatGPT has become the go-to strategy for churning out useless content

In May, a crypto firm shared a 10-minute video presentation it had whipped up about artificial intelligence’s role in increasing business productivity (in the hopes of coverage).

Ironically, the presenter was an AI-generated avatar reading from a script most likely lifted straight out of ChatGPT. It took all of 20 seconds to close it.

Related: Worldcoin: Should you let Sam Altman scan your eyeballs for WLD?

Oh, was it written by a bot? No worries, send it to my AI transcriber.

Another great example came in recently from a colleague who received a pitch from a third party — offering an article on how to use MetaMask more efficiently.

The 1,159-word explainer discussed six cold wallets that could be connected to MetaMask, a potentially relevant subject given the recent spate of hot wallet hacks, such as the $23 million hack of the Bitrue crypto exchange on April 14.

At first glance, the article was decently written. It had an introduction, a sub-introduction and six subheadings for each individual cold wallet, followed by a conclusion. There wasn’t a single grammatical error.

Unfortunately, it reads like the nutritional label of a cereal box

Packed with facts but devoid of any personality, flair or human element. No impactful conclusions.

“Each of these wallets has its unique features and advantages, so it’s important to do your research and choose the one that best suits your needs,” was its fireworks ending.

Surprise, surprise. The article came back on an AI detection tool as 74.2% AI/GPT generated. Even the cover letter that pitched the idea wasn’t authentic — scoring a whopping 93.57% on ZeroGPT.

Screenshot of ZeroGPT’s analysis of the pitch received via email. Source: ZeroGPT

Crypto news aggregators are also starting to show more of these types of articles — with headlines along the lines of: “We asked ChatGPT… and this is what it said.”

Get ready for a hell of a lot more

Unfortunately, it’s only the latest indication of more AI-generated news content to come.

In May, misinformation watchdog NewsGuard identified 49 websites spanning seven languages that appear to be entirely or mostly generated by artificial intelligence language models — mainly in the form of independent news outlets.

Related: Think AI tools aren’t harvesting your data? Guess again

By Aug. 9, that number had rocketed to 408, with these fake news websites spanning 14 languages. There’s no telling how high it will climb by year’s end.

So, here’s my non-AI-generated conclusion.

Generative AI tools like ChatGPT are game-changing technology that can be wielded by nearly anyone with an internet connection.

But if you’re a crypto writer, do me a favor: Submit your typo-laden rants, share your clickbait articles, and forward your poorly disguised advertorials.

Just make sure it’s your real work — not this ChatGPT-generated trash.

Felix Ng began writing about the blockchain industry through the lens of a gambling industry journalist and editor in 2015. He is most interested in innovative blockchain technology aimed at solving real-world challenges.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.


Aptos token rises 11.6% after Microsoft deal to marry AI with blockchain

APT pumped 17.6% within the first 50 minutes before cooling off to $7.51 — still 11.6% above pre-announcement levels.

Aptos (APT), the cryptocurrency powering layer-1 blockchain Aptos Network, is up approximately 11.6% since announcing it will leverage Microsoft’s suite of artificial intelligence tools to advance Web3 adoption among banks and financial enterprises.

This will be achieved by enabling the Aptos Network to tap into Microsoft’s Azure OpenAI service to explore innovations in asset tokenization, on-chain payments and central bank digital currencies, Aptos explained in an Aug. 9 statement.

Mo Shaikh, the CEO of Aptos Labs who previously worked at Meta and BlackRock, signaled high hopes for AI-powered blockchain solutions:

"Artificial Intelligence and blockchain technologies are quickly converging for one important reason: they are both generational breakthroughs that profoundly impact the evolution of the internet and shape society.”

One of the new tools, Aptos Assistant — a ChatGPT-powered chatbot — will aim to help users navigate from Web2 to Web3 by offering virtual guidance with the onboarding process.

Microsoft will also boost the security of the Aptos Network by allowing Aptos Labs to run validator nodes on Azure, according to the cryptocurrency firm.

The news — which was unveiled on Aug. 9 at 12:30 pm UTC — immediately pushed APT up 17.6% to $7.92 within the first 50 minutes before cooling to $7.51 at the time of writing, according to CoinGecko.

APT pumped nearly 18% after the announcement. Source: CoinGecko.

Related: Superblock raises $8M for “Over Protocol,” a new layer 1 blockchain focusing on lightweight full nodes

Despite the price pump, the Aptos token is still down 62.9% from its all-time high price of $19.92 on Jan. 26, 2023, according to CoinGecko.

The Aptos Network launched on Oct. 17, 2022, after four years of development. Aptos was founded by former Meta employees Mo Shaikh and Avery Ching, who also had a role in Meta’s failed Diem project.

It closed $200 million in funding in March 2022 and another $150 million in July 2022 from the likes of Andreessen Horowitz, Coinbase Ventures and FTX Ventures.

Magazine: China’s blockchain satellite in space, Hong Kong’s McNuggets Metaverse: Asia Express


OpenAI launches web crawler ‘GPTBot’ amid plans for next model: GPT-5

Website owners can block the web crawler by adding a “disallow” command to a standard file on the server.

Artificial intelligence firm OpenAI has launched “GPTBot,” its new web crawling tool, which it says could potentially be used to improve future ChatGPT models.

“Web pages crawled with the GPTBot user agent may potentially be used to improve future models,” OpenAI said in a new blog post, adding it could improve accuracy and expand the capabilities of future iterations.

A web crawler, sometimes called a web spider, is a type of bot that indexes the content of websites across the internet. Search engines such as Google and Bing use them so that websites show up in search results.

OpenAI said the web crawler will collect publicly available data from the world wide web but will filter out sources that sit behind paywalls, are known to gather personally identifiable information or contain text that violates its policies.

It should be noted that website owners can deny the web crawler by adding a “disallow” command to a standard file on the server.

Instructions for website owners to “disallow” GPTBot. Source: OpenAI
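Per OpenAI’s published instructions, blocking the crawler comes down to two lines in a site’s robots.txt file. The standard-library sketch below verifies the effect of those lines from Python’s side:

```python
# The two robots.txt lines OpenAI documented for blocking GPTBot, verified
# here with Python's standard-library robots.txt parser.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: GPTBot",
    "Disallow: /",
])
print(rp.can_fetch("GPTBot", "https://example.com/any-page"))  # False
```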

The new crawler comes three weeks after the firm filed a trademark application for “GPT-5,” the anticipated successor to the current GPT-4 model.

The application was filed with the United States Patent and Trademark Office on July 18 and covers use of the term “GPT-5” for software including AI-based generation of human speech and text, conversion of audio into text, and voice and speech recognition.

However, observers may not want to hold their breath for the next iteration of ChatGPT just yet. In June, OpenAI co-founder and CEO Sam Altman said the firm was “nowhere close” to beginning to train GPT-5, explaining that several safety audits needed to be conducted before starting.

Related: 11 ChatGPT prompts for maximum productivity

Meanwhile, concerns have been raised over OpenAI’s data collection practices of late, particularly revolving around copyright and consent.

Japan’s privacy watchdog issued a warning to OpenAI about collecting sensitive data without permission in June, while Italy temporarily banned the use of ChatGPT after alleging it breached various European Union privacy laws in April.

In late June, a class action was filed against OpenAI by 16 plaintiffs, alleging that the AI firm accessed private information from ChatGPT user interactions.

If these allegations prove accurate, OpenAI — and Microsoft, which was named as a defendant — would be in breach of the Computer Fraud and Abuse Act, a law with established precedent in web-scraping cases.

Magazine: AI Eye: AI travel booking hilariously bad, 3 weird uses for ChatGPT, crypto plugins
