
100K ChatGPT logins have been leaked on dark web, cybersecurity firm warns

The compromised accounts could give bad actors confidential information about companies and individuals.

Over the past year, more than 100,000 login credentials to the popular artificial intelligence chatbot ChatGPT have been leaked and traded on the dark web, according to a Singaporean cybersecurity firm.

A June 20 blog post by Group-IB revealed that just over 101,000 compromised logins for OpenAI’s flagship bot were traded on dark web marketplaces between June 2022 and May 2023.

The login information was found in the logs of “info-stealing malware.” May 2023 saw a peak of nearly 27,000 ChatGPT-related credentials made available on online black markets.

The Asia-Pacific region accounted for the largest share of compromised logins offered for sale over the past year, making up around 40% of the roughly 101,000 total.

India-based credentials took the top spot overall with more than 12,500, while the United States had the sixth-most logins leaked online at nearly 3,000. France ranked seventh overall, just behind the U.S., and first in Europe.

The number of exploited ChatGPT accounts over the past year by region. Source: Group-IB

ChatGPT accounts can be created directly through OpenAI. Additionally, users can choose to use their Google, Microsoft or Apple accounts to log in and use the service.

Cointelegraph contacted OpenAI for comment but did not immediately receive a response.

Related: How AI is changing the way humans interact with machines

Group-IB said it noticed an uptick in the number of employees using ChatGPT for work. It warned that confidential information about companies could be exposed by unauthorized users, as user queries and chat history are stored by default.

Such information could then be exploited by others to undertake attacks against companies or individual employees.

The firm advised users to regularly update passwords and use two-factor authentication to better secure ChatGPT accounts.
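The two-factor authentication the firm recommends is usually the TOTP scheme from RFC 6238, in which an authenticator app derives a short-lived code from a shared secret and the current time. A minimal sketch using only Python's standard library (the six-digit default and 30-second step match common authenticator apps; this is an illustration, not a hardened implementation):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, period=30, digits=6):
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA-1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    t = time.time() if for_time is None else for_time
    counter = int(t) // period                 # 30-second time step
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59
rfc_secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(rfc_secret, for_time=59, digits=8))  # 94287082
```

Because the code changes every 30 seconds and is bound to a secret only the user and the service hold, a leaked password alone is not enough to take over the account.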

Interestingly, the firm noted that the press release was written with the assistance of ChatGPT. 

AI Eye: Is AI a nuke-level threat? Why AI fields all advance at once, dumb pic puns

Trader Warns of Potential XRP Correction, Says Dogecoin Trading at Most Likely Area To Expect Rejection

Deutsche Bank Warns US Recession Imminent, Says Avoiding a Hard Landing Is Next to Impossible: Report

The US appears headed for a hard landing and a recession, according to the chief economist at one of Europe’s biggest banks. Deutsche Bank’s David Folkerts-Landau says a recession is the highly likely cost of the Federal Reserve’s rapid sequence of interest rate hikes, even if it achieves the desired outcome of lower inflation, reports […]

The post Deutsche Bank Warns US Recession Imminent, Says Avoiding a Hard Landing Is Next to Impossible: Report appeared first on The Daily Hodl.

German newspaper giant denies reports of replacing editors with AI

Artificial intelligence will soon make its appearance in one of Europe’s best-selling tabloid newspapers, but only to support journalistic work, not replace it.

Disclaimer: This article has been updated to reflect a comment from a Bild Group spokesperson regarding the reason for the job cuts and where it sees AI's role in the company.

German tabloid company Bild has denied reports it is laying off parts of its editorial team and replacing staffers with artificial intelligence and “automated processes.” 

The Guardian and other media outlets reported on June 21 that Bild’s parent publishing firm, Axel Springer SE, was planning to replace a range of editorial jobs with AI, citing an internal email.

Screenshot of one of the headlines. Source: Sydney Morning Herald

However, Bild Group’s director of communications, Christian Senft, told Cointelegraph that the “reports are false” and that “with our current measures, we have no intention of replacing journalism with AI."

Instead, Senft said the announcement was regarding a restructuring program for regional newspaper editions, which involves reducing from 18 regional editions to 12 by the end of the year, and the closing of 10 out of 15 regional offices, with many functions moving centrally to Berlin.

“Therefore, these tasks such as secretariats and photo production are no longer necessary in the regions,” he said, reiterating that the associated job reductions have nothing to do with AI. 

Senft confirmed that the moves will affect a “low three-digit number” of employees. He also clarified that the announcement states the company will “increasingly use AI to support journalistic work.”

“To this end, we are approaching the topic with an open mind and currently have many initiatives with which we are exploring areas of application for AI for our journalistic brands, both in the production processes of the editorial offices and in relation to the reader experience,” he added.

“The use of AI creates more time and space for journalistic creativity for editors and reporters. Wherever AI supports, a journalist always has to check and double-check the result at Axel Springer.”

The daily tabloid was founded in June 1952. In the 1980s, Bild reportedly sold more than five million copies per day. By 2010, Bild’s circulation had fallen to 3.55 million, according to Mondo Times. As of 2022, the print newspaper’s circulation had fallen to just over 1 million, according to Media Impact.

Related: AI is coming for your job: What industries will be affected?

The rapid development of AI has nevertheless sparked concerns over job losses in the future.

In May, IBM CEO Arvind Krishna told Bloomberg that 7,800 jobs at the firm could be replaced by AI and automation over the next five years, representing approximately 30% of its workforce.

In a June 14 report, management consulting firm McKinsey & Co. predicted that generative AI may be able to fully automate as much as 50% of all work activity conducted in workplaces today, including tasks related to decision-making, management, and interfacing with stakeholders.

Update (June 21, 6:39 am UTC): This article has been updated to include information given by a Bild Group spokesperson.

Billionaire Stanley Druckenmiller Names One Tech Sector That Could Print Gains Despite Recession Threats

Billionaire investor Stanley Druckenmiller says he is upbeat on one nascent technology sector even amid signs of a looming recession. In a recent interview with Bloomberg, Druckenmiller says that artificial intelligence (AI) technology is a “huge thing” and could be as revolutionary as the internet. According to the billionaire investor, the AI tech sector could […]

The post Billionaire Stanley Druckenmiller Names One Tech Sector That Could Print Gains Despite Recession Threats appeared first on The Daily Hodl.

5 AI tools for learning and research

Supercharge your learning and research capabilities with AI tools, enabling you to gain a competitive edge and reach new levels of understanding.

AI tools are revolutionizing learning and research in today’s digital age by providing sophisticated capabilities and effective solutions. These tools use artificial intelligence to speed up various tasks, increase output and surface useful insights.

Consensus, QuillBot, Gradescope, Elicit and Semantic Scholar are five well-known AI tools that are frequently used in the learning and research fields.

Consensus

The goal of the Consensus AI search engine is to democratize expert knowledge by making study findings on a range of subjects easily accessible. This cutting-edge engine, which runs on GPT-4, uses machine learning and natural language processing (NLP) to analyze and evaluate web content.

When you pose the “right questions,” an additional AI model examines publications and gathers pertinent data to respond to your inquiry. The phrase “right questions” refers to inquiries that lead to findings that are well-supported, as shown by a confidence level based on the quantity and caliber of sources used to support the hypothesis.

QuillBot

QuillBot is an artificial intelligence (AI) writing assistant that helps people create high-quality content. It uses NLP algorithms to improve grammar and style, rewrite and paraphrase sentences, and increase the coherence of the work as a whole.

QuillBot’s capacity to paraphrase and restate text is one of its main strengths. This might be especially useful if you wish to keep your research work original and free of plagiarism while using data from previous sources.

QuillBot can also summarize a research paper and offer alternate wording and phrase constructions to assist you in putting your thoughts into your own words. QuillBot can help you add variety to your writing by recommending different sentence constructions. This feature can improve your research paper’s readability and flow, which will engage readers more.

Additionally, ChatGPT and QuillBot can be used together. To utilize both ChatGPT and QuillBot simultaneously, start with the output from ChatGPT and then transfer it to QuillBot for further refinement.

Gradescope

Widely used in educational institutions, Gradescope is an AI-powered grading and feedback tool. The time and effort needed for instructors to grade assignments, exams and coding projects are greatly reduced by automating the process. Its machine-learning algorithms can decipher code, recognize handwriting and provide students with in-depth feedback.

Related: How to use ChatGPT to learn a language

Elicit

Elicit is an AI-driven research platform that makes it simpler to gather and analyze data. It uses NLP approaches to glean insightful information from unstructured data, including polls, interviews and social media posts. Researchers can quickly analyze huge amounts of text with Elicit to find trends, patterns and sentiment.

Using the user-friendly Elicit interface, researchers can easily design personalized surveys and distribute them to specific participants. To ensure correct and pertinent data collection, the tool offers sophisticated features such as branching, answer validation and skip logic.

In order to help academics properly analyze and interpret data, Elicit also offers real-time analytics and visualizations. Elicit streamlines the research process, saves time and improves data collection for researchers in a variety of subjects thanks to its user-friendly design and powerful capabilities.

Semantic Scholar

Semantic Scholar is an AI-powered academic search engine that prioritizes scientific content. It analyzes research papers, extracts crucial information, and generates recommendations that are pertinent to the context using machine learning and NLP techniques.

Researchers can use Semantic Scholar to research related works, spot new research trends and keep up with the most recent advancements in their fields.

Related: 5 free artificial intelligence courses and certifications

Striking a balance: Harnessing AI in research responsibly

It’s crucial to keep moral standards in mind and prevent plagiarism when employing AI research tools. The use of another person’s words, ideas or works without giving due credit or permission is known as plagiarism. While using AI research tools, one may follow the guidelines below to prevent plagiarism and uphold ethical standards:

  • Understand the purpose of the AI research tool.
  • Attribute sources properly.
  • Paraphrase and synthesize information.
  • Cross-verify information from multiple sources.
  • Check for copyright restrictions.
  • Review and edit AI-generated content.
  • Seek ethical AI tools.

Though AI research tools might be beneficial for improving your research and writing processes, it is important to uphold academic integrity and observe ethical standards. Always make an effort to give fair credit to others and make sure that your work accurately reflects your own thoughts and understanding.

The Synergistic Potential of Blockchain and Artificial Intelligence

In a world where the distinction between hype and innovation is becoming increasingly blurred, blockchain and artificial intelligence (AI) stand out as the most significant technological advancements. Clearly, these technologies provide a great deal of room for the disruption of existing systems, and the number of potential applications is increasing every day. Some believe that […]

The post The Synergistic Potential of Blockchain and Artificial Intelligence appeared first on The Daily Hodl.

Think AI tools aren’t harvesting your data? Guess again

When you use a “free” product like ChatGPT, your personal data is the product — and governments across the globe are using that to their advantage.

The meteoric ascent of generative artificial intelligence has created a bona fide technology sensation thanks to user-focused products such as OpenAI’s ChatGPT, Dall-E and Lensa. But the boom in user-friendly AI has arrived in conjunction with users seemingly ignoring or being left in the dark about the privacy risks imposed by these projects.

In the midst of all this hype, however, international governments and major tech figures are starting to sound the alarm. Citing privacy and security concerns, Italy just placed a temporary ban on ChatGPT, potentially inspiring a similar block in Germany. In the private sector, hundreds of AI researchers and tech leaders, including Elon Musk and Steve Wozniak, signed an open letter urging a six-month moratorium on AI development beyond the scope of GPT-4.

The relatively swift action to try to rein in irresponsible AI development is commendable, but the wider landscape of threats that AI poses to data privacy and security goes beyond one model or developer. Although no one wants to rain on the parade of AI’s paradigm-shifting capabilities, tackling its shortcomings head-on now is necessary to avoid the consequences becoming catastrophic.

AI’s data privacy storm

While it would be easy to say that OpenAI and other Big Tech-fueled AI projects are solely responsible for AI’s data privacy problem, the subject had been broached long before it entered the mainstream. Scandals surrounding data privacy in AI happened well before this crackdown on ChatGPT; they have just mostly occurred out of the public eye.

Just last year, Clearview AI, an AI-based facial recognition firm reportedly utilized by thousands of governments and law enforcement agencies with limited public knowledge, was banned from selling facial recognition technology to private businesses in the United States. Clearview also landed a fine of $9.4 million in the United Kingdom for its illegal facial recognition database. Who’s to say that consumer-focused visual AI projects such as Midjourney or others can’t be used for similar purposes?

The problem is they already have been. A slew of recent deepfake scandals involving pornography and fake news created through consumer-level AI products has only heightened the urgency to protect users from nefarious AI usage. Deepfakes take the once-hypothetical concept of digital mimicry and make it a very real threat to everyday people and influential public figures.

Related: Elizabeth Warren wants the police at your door in 2024

Generative AI models fundamentally rely upon new and existing data to build and strengthen their capabilities and usability. It’s part of the reason why ChatGPT is so impressive. That being said, a model that relies on new data inputs needs somewhere to get that data from, and part of that will inevitably include the personal data of the people using it. And that amount of data can easily be misused if centralized entities, governments or hackers get ahold of it.

So, with a limited scope of comprehensive regulation and conflicting opinions around AI development, what can companies and users working with these products do now?

What companies and users can do

The fact that governments and other developers are raising flags around AI now actually indicates progress from the glacial pace of regulation for Web2 applications and crypto. But raising flags isn’t the same thing as oversight, so maintaining a sense of urgency without being alarmist is essential to create effective regulations before it’s too late.

Italy’s ChatGPT ban is not the first strike that governments have taken against AI. The EU and Brazil are both passing acts to sanction certain types of AI usage and development. Likewise, generative AI’s potential to conduct data breaches has sparked early legislative action from the Canadian government.

The issue of AI data breaches is quite severe, to the point where OpenAI even had to step in. If you opened ChatGPT a couple of weeks ago, you might have noticed that the chat history feature was turned off. OpenAI temporarily shut down the feature because of a severe privacy issue in which strangers’ prompts were exposed and payment information was revealed.

Related: Don’t be surprised if AI tries to sabotage your crypto

While OpenAI effectively extinguished this fire, it can be hard to trust Web2 giants that are slashing their AI ethics teams to preemptively do the right thing.

At an industrywide level, an AI development strategy that focuses more on federated machine learning would also boost data privacy. Federated learning is a collaborative AI technique that trains AI models without anyone having access to the data, utilizing multiple independent sources to train the algorithm with their own data sets instead.
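The federated setup described above can be sketched with a toy example: several parties each run a few gradient steps on their own private data, and only the resulting model parameters are averaged into the shared model. A minimal illustration with hypothetical data and learning rates, not a production federated learning system:

```python
import random

random.seed(0)

# Three independent parties; each keeps its own toy data set private.
# Hypothetical task: recover y = 2x + 1 from noisy local samples.
def make_local_data(n=50):
    data = []
    for _ in range(n):
        x = random.uniform(-1, 1)
        data.append((x, 2 * x + 1 + random.gauss(0, 0.1)))
    return data

parties = [make_local_data() for _ in range(3)]
w, b = 0.0, 0.0  # shared global model parameters

for _ in range(100):               # federated rounds
    updates = []
    for data in parties:
        lw, lb = w, b              # each party starts from the global model
        for _ in range(5):         # a few local gradient steps on private data
            gw = sum(2 * ((lw * x + lb) - y) * x for x, y in data) / len(data)
            gb = sum(2 * ((lw * x + lb) - y) for x, y in data) / len(data)
            lw, lb = lw - 0.1 * gw, lb - 0.1 * gb
        updates.append((lw, lb))
    # Only the updated parameters are shared; raw data never leaves a party.
    w = sum(u[0] for u in updates) / len(updates)
    b = sum(u[1] for u in updates) / len(updates)

print(round(w, 1), round(b, 1))  # close to the true values 2 and 1
```

The privacy gain comes from the communication pattern: the coordinator only ever sees parameter updates, never the underlying records each party trained on.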

On the user front, becoming an AI Luddite and forgoing these programs altogether is unnecessary, and will likely soon be impossible. But there are ways to be smarter about which generative AI tools you grant access to in daily life. For companies and small businesses incorporating AI products into their operations, being vigilant about what data you feed the algorithm is even more vital.
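One concrete form of that vigilance is scrubbing obvious personal data from text before it is sent to a third-party model. A minimal sketch with hypothetical regex patterns (real PII detection needs far broader coverage than this):

```python
import re

# Hypothetical patterns -- a minimal sketch, not a complete PII detector.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text):
    """Replace common PII patterns with placeholder tags before the text
    is handed to any third-party generative AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the complaint from jane.doe@example.com, phone 555-867-5309."
print(scrub(prompt))
# Summarize the complaint from [EMAIL], phone [PHONE].
```

Running prompts through a filter like this costs almost nothing and ensures that whatever the provider logs or trains on contains placeholders rather than identifiers.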

The evergreen saying that when you use a free product, your personal data is the product still applies to AI. Keeping that in mind may cause you to reconsider which AI projects you spend your time on and what you actually use them for. If you’ve participated in every social media trend that involves feeding photos of yourself to a shady AI-powered website, consider skipping it.

ChatGPT reached 100 million users just two months after its launch, a staggering figure that clearly indicates our digital future will utilize AI. But despite these numbers, AI isn’t ubiquitous quite yet. Regulators and companies should use that to their advantage to proactively create frameworks for responsible and secure AI development instead of chasing after projects once they get too big to control. As it stands now, generative AI development is not balanced between protection and progress, but there is still time to find the right path to ensure user information and privacy remain at the forefront.

Ryan Paterson is the president of Unplugged. Prior to taking the reins at Unplugged, he served as the founder, president and CEO of IST Research from 2008 to 2020. He exited IST Research with a sale of the company in September 2020. He served two tours at the Defense Advanced Research Projects Agency and 12 years in the United States Marine Corps.
Erik Prince is an entrepreneur, philanthropist and Navy SEAL veteran with business interests in Europe, Africa, the Middle East and North America. He served as the founder and chairman of Frontier Resource Group and as the founder of Blackwater USA — a provider of global security, training and logistics solutions to the U.S. government and other entities — before selling the company in 2010.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.

UN report highlights ‘serious and urgent’ concerns about AI deepfakes

The UN wants to address AI-generated fake news and information as the organization looks to bring in voluntary guidelines for the technology.

The United Nations has called artificial intelligence-generated media a “serious and urgent” threat to information integrity, particularly on social media.

In a June 12 report, the UN claimed the risk of disinformation online has “intensified” due to “rapid advancements in technology, such as generative artificial intelligence” and singled out deepfakes in particular.

The UN said false information and hate speech generated by AI is “convincingly presented to users as fact.” Last month, the S&P 500 briefly dipped due to an AI-generated image and faked news report of an explosion near the Pentagon.

It called for AI stakeholders to address the spread of false information and asked them to take “urgent and immediate” action to ensure the responsible use of AI, and added:

“The era of Silicon Valley’s ‘move fast and break things’ philosophy must be brought to a close.”

The same day, UN Secretary-General António Guterres held a press conference and said “alarm bells” over generative AI are “deafening” and “are loudest from the developers who designed it.”

Guterres added the report “will inform a UN Code of Conduct for Information Integrity on Digital Platforms.” The code is being developed ahead of the Summit of the Future — a conference to be held in late September 2024 aiming to host intergovernmental discussions on a raft of issues.

“The Code of Conduct will be a set of principles that we hope governments, digital platforms and other stakeholders will implement voluntarily,” he said.

‘Most substantial policy challenge ever’

Meanwhile, on June 13 the former Prime Minister of the United Kingdom, Tony Blair, and Conservative Party politician William Hague released a report on AI.

The pair suggested the governments of the U.K., United States and “other allies” should “push for a new UN framework on urgent safeguards.”

Related: UK to get ‘early or priority access’ to AI models from Google and OpenAI

The arrival of AI “could present the most substantial policy challenge ever faced” due to its “unpredictable development” and “ever-increasing power,” the pair said.

Blair and Hague added that the government’s “existing approaches and channels are poorly configured” for such a technology.

Magazine: ‘Moral responsibility’ — Can blockchain really improve trust in AI?

Pro-Bitcoin DeSantis tagged over AI-faked photos in Trump smear campaign

The images depicting Donald Trump cuddling up to and kissing Anthony Fauci were labeled as being AI-generated on Twitter's disinformation alert feature.

Pro-Bitcoin (BTC) presidential bidder Ron DeSantis has been tagged for apparently using artificial intelligence-generated images in an ad campaign smearing rival and former president Donald Trump.

It comes amid a rise in AI-generated deepfakes being used in political ads and movements in recent months.

On June 5, DeSantis’ campaign tweeted a video purporting to show Trump’s close support of Anthony Fauci, the chief medical advisor to Trump when he was president of the United States.

Fauci is a contentious figure in GOP circles for, among other reasons, his handling of the federal response to the COVID-19 pandemic, which many deemed heavy-handed.

The video features a collage of real images depicting Trump and Fauci mixed in with what appears to be AI-generated images of the pair hugging with some depicting Trump appearing to kiss Fauci.

Twitter’s Community Notes feature — the platform's community-driven misinformation-dispelling project — added a disclaimer to the tweet noting it contained "AI-generated images."

AFP Fact Check, a department within the news agency Agence France-Presse, said the images had "the hallmarks of AI-generated imagery."

A screenshot from the video; the top-left, bottom-middle and bottom-right images are AI-generated. Source: Twitter

DeSantis and Trump are facing off for the Republican presidential nomination. DeSantis kicked off his bid last month in a Twitter Space and promised to “protect” Bitcoin — current polling has him trailing Trump.

AI in the political sphere

Others in politics have used AI-generated media to attack rivals; even Trump’s campaign has used AI to smear DeSantis.

Shortly after DeSantis announced his presidential bid, Trump posted a video mocking DeSantis’ Twitter-based announcement, using deepfaked audio to create a fake Twitter Space featuring the likeness of DeSantis, Elon Musk, George Soros, Adolf Hitler, Satan, and Trump.

A screenshot of the video posted by Trump depicting a Twitter Space. Source: Instagram

In April, the Republican party released an ad predicting what a second term for President Joe Biden would look like, packed with AI-generated images depicting a dystopian future.

Related: Forget Cambridge Analytica — Here’s how AI could threaten elections

New Zealand politics has also recently featured AI-made media, with the country’s opposition National Party using generated images to attack the ruling Labour Party in multiple social posts in May.

The National Party used AI to generate Polynesian hospital workers in a social media campaign. Source: Instagram

One image depicts Polynesian hospital staff, another shows multiple masked men robbing a jewelry store and a third image depicts a woman in a house at night — all were generated using AI tools.

AI Eye: 25K traders bet on ChatGPT’s stock picks, AI sucks at dice throws, and more

Millions in Cardano (ADA) Will Be Stolen by Artificial Intelligence by This Time Next Year: Charles Hoskinson

Cardano (ADA) creator Charles Hoskinson warns that crypto scams will proliferate with the emergence of generative artificial intelligence (AI). Hoskinson predicts in a new video that millions of dollars worth of Cardano will be lost one year from now as scams that employ generative AI take over. “This time next year because of generative AI, millions […]

The post Millions in Cardano (ADA) Will Be Stolen by Artificial Intelligence by This Time Next Year: Charles Hoskinson appeared first on The Daily Hodl.
