
OpenAI fears people will form bonds with the AI it developed to fool humans
The warning shows that developers are aware that anthropomorphization is a legitimate concern in the AI industry.
When a safety tester working with OpenAI’s GPT-4o sent the chatbot a message stating “this is our last day together,” it became clear to company researchers that some form of bonding had occurred between the AI and the human using it.
In a blog post detailing its safety efforts in developing GPT-4o, the flagship model for ChatGPT users, the company explained that these bonds could pose risks to humanity.
Per OpenAI:
Author: Tristan Greene