OpenAI Warns ChatGPT Voice Mode Users Might End Up Forming ‘Social Relationships’ With the AI

by Pelican Press

OpenAI warned on Thursday that the recently released Voice Mode feature for ChatGPT might result in users forming social relationships with the artificial intelligence (AI) model. The disclosure was part of the company's System Card for GPT-4o, a detailed analysis of the potential risks and possible safeguards of the AI model that the company tested and explored. Among the many risks listed was the potential for people to anthropomorphise the chatbot and develop an attachment to it. The risk was added after the company noticed signs of such behaviour during early testing.

ChatGPT Voice Mode Might Make Users Grow Attached to the AI

In a detailed technical document labelled System Card, OpenAI highlighted the societal impacts associated with GPT-4o and the new features powered by the AI model that it has released so far. The AI firm flagged the risk of anthropomorphisation, which essentially means attributing human characteristics or behaviours to non-human entities.

OpenAI raised the concern that since Voice Mode can modulate speech and express emotions much like a real human, it might result in users developing an attachment to it. The fears are not unfounded either. During its early testing, which included red-teaming (using a group of ethical hackers to simulate attacks on the product to test for vulnerabilities) and internal user testing, the company found instances of users forming a social relationship with the AI.

In one particular instance, the company found a user expressing a shared bond with the AI, telling it, "This is our last day together". OpenAI said there is a need to investigate whether such signs can develop into something more impactful over a longer period of usage.

A major concern, should these fears prove true, is that the AI model might affect human-to-human interactions as people grow more accustomed to socialising with the chatbot instead. OpenAI said that while this might benefit lonely individuals, it could negatively impact healthy relationships.

Another issue is that extended AI-human interactions can influence social norms. Highlighting this, OpenAI gave the example that users of ChatGPT can interrupt the AI at any time and "take the mic", behaviour that would be anti-normative in human-to-human interactions.

Further, there are wider implications of humans forging bonds with AI. One such issue is persuasiveness. While OpenAI found that the models' persuasion scores were not high enough to be concerning, this could change if users begin to trust the AI.

At the moment, the AI firm has no solution for this but plans to observe the development further. “We intend to further study the potential for emotional reliance, and ways in which deeper integration of our model’s and systems’ many features with the audio modality may drive behavior,” said OpenAI.


