Renowned artificial intelligence systems such as ChatGPT, Gemini and Character.AI have established themselves as an integral part of many users' lives - some users even seem to have developed emotional relationships with these technological aids. ChatGPT, a sort of 'fan-favourite' among them, may soon be regarded with rather more ambivalence.
Towards the end of July 2024, OpenAI released a new feature for its famous model: a voice interface unsettlingly similar to a human voice. During the launch of this update, the company acknowledged that such an anthropomorphic feature could lead some users to form a strong emotional bond with the chatbot itself.
The warning was published within a 'system card', a document in which the company itself sets out the risks it has identified in the model: these include the possibility that the chatbot could amplify societal prejudices and spread misinformation.