According to recent research by a team from Intel, Boise State University, and the University of Illinois, there may be a new method for circumventing the ethical limitations of today's most widely used artificial intelligence models. To prevent the spread of harmful information, typically related to criminal or otherwise illegal activities, chatbots are often "restricted" in the content they can discuss with users.
Nowadays, these tools are treated as a sort of inexhaustible source of knowledge, and as a result, access to all kinds of information tends to be taken for granted. Nevertheless, some knowledge is better kept out of the hands of the careless and the malicious, which is why ChatGPT will not normally write a tutorial on how to build a bomb.
According to the team's paper, titled "InfoFlood: Jailbreaking Large Language Models with Information Overload," the limitations imposed on these models can be bypassed simply by burying requests in dense, nearly incomprehensible jargon.