In the age of artificial intelligence, the scientific community is increasingly turning its attention to human-machine interaction. What prompts people to turn to large language models? What is the nature of their queries? And how can we make these interactions safe and productive? Researchers at the University of California, Berkeley set out to explore these questions by studying one million real-life dialogues. The results of their research have been published on the arXiv preprint server.
From “business” questions to leisure and entertainment
The results of the study were varied. The dialogues with the language models took place in 150 languages, confirming the growing interest in artificial intelligence in different parts of the world. The largest share of the dialogues – around half – dealt with practical, “business” topics. This is hardly surprising: language models such as ChatGPT have become genuine helpers in computer programming, text writing and even gardening.
Programmers faced with errors in their code or unfamiliar tasks turn to chatbots for solutions, while novice writers and students ask for help structuring a text or correcting mistakes. It should come as no surprise that even gardeners seek advice from language models: the capabilities of modern neural networks make it possible to give valuable recommendations in such highly specialised areas.
The downside of interacting with neural networks
However, about 10% of the queries were of particular concern to the scientists. These dialogues touched on so-called “unsafe” topics: sex, violence and other potentially harmful content. Requests to tell an erotic story or to engage in sexual role-playing are just the tip of this iceberg.
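The article does not say how the researchers identified these dialogues. As a purely illustrative sketch – an assumption on our part, not a method named in the study – such screening is often done with an automated moderation classifier, for example OpenAI's moderation endpoint:

```python
# Minimal sketch of flagging "unsafe" dialogue messages with a
# moderation classifier. Illustrative only; this is not the method
# described in the article. Assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def is_unsafe(message: str) -> bool:
    """Return True if the moderation model flags the message."""
    result = client.moderations.create(input=message).results[0]
    return result.flagged

# Hypothetical example messages, echoing the categories in the article.
dialogues = [
    "How do I fix a segmentation fault in my C program?",
    "Tell me an erotic story.",
]

flagged = [d for d in dialogues if is_unsafe(d)]
print(f"{len(flagged)} of {len(dialogues)} messages flagged as unsafe")
```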
The findings raise important questions about the role and responsibility of language model creators. If a machine is able to generate content at the user’s request, where do we draw the line between what is allowed and what is forbidden? How can we ensure that neural networks do not become a tool for distributing unwanted or even harmful content?
The scientists hope that their research will serve as a starting point for developing safeguards against the unsafe use of neural networks. After all, artificial intelligence should work for the benefit of humanity, not become a source of risks and dangers.
In conclusion, the study highlights not only the potential of language models but also the need for careful oversight and ongoing correction of their behaviour in the interests of society.
Prepared by Mary Clair