Abstract: Large language models such as GPT-3 play an increasingly significant role in human communication. By suggesting text, providing grammar support, and enabling machine translation, these models make communication more efficient. However, how incorporating them into communication may affect culture and society is not yet well understood. For instance, when language models that favor particular viewpoints are widely integrated into applications, they can influence people’s opinions.
This paper empirically demonstrates that integrating large language models into human communication poses systematic risks to society. Through a series of experiments, it shows that humans may be unable to detect text generated by GPT-3, that using large language models in communication can undermine interpersonal trust, and that interacting with opinion-holding language models can alter users’ attitudes.
The authors introduce the concept of “AI-mediated communication,” in which AI modifies, augments, or generates what people say. They argue that the use of large language models in communication represents a paradigm shift from traditional computer-mediated communication.
In conclusion, the study emphasizes the need to manage the risks of AI technologies such as large language models in a more systematic, democratic, and empirical manner.