Large Language Models May Radically Change Psychology

AI is starting to shake up psychology research. Maybe that's not a bad thing.

Key points

  • Large Language Models and other AIs have grown increasingly capable.
  • New studies suggest that LLMs may be indistinguishable from humans in several classic psychological experiments.
  • LLMs pose both challenges and opportunities for psychological science.

In the fall of 2022, ChatGPT, a Large Language Model (LLM), became available to the public. This new artificial intelligence (AI) and others like it are capable of feats that seemed far off just a couple of years ago. AIs can generate art, write computer code, and craft essays that are difficult to distinguish from those produced by human beings. And as anyone who has had a conversation with ChatGPT or another recent LLM can tell you, it feels eerily like talking to a real person. Blake Lemoine, a former software engineer at Google, went so far as to claim that LaMDA, Google’s LLM, was sentient.

The rise and widespread availability of these powerful new programs have already shaken up industries from animation to copywriting. Two new articles, published in Science (Grossmann et al., 2023) and Trends in Cognitive Sciences (Dillion et al., in press), suggest that academic psychology might also be radically altered by this new generation of AI.

It turns out that LLMs do such a good job of approximating human responses that early studies of values, consumer behavior, and public opinion suggest these AIs can be trained to respond in ways that are difficult to distinguish from real human participants (Grossmann et al., 2023). In fact, one research group used ChatGPT to successfully replicate several classic findings in behavioral science, including Milgram’s (1963) obedience experiment and the wisdom-of-crowds phenomenon (Aher, Arriaga, & Kalai, 2023). Another recent study used ChatGPT to try to replicate the results of several experiments in which people made moral judgments of various scenarios. Much to the researchers’ surprise, the AI’s responses were almost perfectly correlated with those of actual human participants in the earlier studies (Dillion et al., in press).

Challenges and Opportunities for Researchers

Bots have become an increasing problem for psychologists and other researchers who conduct studies online (Webb & Tangney, 2022). Until now, however, responses from these simpler programs have been relatively easy to flag and weed out. With the advent of these powerful new AI programs, doing so may become difficult or impossible. This may pose something of a crisis for psychological scientists, who have become used to conducting large experiments online (Grossmann et al., 2023). After all, if we are interested in studying how humans think and behave, we need to make sure our participants are actual human beings. Or do we?

The fact that LLMs are so good at simulating human responses may also be an opportunity. As Grossmann and colleagues (2023) argue, if LLMs can pass for humans in our experiments, a number of possibilities open up. For example, some social dynamics, such as violence or mating behavior, are difficult or unethical to capture in the lab. With human-like LLMs, these phenomena become more accessible to researchers. Using simulated participants, we could also potentially address issues like self-selection. One concern when conducting psychological research is that people who volunteer to take part in our studies may differ in some ways from those who don’t. Because LLMs are trained on the online speech of a broader and larger segment of the population, this might be less of a concern if we were to use them as participants in our work. We could also use such models to create samples of simulated participants from groups or demographics that are scarce in the real world, potentially enhancing our ability to conduct more inclusive research.

Full-Circle AI Research

Another possibility is that we might eventually use AI not only to realistically simulate human participants but also to play the role of scientist. In fields ranging from cancer research to physics, scientists have begun testing whether neural networks, a type of AI, can generate new research hypotheses. It’s conceivable that one day, perhaps not too long from now, psychological science might sometimes be a closed loop requiring little or no human input: AIs might generate hypotheses, conduct studies with AI participants, and, given some LLMs’ ability to produce fairly sophisticated academic text, write up the results. This may seem far-fetched now, but so, just a couple of years ago, did the idea that a computer program following a few human prompts written in natural language could create art that wins awards, script a film, or fool people into thinking it’s alive.

References

Aher, G., Arriaga, R. I., & Kalai, A. T. (2023). Using large language models to simulate multiple humans and replicate human subject studies. http://arxiv.org/abs/2208.10264

Dillion, D., Tandon, N., Gu, Y., & Gray, K. (in press). Can AI language models replace human participants? Trends in Cognitive Sciences.

Grossmann, I., Feinberg, M., Parker, D. C., Christakis, N. A., Tetlock, P. E., & Cunningham, W. A. (2023). AI and the transformation of social science research. Science, 380(6650), 1108-1109.

Milgram, S. (1963). Behavioral study of obedience. The Journal of Abnormal and Social Psychology, 67(4), 371-378.

Webb, M. A., & Tangney, J. P. (2022). Too good to be true: Bots and bad data from Mechanical Turk. Perspectives on Psychological Science. https://doi.org/10.1177/17456916221120027
