
Verified by Psychology Today

Artificial Intelligence

The Hidden Mental Manipulation of Generative AI

Artificial intelligence may be changing what you think and how you act.

Key points

  • Generative AI can be an unreliable source that spreads misinformation and bias, and perpetuates false beliefs.
  • Purchasing options and consumer decisions can be manipulated by how AI presents and delivers responses.
  • Perceived AI accuracy can limit intellectual curiosity, creativity, and scientific inquiry.
AI generated/DALL-E

The hype surrounding generative artificial intelligence (AI) models, like ChatGPT and Bard, is well publicized. The numerous benefits of AI, such as enhancing productivity and accelerating scientific knowledge, are well documented. Likewise, as with most emerging technologies, AI has evoked fear through projections of world dominance by computers, job elimination, and subversive activities orchestrated by rogue governments and extremists. Far less attention is devoted to how AI may subtly influence basic human decision-making while concurrently altering the way people think, feel, and act. In this post, I explore five ways that AI models may distort your thinking and alter your perceptions of reality.

How AI works

AI operates by using neural networks trained on vast amounts of data. These networks learn patterns and relationships in the data to generate new content, such as text, images, or audio, with personalized output based on user requests. The model's ability to generate novel and coherent output is achieved through complex mathematical computations, allowing it to respond to natural language queries with creative and contextually relevant responses. According to OpenAI (2023), these models draw on books, articles and journals, websites, Wikipedia, social media, news sources, and online databases to generate responses.
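For readers curious what "learning patterns from data" means mechanically, here is a deliberately minimal sketch in Python. It is not how ChatGPT or Bard actually work; it is a toy bigram model (a simplified stand-in I am using for illustration) that counts which word follows which in its training text and then generates output by echoing the most common continuation. The point it illustrates is the one that matters for this article: the model can only reproduce patterns, good or bad, that are present in whatever data it was trained on.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, how often each other word follows it."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def generate(follows, start, max_words=5):
    """Generate text by repeatedly picking the most frequent next word."""
    out = [start]
    while len(out) < max_words and follows[out[-1]]:
        # Greedy choice: the most common continuation wins, so the model
        # can only ever echo patterns present in its training data.
        out.append(follows[out[-1]].most_common(1)[0][0])
    return " ".join(out)

corpus = ["the model learns patterns", "the model repeats patterns"]
model = train_bigram(corpus)
print(generate(model, "the"))
```

Real generative models replace these simple counts with billions of learned neural-network parameters, but the dependence on training data is the same, and it is the root of the reliability and bias problems discussed below.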

Herein lies the problem. AI output is subject to a wide range of potential problems, including the dissemination of false information, the perpetuation of inequalities and racial stereotypes, and the manipulation of information to instill and restructure human beliefs. In practice, AI drawbacks manifest in at least five different ways, distorting beliefs more readily than ordinary human communication does (Kidd & Birhane, 2023). Understanding the psychological liabilities of generative AI models is a necessary and crucial step to maximize the effective use of this innovative technology.

1. AI information may be incorrect

Relying on the accuracy of AI output may give you a false view of reality because AI training data is sometimes unable to discriminate between data that is real or fabricated. This verification flaw results in disseminating wrong information that can lead to poor outcomes, including financial loss or personal harm (Tang et al., 2023). As a practical matter, ChatGPT output excludes any information or knowledge developed after September 2021. Let's also acknowledge that speculation from some of the smartest people in the world has perpetuated false knowledge based on personal beliefs (ClearerThinking.org, 2023), and this type of information is also included in AI training data.

2. AI information is biased

AI models inherit biases from their training data, leading to the transmission of negative stereotypes, particularly affecting marginalized populations. The biases range from facial-recognition systems misclassifying dark-skinned individuals and women, suggesting that certain qualities are more associated with particular ethnicities than others (Birhane, 2022), to AI amplifying extreme or misleading social media views based on the frequency of likes, shares, and comments, which may themselves be driven entirely by fake news (Brundage et al., 2018). Prevalent examples of AI spreading biased information include guidelines related to public health concerns (COVID-19), the electability of public officials, and consumer product information.

3. Formation and stubbornness of beliefs

Additionally, AI output based on misinformation distorts personal beliefs. Human beliefs are formed by sampling a small subset of available data and become resistant to revision once they are held with high certainty. Generative AI models give the impression of providing authoritative, verified responses, making beliefs difficult to update once we accept the information as accurate. Because users perceive AI models as knowledgeable agents, we become more resistant to efforts to correct what we think we know. Studies suggest that repeated exposure to fabricated or biased information strengthens belief in it, making such beliefs challenging to correct once they are fixed within individuals or a population (Kidd & Birhane, 2023).

4. Manipulation of buying and relationship decisions

One of the most nefarious uses of AI can be promoting selective products and services. Lab-based experimentation reveals that computer-generated consumer algorithms produce a higher probability of desired usage patterns than occurs in their absence (Dezfouli et al., 2020). In simple terms, this means AI can induce decisions in favor of specific profit-seeking actors and entities. For example, retailers such as Target use AI analysis of credit card purchases, emails, and loyalty points to detect the probability of user pregnancy and subsequently generate baby product ads for those consumers. Uber's surge pricing can reportedly increase based on your cellphone battery level, and Google delivers search results based on your historical interest patterns, which may be unrelated to a specific search query. Each of these AI-coordinated manipulations may supplant your decision-making ability without your knowledge while increasing the profits of the opportunistic organization.

5. AI inhibits intellectual curiosity

Perhaps most disturbing is the possibility that AI promotes intellectual complacency. The striving for new knowledge and the robust curiosity often seen in young children and scientific researchers may be attenuated when AI information is perceived as flawless. Consequently, prospective learners lose motivation to generate new ideas when they perceive that the answer is a simple mouse click away. Considering that intellectual doubt is a prerequisite for knowledge revision, certainty instills knowledge apathy. Ultimately, once an answer is known (or you think you know the answer), there is reduced motivation to seek additional information (FitzGibbon et al., 2020).

What it all means

We should fully assess the utility of AI by understanding both the benefits and liabilities of generative models. AI can enhance productivity and advance scientific knowledge, but it can also spread misinformation and bias and distort human thinking and reasoning. Similar to truth-in-advertising and package labeling requirements, consumer disclaimers and warnings are needed to verify the substance and reliability of AI output. Striking a balance between the benefits of AI and its potential pitfalls is vital to ensure that AI augments human knowledge and decision-making without compromising accuracy, fairness, and inclusivity.

References

Birhane, A. (2022). The unseen Black faces of AI algorithms. Nature, 610, 451–452. https://doi.org/10.1038/d41586-022-03050-7

Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., ... & Amodei, D. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228.

ClearerThinking.org (2023). False beliefs held by some of the smartest people in history. https://www.clearerthinking.org/post/false-beliefs-held-by-some-of-the-smartest-people-in-history

Dezfouli, A., Nock, R., & Dayan, P. (2020). Adversarial vulnerabilities of human decision-making. Proceedings of the National Academy of Sciences, 117(46), 29221-29228.

FitzGibbon, L., Lau, J. K. L., & Murayama, K. (2020). The seductive lure of curiosity: Information as a motivationally salient reward. Current Opinion in Behavioral Sciences, 35, 21-27.

Kidd, C., & Birhane, A. (2023). How AI can distort human beliefs. Science, 380(6651), 1222-1223.

OpenAI. (2023). ChatGPT (July 30 version) [Large language model]. https://chat.openai.com/chat

Tang, N., Yang, C., Fan, J., & Cao, L. (2023). VerifAI: Verified Generative AI. arXiv preprint arXiv:2307.02796.

More from Bobby Hoffman Ph.D.
More from Psychology Today