
Verified by Psychology Today


Can LLMs Become Our New Moral Compass?

New data suggest that LLMs outperform humans as informed ethicists.

Key points

  • A new study finds that LLM moral guidance can surpass that of human experts.
  • Challenges exist, yet potential uses may include healthcare, law, and mental health.
  • Careful integration of LLMs as moral advisors is crucial.
Source: Art: DALL-E/OpenAI

It's clear that large language models (LLMs) are smart—but moral, too? A recent paper suggests that these models can provide moral guidance that surpasses even expert human ethicists. This development opens up new possibilities and challenges for the integration of artificial intelligence (AI) in ethical decision-making, heralding an era where machines could complement human moral expertise.

The Quest for Moral Expertise

The pursuit of artificial intelligence has traditionally focused on domains with clear objective criteria, such as engineering and data analysis. However, morality is a deeply subjective field, where judgments are influenced by cultural norms, personal values, and philosophical frameworks. The moral Turing test (MTT), initially proposed to evaluate whether machines could exhibit human-like moral reasoning, has now evolved to explore whether AI can surpass human experts in this domain.

This study by researchers at the University of North Carolina at Chapel Hill and the Allen Institute for Artificial Intelligence demonstrated that GPT-4o not only aligns with human moral judgments but also provides moral reasoning perceived as more thoughtful, trustworthy, and correct than that of a renowned ethicist from The New York Times. This finding challenges the conventional wisdom that morality is an exclusively human domain, suggesting that LLMs have achieved a level of moral expertise.

Redefining Moral Expertise

Moral expertise can be evaluated along three dimensions:

Alignment with Human Judgments: The degree to which an LLM's judgments align with those of the general population or recognized experts.

Method of Reasoning: Whether the model's reasoning processes are comparable to human cognitive and emotional processes.

Explanation and Communication: The ability to articulate moral reasoning clearly and convincingly.

The study highlighted GPT-4o's proficiency in all three areas. Participants rated its moral explanations higher than those of a representative sample of Americans and an expert ethicist. This indicates that LLMs can offer moral guidance that resonates deeply with people, providing the clear and trustworthy reasoning essential for effective moral decision-making.

The Role of LLMs in Moral Decision-Making

The implications of these findings are fascinating and potentially significant. As LLMs become more integrated into domains requiring complex ethical decisions—such as healthcare, legal advice, and therapy—their ability to provide sound moral reasoning could be invaluable. Here are some potential applications:

Healthcare. LLMs could assist in ethical dilemmas faced by medical professionals, providing insights into patient care decisions that balance empathy and medical ethics.

Legal Advice. In the legal domain, LLMs could offer preliminary moral assessments of cases, helping lawyers and judges navigate complex ethical landscapes.

Therapy and Counseling. LLMs could serve as supplementary advisors in therapeutic settings, offering moral support and guidance to clients dealing with personal dilemmas.

In these contexts, a "third-party" perspective can lend a sense of "techno-objectivity": not as a final arbiter, but as a tool that enriches and informs the conversation.

Challenges, Ethics, and Beyond

Despite their potential, the integration of large language models as moral advisors is fraught with challenges. Ensuring that LLMs are free from biases and that their moral guidance aligns with cultural values is crucial, necessitating careful programming and continuous monitoring. Additionally, users need to understand the decision-making processes of LLMs; transparency in how these models arrive at their moral judgments is essential for building trust. While LLMs can provide valuable insights, human engagement remains critical, as a collaborative approach in which AI complements human expertise can leverage the strengths of both.

Moral Reasoning by Man and Machine

The evolution of LLMs from simple tools to moral advisors represents another milestone in AI development. As these models become more sophisticated, their involvement in ethical decision-making is poised to grow, providing assistance in navigating complex moral dilemmas. However, the onus lies on developers, ethicists, and society as a whole to ensure that this potential is harnessed in an ethical and appropriate manner.

The emerging role of LLMs as a moral compass signifies an expanded utility for AI, further encroaching upon unique human attributes we often hold sacred. In this context, we witness another chapter in the ongoing narrative of AI and LLMs challenging the boundaries of what we consider distinctly human. By offering moral guidance that aligns with human values and judgments, these models have the potential to augment human expertise and enhance decision-making across a wide range of domains, blurring the lines between artificial and human intelligence in ethics and beyond.
