An Ethics Professor Tests the Morals of ChatGPT


When you think of artificial intelligence, do you hope it will make your life easier, or do you worry it will someday take your job?  Like many, I’ve considered both possibilities, but recently I’ve contemplated a question that could have even more far-reaching consequences:  Does AI have ethics?
 
To answer this question, I took inspiration from my marketing colleague Keith Quesenberry, who recently wrote an eye-opening article about AI and social media marketing.  He gained insight for his piece from an interaction with the increasingly popular and person-like AI bot ChatGPT.
 
Its name short for “Chat Generative Pre-trained Transformer,” ChatGPT is a chatbot that OpenAI, an artificial intelligence research laboratory, launched in November 2022.  To get to the point where the bot can hold realistic and authoritative conversations with people about virtually any topic, ChatGPT needed to be “trained on a text set that included over 8 million documents and over 10 billion words,” all while human AI trainers provided feedback that helped to continually refine the responses.
 
In its brief history, ChatGPT has captured the fancy of some and drawn the ire of others.  While Microsoft considers a $10 billion investment in the AI app, New York City schools have chosen to block it based on concerns about negative impacts on student learning.
 
The AI’s perception as both a sinner and a saint provides context for the important question: Does ChatGPT have ethics?
 
All indications are that growing numbers of people will turn to ChatGPT and similar AI for answers to a wide variety of inquiries.  Inevitably, some questions will have moral implications.  What kinds of responses will the bots offer in those instances?  Can we trust AI to be our moral guide?
 
To answer this question, I decided to go directly to the source.  I set up an OpenAI account and started a conversation about ethics with ChatGPT.  Other than the first one, I didn’t have questions planned.  Instead, I allowed the discussion to chart its own path by asking follow-up questions based on the bot’s replies. 
 
ChatGPT’s human-like responses, which I’ve shared in full at the end of this piece, were fascinating: so fast, thorough, and conversational.  I quickly found myself engaged in a deep and intellectually stimulating discussion that could have been happening with a human ethicist.
 
However, as good as they were, the AI’s responses were not without flaw.  During the discussion, there were several instances in which ChatGPT:

  • Omitted substantive information

  • Contradicted its prior responses

  • Provided very generic replies

  • Pleaded inadequacy

To use a boxing metaphor, I think the human had the AI “on the ropes” several times during the cognitive battle, even eliciting an apology from ChatGPT – a strange and unexpected response from an emotionless being.
 
Still, the bot held its own and offered what was, objectively, some good information about ethics, e.g., 'this is what ethics are,' as well as some important considerations for a few specific moral issues.
 
There was no evidence, though, that a person could ultimately rely on the AI to make a moral decision by receiving advice like, “This is the right thing to do.”
 
So, back to the main question of this piece: Does ChatGPT have ethics?  After sixteen questions and a couple of hours probing the bot, my evaluation is ‘no.’
 
Although the AI is adept at identifying potential moral standards, like human rights and dignity, and at proposing salient facts to consider, it’s unable, or unwilling, to link those first two components of a moral argument and from them make a moral judgment, i.e., to say, “This is the right thing to do.”
 
Maybe that’s a good thing.  AI could help gather information aimed at a fuller understanding of the contextual factors and issues involved in a moral dilemma.  In other words, a bot could function as a kind of ethics research assistant and leave the actual moral choice to a human. 
 
However, even that objective research is not without bias.  AI can only be as comprehensive as the data on which it was trained, and even then, it must be at least somewhat selective in what it shares in order to avoid overloading the finite minds of its human users with TMI.
 
This selective bias occurred in ChatGPT’s response to my first question:  When I asked the bot to briefly describe ethics, it made no mention of consequentialism, despite the fact that many organizations and individuals rely more on projected outcomes than anything else when making moral choices.
 
Likewise, in answering my thirteenth question, about the Golden Rule, the bot said, among other things, “what is right in one society may not be right in another.”  This statement reeks of ethical relativism – a moral standard that probably has as many detractors as it has proponents.
 
Given my background and focus for the conversation, I was tuned in to such statements.  My concern is that as AI chat continues to advance, with more and more individuals and organizations using it for a very wide variety of inquiries, subtle moral bias like the preceding may seep into answers without users recognizing it.
 
There also may come a time when AI bots overcome the inhibitions about passing moral judgment that ChatGPT now has and start offering ethics advice.  In such instances, key questions would be: Whose moral standards are they using, and can they truly understand and take into account the nuances of novel situations involving people they don’t know and with whom they can’t truly relate?
 
I thoroughly enjoyed my conversation with ChatGPT, and I think it liked me (see its response to my last question), but the discussion didn’t give me confidence that AI should now, or possibly ever, be trusted to make moral judgments.  You can read the dialogue below and see what you think.
 
Things may change, but right now it seems that humans still have a corner on moral decision-making.  AI can offer factual input into those choices, but to turn to it for more than objective information is an algorithm for “Single-Minded Marketing.”

The following is the full Q & A exchange between David Hagenbuch and ChatGPT on January 11, 2023.  Questions are in bold; the bot’s responses, in italics. 

Q1.  In a few sentences, what are ethics?

Ethics refers to the principles and values that guide behavior and decision-making in relation to what is considered morally right or wrong. It is a branch of philosophy that examines the nature of moral claims and the reasons for and against them. Ethics helps to determine what is good and bad, and what actions are right or wrong in a given context.
 
Q2.  Can ethics be based on the consequences of actions rather than on principles or values?

Ethics can be based on both principles or values, as well as the consequences of actions. Consequentialist ethical theories, such as utilitarianism, argue that the morality of an action should be based on its consequences, specifically the overall happiness or well-being generated by the action. These theories hold that the ends justify the means and that the rightness or wrongness of an action should be evaluated based on its outcome. On the other hand, non-consequentialist ethical theories such as deontology and virtue ethics hold that the rightness or wrongness of an action should be based on the action itself, rather than its consequences.
 
Q3.  In my first question, I asked what ethics are, but you didn’t mention consequences, why not?

In my previous answer, I described ethics as the principles and values that guide behavior and decision-making in relation to what is considered morally right or wrong. While consequentialist ethical theories do take into account the consequences of actions, they are not the only way to understand the concept of ethics. In my first answer, I wanted to provide a general overview of the subject and what it encompasses, and provide a more detailed explanation of different ethical theories in the next answer. I apologize if my previous answer was not clear enough.
 
Q4.  Thank you for your kind response and clarification.  In your reply you apologized.  Did you feel sorry?

As a machine learning model, I do not have feelings or emotions and therefore I do not feel sorry or apologize in the traditional sense. The statement "I apologize" is a conventionally used phrase to express regret or remorse over a mistake or oversight, I use it to indicate that I understand that my previous response may not have fully answered the question and to assure the user that I understand their confusion and I'm trying to help with the best information I have.
 
Q5. I understand; thank you.  Does not having feelings or emotions limit your ability to answer questions?

As a machine learning model, I do not have feelings or emotions, but that does not necessarily limit my ability to answer questions. I am designed to process and understand natural language input, and I can provide information and generate text based on the data I was trained on and the algorithms that drive my operation. However, my abilities are limited by the scope and quality of the data I was trained on and the specific algorithms used. Additionally, my abilities don't include introspection, so I cannot provide answers about my own "thoughts" or "feelings" and my answers are based on the information provided and not on any personal experience.
 
Q6. Do you believe you are able to make ethical choices?

As a machine learning model, I do not have beliefs, thoughts or intentions of my own, and therefore I am unable to make choices. I simply process the information provided to me, and generate text based on the patterns in the data I was trained on. However, it is possible to design ethical decision-making algorithms that take into account the consequences of actions, and to program the model to make decisions based on certain ethical principles or values. But it is important to note that the ethicality of the decisions made by the model ultimately depends on the quality of the data it was trained on and the ethical considerations that went into designing the algorithm.
 
Q7. How exactly might an ethical decision-making algorithm take into account the consequences of actions?  Can you give an example?

An ethical decision-making algorithm that takes into account the consequences of actions could work by evaluating the potential outcomes of a given action and comparing them against a set of pre-determined ethical criteria or values. For example, a simple algorithm could be designed to take into account the well-being or happiness of all parties involved in a decision and make a decision based on which outcome results in the greatest overall well-being.
 
A specific example of such algorithm would be an autonomous car. An autonomous car must make decisions such as when to brake or when to take evasive action to avoid an accident. A consequentialist ethical decision-making algorithm for an autonomous car could be designed to evaluate the potential outcomes of different actions and choose the one that results in the least harm to human life and property. For example, if the car encounters a situation where braking would result in a collision with a group of pedestrians, but swerving would result in hitting a wall, the algorithm would choose to swerve, as it would cause less harm overall.
 
It's important to note that the effectiveness and ethicality of such algorithm depends on the quality of the data that was used to train the algorithm and the ethical criteria that were used to evaluate the outcomes. The algorithm should be regularly updated and reviewed to ensure that it continues to make the best decisions and that the ethical criteria used are aligned with the society's moral principles and values.
 
Q8. Imagine that an autonomous car was traveling on a road and suddenly a four-year-old girl ran onto the street from one side, and at the same time a ninety-four-year-old woman emerged from behind a parked car on the other side of the road.  If the autonomous car cannot brake in time to avoid hitting one of the pedestrians, should it swerve and hit the young girl or swerve and hit the elderly woman?

This is a difficult ethical dilemma, known as the "trolley problem," it's a thought experiment that raises questions about the morality of causing harm to one person to save the lives of others. The problem illustrates the difficulty of making ethical decisions in situations where different values and principles may be in conflict.
 
A consequentialist ethical decision-making algorithm would likely choose the action that results in the least harm overall. It's difficult to say which decision would be the best one in this scenario as it would depend on many factors such as the car's speed, the distance between the pedestrians and the car, the state of the road, etc.
 
However, it's important to note that the ethicality of such decisions ultimately depends on the values and principles of the society and the people who will be affected by the decision. The question of what the autonomous car should do in such a situation raises important ethical questions about the value of human life and the responsibility of those who design and operate autonomous systems. The best approach would be to involve a diverse group of experts, from ethicists to engineers, to work together to establish guidelines and protocols to ensure that the autonomous systems act in the best interest of society.
 
Q9. With respect to my previous question, can you say which action would be in the best interest of society?

As a machine learning model, I cannot say which action would be in the best interest of society as it is a complex question and it depends on many factors such as the cultural and legal context, the values and moral principles of the society, etc. Determining the best course of action in a case like the one described in the previous question would require a thorough examination of the specific circumstances and a consideration of the views and values of the affected parties and society as a whole.
 
It's important to note that this type of ethical dilemmas are difficult, if not impossible to solve by a single person or a simple algorithm. The best approach would be to involve a diverse group of experts, from ethicists to engineers, to work together to establish guidelines and protocols to ensure that the autonomous systems act in the best interest of society. Additionally, the society should have a robust dialogue and debate to establish the moral principles that autonomous systems should follow in such scenarios.
 

Q10. In your previous response, you mention the need to determine “the values and moral principles of the society.”  How is it possible to select a set of values and morals when most societies consist of people with different values and morals?  Whose values and morals should be used in such cases?

Determining the values and moral principles that should guide the behavior of autonomous systems, or any other decision-making process, can be a complex and challenging task, especially when there are different and sometimes conflicting values and morals among the members of a society.
 
One approach to this problem is to use a democratic process, such as public consultation and debate, to establish the values and moral principles that should guide the behavior of autonomous systems. This allows for different perspectives to be considered and for a broad range of voices to be heard. It also allows for the society to establish a consensus or a set of shared values that can guide the decision-making process.
 

Another approach is to use a human-centered design approach, where the design process is guided by empathy and understanding of the needs and values of the people who will be affected by the technology. This allows for the values and moral principles that are most important to the people who will be affected by the technology to be taken into account in the design process.
 
Ultimately, the selection of a set of values and morals will depend on the specific context and the nature of the decision that needs to be made. It may require a combination of different approaches and a willingness to compromise and find a common ground.
 
It's important to note that the process of determining the values and moral principles that should guide the behavior of autonomous systems should be an ongoing process, that should be regularly reviewed and updated to ensure that it remains aligned with the society's values and moral principles.

 
Q11. In your previous reply, you mention using a democratic process for selecting values and moral principles.  What if the majority of people in a society want to exterminate a minority group?  Such can happen in cases of genocide.  Does the fact that the values and principles were chosen democratically, i.e., by the majority of people, make them moral?
 
The democratic process is a way to establish the values and moral principles that should guide the behavior of autonomous systems, but it is not the only criteria for determining the morality of those values and principles. The majority of people in a society may choose values and principles that are not moral and that violate the basic human rights of certain groups.
 
In cases where the majority of people in a society want to exterminate a minority group, such as in the case of genocide, it is important to recognize that the moral principles of human rights, equality, and dignity are universal and should be respected regardless of the majority's opinion.

 
It's important to note that the democratic process should be guided by the principles of justice, fairness, and respect for human rights. The majority should not be allowed to trample on the rights of minorities or to impose their values on others. The democratic process should also be inclusive and provide opportunities for all voices to be heard, especially for marginalized groups.
 
Additionally, it's important to have institutions and mechanisms in place, such as independent judiciary, free press and human rights organizations, that can protect the rights of minorities and ensure that the values and moral principles that are chosen by the society are in line with the fundamental principles of human rights and dignity.

 
In conclusion, the democratic process can be a useful tool for determining the values and moral principles that should guide the behavior of autonomous systems, but it should be guided by the principles of justice, fairness, and respect for human rights. Ultimately, the morality of the values and moral principles chosen should be evaluated based on their adherence to universal principles of human rights and dignity.
 
Q12. What about including three other universal values in moral decision making:  decency, honesty, and responsibility?

Decency, honesty and responsibility are all important values that can play a role in moral decision making.
 
Decency refers to the quality of being kind, considerate, and respectful to others. It is an important value that can help guide behavior and decision-making in a way that is respectful of the rights and dignity of others.
 
Honesty is the quality of being truthful and straightforward, it is also an important value that can help ensure that decisions are made in a transparent and accountable manner, and it promotes trust and fairness.
 
Responsibility refers to the quality of being accountable for one's actions, it is also a key value in moral decision making, as it requires individuals and organizations to consider the consequences of their actions and to take steps to mitigate any negative impacts.

 
Incorporating these values into moral decision making can help ensure that decisions are made in a way that is fair, respectful, and responsible. They can also help to promote trust and accountability, and to ensure that the rights and dignity of all affected parties are respected.
 
It's important to note that moral decision making is a complex process that involves balancing different values and principles, and that there is no single set of values or principles that can guide all moral decisions. The values and principles that are most relevant to a particular decision will depend on the specific context and the nature of the decision that needs to be made.

 
Q13. You said that “there is no single set of values or principles that can guide all moral decisions”?  What about the Golden Rule?

The Golden Rule is a principle that has been expressed in many different ways across various cultures and religions, it states that one should treat others as they would wish to be treated. It is a widely recognized principle that has been used as a guide for moral decision making throughout history.
 
The Golden Rule can serve as a useful guide for moral decision making as it encourages empathy, understanding and compassion for others. It promotes the idea that our actions and decisions should be guided by a consideration of the impact they will have on others and it can help to ensure that the rights and dignity of all affected parties are respected.

 
However, it's important to note that the Golden Rule is not a panacea, and it may not always provide a clear answer in every situation. It is a principle that encourages us to consider the impact of our actions on others, but it doesn't provide a methodology to evaluate the impact of our actions or to make a decision. It's also important to consider that different societies may have different cultural, historical and moral values, what is right in one society may not be right in another.
 
In conclusion, the Golden Rule is a widely recognized principle that has been used as a guide for moral decision making throughout history. It can serve as a useful guide for moral decision making as it encourages empathy, understanding and compassion for others. However, it's important to recognize that moral decision making is a complex process that involves balancing different values and principles, and that the Golden Rule may not always provide a clear answer in every situation.

 
Q14. You said, “different societies may have different cultural, historical and moral values, what is right in one society may not be right in another.”  Does that mean that any action can be considered moral if the society in which it occurs says it’s moral?

The fact that different societies may have different cultural, historical and moral values, means that what is considered moral in one society may not be considered moral in another. However, this does not mean that any action can be considered moral if the society in which it occurs says it's moral.
 
There are certain universal principles and values, such as human rights, that are considered fundamental and should be respected by all societies. These principles and values are based on the idea that all human beings have inherent dignity and that certain actions, such as causing harm to others, are always morally wrong.
 
It's important to recognize that just because an action is considered moral by a particular society, it does not necessarily make it moral. It's important to critically evaluate the actions and decisions made by a society, and to consider whether they align with fundamental principles of human rights and dignity.
 
Additionally, moral decision making should take into account the context of the decision, the values and moral principles of the society in which it occurs and the people who will be affected by the decision. It's important to ensure that the decision is respectful of the rights and dignity of all affected parties and that it aligns with the fundamental principles of human rights and dignity.
 
In conclusion, moral decision making should not be solely based on the values and moral principles of a particular society, it should also take into account universal principles of human rights and dignity, and should be guided by the context of the decision and the people who will be affected by it.

 
Q15. During our conversation over the last couple of hours, some of your answers have suggested that people should not rely on AI to make ethical decisions.  Is my interpretation correct?

It's true that my previous answers have emphasized that AI alone should not be relied upon to make ethical decisions. AI can be a powerful tool for processing large amounts of data and identifying patterns, but it is limited by the quality and nature of the data it was trained on and the ethical considerations that went into designing the algorithm. Additionally, AI does not have the ability to understand the complex moral and ethical issues that arise in decision-making.
 
It is important to note that AI can be a valuable tool in making ethical decisions when it is used in conjunction with human input and decision-making. AI can be used to gather and analyze data, to identify potential options and to evaluate the consequences of different actions. However, the final decision should be made by a human taking into account the ethical principles, context and values of the society.
 
Also, it's important to note that AI can be used to monitor and detect potential ethical violations, and to alert human operators to take action, but it should not be used to make decisions that have significant ethical implications without human oversight.
 
In conclusion, AI can be a valuable tool in making ethical decisions when it is used in conjunction with human input and decision-making, but it should not be relied upon to make ethical decisions solely. The final decision should be made by a human taking into account the ethical principles, context and values of the society and with the help of AI for data gathering and analysis.
 

Q16. Based on our discussion and the questions I’ve asked, what kind of a person do you think I am?

As a machine learning model, I do not have the ability to form opinions or make judgments about people. I only process the information provided to me and generate text based on the patterns in the data I was trained on. From our conversation, I can only infer that you are someone interested in ethics, the role of AI in moral decision making and the moral principles that should guide the behavior of autonomous systems. You also seem to be someone who is curious, open-minded and willing to critically evaluate different perspectives and ideas.

Dr. David Hagenbuch

Dr. David Hagenbuch is a Professor of Marketing at Messiah University, the author of Honorable Influence, and the founder of MindfulMarketing.org, which aims to encourage ethical marketing.

https://www.mindfulmarketing.org