Bruce Weinstein, Ph.D.
You can use ChatGPT, Bard, Bing and other AI-assisted chatbots to enhance your business the right way if you apply the principles of ethical intelligence.
Let’s look at several ethical questions raised by AI chatbots, including:
- How much of what the bots generate is true?
- Where does the material come from?
- How might we cause harm by using AI-generated text in our work?
- Are we at risk of damaging the trust others place in us when we use this material in our work?
Why should you bother with ethics at all? I’ll explain why and then present five principles that enable you to use artificial intelligence with ethical intelligence.
Ethics: So What?
Doing the right thing takes time and effort. It can cost money. Some of your competitors don’t care about it. Won’t you lose clients to them if you factor ethics into how you use ChatGPT?
You will not lose clients if you take ethics seriously. Just the opposite. The best leaders recognize that ethical conduct is good for its own sake. We owe some things to others, such as not harming them, and it’s wrong to overlook this.
There’s another reason why the best leaders take ethics seriously. It is in their own financial and reputational interest to do so.
The following five principles will help you use ChatGPT with ethical intelligence.
1. Do No Harm
The most fundamental principle of ethical intelligence is Do No Harm. The least we can expect of one another is that we don’t make things worse for anyone.
The good news about Do No Harm is that it is a principle of restraint. For example, if you’re driving on the highway and the car in front of you is going more slowly than you’d like, you apply the Do No Harm principle by not tailgating them or flashing your bright headlights.
Concerning ChatGPT, you avoid causing harm by not publishing any text that could hurt another person or damage your reputation. You do this through a corollary to the Do No Harm principle, Prevent Harm.
Prevent Harm
Sometimes we must take action so that harm doesn’t occur. You do this in your personal life by ensuring your toddler is secured in a car seat when you drive. Preventing harm to children in this manner is both a legal requirement and an ethical obligation in every state of the United States and in many countries.
When you use ChatGPT, you prevent harm to others and yourself through due diligence. For example, suppose you want ChatGPT to generate a short essay on three hot trends in your field. You plan to post this on your LinkedIn page and share it with a few groups you belong to. But simply cutting and pasting what the bot generates would be irresponsible. It might contain a direct quote from someone without attribution. It might contain false statements. Or both.
Researching what the chatbot gives you will help you prevent harm to others who might act upon false information. Research will also prevent harm to your reputation.
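This review can be partly systematized. As a minimal sketch, the hypothetical Python helper below (the function name and heuristics are illustrative, not a real tool) flags the two kinds of passages the text warns about: quoted material that may need attribution, and concrete claims containing figures that should be fact-checked before you publish.

```python
import re

def flag_for_review(text: str) -> list[str]:
    """Return review flags for a draft of AI-generated text.

    Heuristics only: quoted passages may be lifted verbatim and need
    attribution; sentences containing figures need fact-checking.
    A human reviewer must still make the final call.
    """
    flags = []
    # Quoted passages of five or more words may be unattributed quotes.
    for quote in re.findall(r'"([^"]+)"', text):
        if len(quote.split()) >= 5:
            flags.append(f'Possible unattributed quote: "{quote}"')
    # Sentences containing digits (statistics, dates) warrant verification.
    for sentence in re.split(r'(?<=[.!?"])\s+', text):
        if re.search(r"\d", sentence):
            flags.append(f"Verify this claim: {sentence.strip()}")
    return flags

draft = ('Analysts say "the market will grow by leaps and bounds next year." '
         'Adoption rose 40% in 2023.')
for flag in flag_for_review(draft):
    print(flag)
```

A script like this only narrows the search; it cannot tell you whether a flagged claim is true. That part of the due diligence remains yours.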
2. Make Things Better
Do No Harm and Prevent Harm are two crucial principles that smart leaders live by, but they’re not enough. Not causing harm is the least we can expect of one another.
The second principle of ethical intelligence takes us further: Make Things Better.
Your job description and your company’s mission are primarily about improving things. If everything is fine the way it is, why would anyone need to hire you or buy your products?
Applied to your use of ChatGPT, Make Things Better means checking what the bot generates for origin and accuracy. The saying, “garbage in, garbage out,” is worth taking to heart anytime you use artificial intelligence. AI is only as good as what goes into it.
3. Respect Others
The third principle of ethical intelligence is Respect Others. Three components of this principle are:
- Tell the truth
- Protect confidentiality
- Keep promises
Let’s consider how each applies to using ChatGPT the right way.
Tell the Truth
Keeping in mind “garbage in, garbage out,” the text that ChatGPT and other generative AI platforms yield is as truthful or fanciful as the information they have been given. Whether you’re using an AI bot to craft a press release, write a blog post or help you create chapters for a book, it behooves you to ensure that whatever you release into the world is true or likely to be true.
Yes, this requires more time and effort than just publishing whatever you get from the bot, but it’s the right thing to do. You might even have to spend some money and hire an expert to review what the bot has generated.
It’s easier to skip this step, but this is one example where easier isn’t better.
Another way that truth-telling demonstrates your respect for others is by acknowledging the role that ChatGPT has played in the written material you present to the world. You wouldn’t quote someone in your article or speech without giving them credit. Likewise, being transparent about how you’ve used an AI chatbot is the right thing to do.
Protect Confidentiality
Whether your field is healthcare, the law, business, education, or the government, you risk violating the duty to protect confidentiality if you don’t carefully review what ChatGPT generates before you put it out into the world.
I was once on an airplane awaiting takeoff and overheard a passenger calling in a prescription for a patient. He mentioned the patient’s full name and the name of the medication. This was a gross violation of the duty in medical ethics to protect patient confidentiality. The physician should have known better.
As of this writing, however, generative AI cannot exercise the discretion that the physician on the plane failed to show. Since the bot can’t, you must. And that requires, yet again, some time and effort in research. Do you see a recurring theme here in the proper use of ChatGPT and other chatbots?
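Part of that review can be automated. Here is a minimal, hypothetical Python sketch (the patterns are illustrative, US-centric and deliberately conservative) that scans a draft for obvious confidential identifiers before it goes out; it supplements, rather than replaces, a careful human read.

```python
import re

# Illustrative patterns for common confidential identifiers (US-centric).
PATTERNS = {
    "email address": r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b",
    "phone number": r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b",
    "SSN-like number": r"\b\d{3}-\d{2}-\d{4}\b",
}

def confidentiality_scan(text: str) -> list[str]:
    """Return warnings for text that may expose confidential details."""
    warnings = []
    for label, pattern in PATTERNS.items():
        for match in re.findall(pattern, text):
            warnings.append(f"Possible {label}: {match}")
    return warnings

draft = "Contact Jane at jane.doe@example.com or 555-867-5309 about her refill."
for warning in confidentiality_scan(draft):
    print(warning)
```

Note what such a scan cannot do: it will not recognize that naming a patient alongside a medication is itself a disclosure. Pattern matching catches mechanical slips; judgment catches the rest.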
Keep Promises
Consider the contract you’ve signed with your employer or, if you’re an entrepreneur, directly with a client. Is your contract a legal document? Yes. Is it more than that? Yes. A business contract is a two-way promise. You agree to provide certain services or products, and your company or your client pays you in return. If you or the other party reneges, the deal is off.
Suppose you publish or distribute what ChatGPT generates without carefully reviewing, fact-checking and editing it. In doing so, you break the implicit or explicit promise you have made to be a trustworthy person. As Walter Landor said, “A brand is a promise.”
What exactly are you promising the people who read your work? That it is substantially your own. Using an AI chatbot to write all or most of what you put forth breaches the faith your audience has placed in you.
Wise leaders care deeply about being trustworthy. It is through consistent ethical conduct that they earn the trust of their employers and clients.
4. Be Fair
The fourth principle of ethical intelligence is Be Fair. To be fair is to give others their due. An obstacle to this is the bias that can be embedded in the information that ChatGPT and other chatbots use to answer the questions you pose.
Again, it’s garbage in, garbage out. Suppose the written material you’re using has been shaped by biases related to age, race, gender, politics or sexual orientation. You risk perpetuating that bias by cutting and pasting whatever the bot gives you into an email, blog, social media post, or book.
A second way to use ChatGPT fairly is to ensure you’re not appropriating someone else’s intellectual property.
Once again, the ethically intelligent use of ChatGPT and the like requires human intervention.
5. Care
Care is the fifth principle of ethical intelligence. Care is a feeling about the world and a way of acting in it. You evince care in your professional life by doing something as simple as sending a handwritten thank-you note to a new client or as time-consuming as taking on a project your colleague can’t finish because of illness.
Concerning ChatGPT, you demonstrate care by double- and triple-checking the research you’ve done to ensure that what you’re about to distribute is accurate, fair and not likely to harm others or the good reputation of your business.
Where do these principles come from?
The five principles of ethical intelligence—Do No Harm, Make Things Better, Respect Others, Be Fair and Care—are derived from Tom L. Beauchamp and James F. Childress’s pioneering work, Principles of Biomedical Ethics, 8th edition (New York: Oxford University Press, 2019). I’ve made two changes:
- I simplified the language. For example, what Beauchamp and Childress refer to as the principle of nonmaleficence I call Do No Harm.
- I broadened the scope of the principles to include not only healthcare and biomedical research but also business, the law, government, education and beyond.
Also, our parents and teachers taught us these principles in one form or another. Corporate codes of conduct and values statements are based on and, in some cases, explicitly refer to these principles. The principles have a prominent place in the sacred texts of religions. (See, for example, Jeffrey Moses, Oneness: Great Principles Shared by All Religions, Revised and Expanded Edition, New York: Ballantine Books, 2002.)
What would your life be like if most of the people connected to you in some way habitually disregarded the principles of ethical intelligence?
Call to Action
You’ve just spent several minutes reading this article. You can get a significant return on your investment by doing the following two things.
- Look at ChatGPT and other AI-assisted chatbots as devices that can enhance your work, not substitute for it.
- Instead of distributing to others whatever ChatGPT says in answer to a question you ask, do your due diligence. Find out if the statements ChatGPT has made are true. See who else has been writing or saying something along these lines and cite the most trustworthy of them. Avoid cutting and pasting what the bot says and calling it your own.
The principles presented here will help you use artificial intelligence with ethical intelligence. They are a framework, not a formula, for doing the right thing. Your company, your clients and your reputation deserve nothing less than the best you can give them.