Artificial Intelligence: The Imperative Role of Ethics in its Development

Emmanuel Tchividjian, The Markus Gabriel Group

Artificial Intelligence has, and will continue to have, enormous potential for improving the lives of millions. It may very well be the New Frontier of our century. Today, AI is used extensively and beneficially in practically every field of our daily lives: healthcare, the automotive industry, finance, the economy, politics, justice and marketing.

Towards Data Science has compiled the following statistics: According to Adobe, only 15% of enterprises are using AI today, but 31% expect to add it over the coming 12 months. International Data Corporation (IDC) predicts a compound annual growth rate of 50.1% for global spending on AI, reaching $57.6 billion by 2021. Accenture expects the AI healthcare market to hit $6.6 billion by 2021. On the negative side, PwC found that 38% of U.S. jobs are vulnerable to being replaced by AI in the next 15 years.

Such transformative innovation comes with significant risks. Bill Gates is aware of those risks. He recently said: “Humans should be worried about the threat posed by artificial intelligence.” Stephen Hawking believed that “AI is likely to be the best or the worst thing to happen to humanity.” Elon Musk thinks that AI could be “potentially more dangerous than nukes.”

A great number of thought leaders fear that AI will bring unintended and apocalyptic consequences.

Sean Haylock, who teaches philosophy and literature at Flinders University in Australia, wrote in his article Which Intelligence? Whose Artifice? that “AI is to be feared not because it may turn its superior intelligence against us and condemn us to mere annihilation, but because it may redefine in appalling ways, on an individual as well as on a civilizational scale, what it means to be human.”

Dr. Calestous Juma, now deceased, was the Director of the Science, Technology, and Globalization Project at the Harvard Kennedy School of Government. He had an interesting point of view on why people oppose new technologies. In his book Innovation and Its Enemies: Why People Resist New Technologies, he argued that “our sense of what it means to be human lies at the root of some of the skepticism about technological innovation.” He believed that the way forward is to “bring back the Social into Entrepreneurship.” This means exploring new ways by which enterprises can contribute to the common good.

Academia is working to understand the possible consequences of AI development. Harvard’s AI Initiative: A Civic Debate on Governance has the mission to “help shape AI policy framework.” “This civic consultation proactively engages citizens, practitioners, world experts, and researchers working on AI, robotics, cyber, public policy, international relations and economics. The aim of the civic debate is to trigger a broad and inclusive dialogue which informs our understanding of the dynamics and consequences of the rise of AI and how to govern the current technological revolution.”

The University of Oxford Computer Science Department’s project, Towards a Code of Ethics for Artificial Intelligence Research, is investigating the safe and ethical use of artificial intelligence, laying foundations that will have far-reaching societal impact as these technologies continue to progress. The project is exploring ways to keep AI beneficial to humanity.

Major tech companies such as Apple, Google, Facebook and Microsoft are members of the Partnership on AI, whose goal is to examine the social impact of AI and address important issues such as safety, values and ethics. Tim Cook, the CEO of Apple, said: “What all of us must do is to make sure we are using AI in a way that is for the benefit of humanity, not to the detriment of humanity.”

Scientific discoveries and innovation have often raised philosophical and moral issues, particularly in the application of those discoveries. In the field of ethics, AI is largely uncharted territory.

What are the guiding principles that will help us think through the role of ethics in AI?

Let me suggest a few.

First, a reassuring thought: AI will never replace the human spirit and conscience. It will never replace our humanity. That is why we need to make every effort so that:

  1. The human mind does not lose control of the machine.
  2. We do not allow AI to violate our basic human rights.
  3. The applications of AI do not violate basic ethical values such as empathy, fairness, honesty and transparency.

What should we do? What action should we, as communication professionals, take to ensure that ethics is taken into consideration in the development of AI, and that ethicists have a seat at the table?

I suggest that we use our influence to engage and compel the scientific community, the business community, as well as government agencies, including the military, to incorporate fundamental ethical values in their developing AI policies.

We should also encourage watchdog organizations such as Human Rights Watch, the ACLU, the Center for Digital Democracy, CorpWatch and others to do the same.

We must not shy away from that responsibility; it is a critical and urgent matter.

As Antoine de Saint-Exupéry once said:

“The time for action is now. It’s never too late to do something.”

About the Author: Mr. Tchividjian is the principal and owner of The Markus Gabriel Group, an ethics and communication consulting practice. Previously, he was the ethics officer of the PR firm Ruder Finn. He was certified as a Compliance & Ethics Professional by the SCCE in 2006. He is the Ethics Officer of the New York Chapter of the Public Relations Society of America and a member (ex officio) of its National Board of Ethics and Professional Standards. Phone: 646-209-0711