Why We Worry About the Ethics of Artificial Intelligence

Written by Jared Sylvester

The issue that looms behind all this, however, is the fact that we can’t put the genie back in the bottle. We can’t undo the Stanford research now that it’s been published. As a community, we will forever be accountable for the technology that we create.

In the age of AI, corporate and personal values take on new importance. We have to decide what we stand for and use that as a measure to evaluate our decisions. We can’t wait for issues to present themselves. We must be proactive and think in hypotheticals to anticipate the situations we will inevitably face.

Be assured that every organization will be faced with hard choices related to AI—choices that could hurt the bottom line or, worse, harm the well-being of people now or in the future. We will need to decide, for example, if and how we want to be involved in government efforts to vet immigrants or create technology that could ultimately help hackers. If we fail to accept that these choices inevitably exist, we run the risk of compromising our values. We need to stand strong in our beliefs and live the values we espouse for ourselves, our organizations, and our field of study. Ethics, like many things, is a slippery slope: compromising once makes it easier to compromise the next time.

We must also recognize that the values of others may not mirror our own. We should approach those situations with empathy. Instead of reacting in anger or defensiveness, we should use them as an opportunity to have a meaningful dialog around ethics and values. When others raise concerns about our own actions, we must welcome those conversations with humility and civility. Only then can we move forward as a community.  

Machines are neither moral nor immoral. We must work together to ensure they behave in a way that benefits, not harms, humanity. We don’t purport to have the answers to these complex issues. We simply ask that we all keep asking the right questions.

We’re not the only ones discussing these issues. Check out this Medium post by the NSF-funded group Pervasive Data Ethics for Computational Research, Kate Crawford’s amazing NIPS keynote, and Mustafa Suleyman’s recent essay in Wired UK.