Developing Artificially Intelligent Software Systems: The Question of Ethics!

While we are still quite far off from the age of “true” sentient machines, the age of smart systems and software has already begun.

Look around and you’ll find artificial intelligence powering the applications at your workplace, the appliances in your home and routine tasks on your smartphone—“Hey Siri!”

Yet experts believe we’ve only scratched the surface of the possibilities; there is much more to come from artificial intelligence in the field of software development.

This brings us to the subject of today’s discussion:

Do we need to be wary of what we are letting into our lives from the giant-sized technology door, specifically of its ethical and moral values?

When AI Goes Rogue

In 2016, Microsoft’s teen-talking AI chatbot “Tay”—built to mimic and converse with Twitter users in real time—suddenly started spouting sexist, racist and generally awful epithets in response to the people who were tweeting at it.

Obviously Tay wasn’t programmed to behave that way, so what made the “friendly” chatbot turn into a genocide apologist and denialist?

Well, it was the app’s ability to learn from each new interaction that made it an advocate of racist and offensive ideas.

Tay was still working as intended, but ethically it was a failure.

Microsoft had to silence its AI-powered creation.
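The failure mode here is easy to see in miniature. The sketch below is purely illustrative—it is not Tay’s actual implementation—but it shows how a chatbot that naively stores and replays user phrases will echo whatever it is fed, and how even a crude (hypothetical) blocklist filter changes what it can learn:

```python
import random

# Illustrative sketch only -- NOT Tay's actual implementation.
# A bot that "learns" by memorizing every user phrase and replaying
# stored phrases later will reproduce whatever users feed it.
class NaiveChatbot:
    def __init__(self):
        self.learned = []          # phrases absorbed from users

    def listen(self, phrase):
        self.learned.append(phrase)

    def reply(self):
        return random.choice(self.learned) if self.learned else "Hi!"

# A minimal ethical safeguard: screen input against a blocklist
# before learning from it.  Real moderation is far more involved;
# the blocklist here is a stand-in.
class FilteredChatbot(NaiveChatbot):
    BLOCKLIST = {"offensive"}      # hypothetical banned terms

    def listen(self, phrase):
        if not any(bad in phrase.lower() for bad in self.BLOCKLIST):
            super().listen(phrase)

bot = FilteredChatbot()
bot.listen("hello there")
bot.listen("some offensive slur")   # rejected by the filter
print(bot.learned)                  # ['hello there']
```

The point of the contrast is that the naive bot is “working as intended” in exactly Tay’s sense: the bug is not in the learning code but in the absence of any ethical constraint on what gets learned.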

This is not the only incident that underscores this concern. Something similar happened back in 2015 with Google Photos.

Google Photos labeled a picture of two Black people as ‘gorillas’ due to a flaw in its image recognition algorithm.

Although the company later fixed the problem and apologized, the incident highlighted how flawed design and poor implementation can turn AI ethically rogue.

Where Do We Set the Boundaries with “Human-Like”?

Artificially intelligent software systems are designed to think like humans, act like humans and work like humans—albeit with much greater efficiency. They are built to save us time and money. But where do we set the boundaries with “human-like”?

There are both good and bad people in this world. There are some truly evil people as well, who do evil things and try to harm others. How do we decide which human behaviors AI-powered software systems should mimic, so that their ethical and moral values can’t be called into question?

It’s something that all developers need to think about.

About EX2 Outcoding

EX2 Outcoding is a nearshore IT service company located in Costa Rica. The company provides quality software development services, web development services and other IT solutions for small and medium-sized businesses. For more information, call +1 800 974 7219 or visit the company’s website.
