Artificial Intelligence

6 Principles to Design an Ethical Artificial Intelligence

10/10/19 4 min. read


“One of the most hotly debated issues in the coming years will be the meaning, advantages, setbacks and limits of technique.”

A Meditation on Technique, by Ortega y Gasset, 1939.

Have we started blaming machines now that we apply Artificial Intelligence to them?

19 October 1987: a drop in the stock market in Hong Kong spreads to the rest of the world’s financial capitals until it reaches Wall Street, leading to a 22.6% crash on the Dow Jones.

Known as ‘Black Monday’, this financial disaster can partly be blamed on automatic trading, arbitrage algorithms and chain reactions in the network of machines programmed by humans.

Many of the problems that arose then have since been resolved thanks to improved technological infrastructure, greater financial supervision and better algorithms; at no stage was there any doubt that we would continue to live with machines.

If anyone was at fault, it was the humans who programmed the machines, the fragility of the systems, and the lack of analysis of the possible negative consequences of using these technologies.

Nowadays, not much has changed, although artificial intelligence is expanding the possibilities at a startling rate: not only the benefits of its use but also its dangers.

In the same way that advances in AI and the capacity of machines can lead to better results, a lack of foresight and supervision of the designs can give rise to errors at an even faster rate.

10 applications of Artificial Intelligence in Finance

Before considering the mechanisms necessary to achieve reliable AI in different sectors, how do you think AI will transform the finance sector?

The following diagram shows its possible uses, some of which are already successfully functioning, while others will continue to advance towards new horizons:

[Diagram: AI applications in finance]

AI Ethics: learning to trust machines

“It was exciting to see her grow and both of us grow and change together. But, you know, that’s also the hard part.  Growing without growing apart. Or changing without it scaring the other person.”

From the film Her, by Spike Jonze

The potential improvements offered by AI are manifold, but due to its increasing importance in society, we need to consider the ethical implications of these algorithms and designs in order to reduce possible negative consequences.

However, is an ethical dimension necessary in AI developments?

According to the State of AI Report 2019, companies should undoubtedly consider the need for an AI review board to oversee the decisions that may be made by machines.

Let’s consider two examples. In the first, Google Translate initially failed to carry out ethical supervision of its AI, producing gender-biased translations.

[Image: Google Translate update]

We can see how, following the update, it corrected this gender bias in its translations.

Here’s a second example: do you remember the chatbot created by Microsoft in 2016 on Twitter? In less than 24 hours it went from being a friendly soul with tweets like:

[Tweet: Tay being friendly]

to making comments that were far from ethical only a matter of hours after its release, such as the following:

[Tweets: Tay expressing hatred of feminists and praise for Hitler]

Without a doubt, incorporating an ethics component in the implementation designs and algorithms could have changed Tay’s brief history.
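As a purely illustrative sketch of what such an ethics component could look like in practice, the snippet below filters a chatbot's reply before it is posted. The blocklist, function names and review message are all hypothetical, and a real system would need far more sophisticated moderation than keyword matching:

```python
# Hypothetical, minimal output filter a chatbot could run before posting
# a learned reply. The blocklist here is a toy example, not a real policy.

BLOCKED_TOPICS = {"hitler", "genocide"}  # hypothetical, hand-curated list


def is_safe_to_post(reply: str) -> bool:
    """Return False if the reply mentions any blocked topic."""
    words = {w.strip(".,!?").lower() for w in reply.split()}
    return words.isdisjoint(BLOCKED_TOPICS)


def post(reply: str) -> str:
    """Hold unsafe replies for human review instead of publishing them."""
    if not is_safe_to_post(reply):
        return "[held for human review]"
    return reply
```

Even a crude gate like this illustrates the design principle: the model proposes, but a supervised safety layer decides what actually reaches the public.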

To prevent these errors from recurring in the future, we need to define the ethical principles that all AI should apply to ensure that humans and machines can continue to grow together. It’s not a case of anthropomorphising AI, but rather of inserting certain ‘ethical’ components to create an AI that focuses on people and their progress, in order to prevent undesirable deviations like those in the above examples.

To design a reliable and ethical AI, we need to consider these six principles:

[Diagram: six principles for designing an ethical Artificial Intelligence]

AI supervision and development should take these principles into account at all times. Anticipation is vital to achieving adequate progress. Safety measures should also be put in place to prevent these principles from being violated. For example, Google has already developed a ‘big red button’ that disables all advanced algorithms.
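The ‘big red button’ idea can be sketched as a shared stop flag that an automated agent checks on every iteration, so a human supervisor can halt it at any moment. This is a hypothetical illustration of the concept, not Google's actual mechanism; the names `stop_flag` and `run_agent` are invented for the example:

```python
import threading

# A shared flag: "pressing the big red button" means calling stop_flag.set()
stop_flag = threading.Event()


def run_agent(max_steps: int) -> int:
    """Run an automated loop until finished or until the button is pressed."""
    steps_taken = 0
    for _ in range(max_steps):
        if stop_flag.is_set():  # human override takes priority over the task
            break
        steps_taken += 1        # placeholder for one unit of agent work
    return steps_taken
```

Because the flag can be set from another thread (or a supervision process), the agent can be interrupted without waiting for it to decide, on its own, to stop.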

This button and other similar measures may well have prevented Black Monday in 1987. Nevertheless, just like back then, we’ll undoubtedly continue to trust in technological progress, because it would be just as bad to fail to control AI mechanisms as it would be to stop enjoying all their benefits.

Jesús López

Santander Global Tech

Inquisitive, restless and interested in indisputable things.
