Dr. Emmanuel R. Goffi is a philosopher of technology who co-founded and co-directs the Global AI Ethics Institute in Paris. He holds research affiliations with the Big Data Lab at Goethe Universität in Frankfurt, Germany, and the Centre for Defence and Security Studies in Winnipeg, Canada. Drawing on 27 years of experience in the French Air Force, Dr. Goffi brings a unique perspective to his academic work. Over a 15-year academic career in France, Canada, and Germany, he has taught and conducted research in international relations and ethics, and he is a highly sought-after speaker and lecturer on these topics worldwide. Dr. Goffi earned his PhD in Political Science from Sciences Po-CERI.

1. Can the Internet be considered a no-man’s-land? If not, what limits it?

The Internet is largely a normative vacuum in which anyone can say almost anything without being held accountable. Its power has become so great that it is now used everywhere.

2. What regulates behavior on the Internet? Laws? Ethics? The rules of cyberspace? Are the fundamental ethical principles that guide our behavior offline also applicable in the online realm, or does the digital environment necessitate a completely distinct set of standards?

Nothing really regulates behavior on the Internet. Legal instruments have been set up in some countries, yet their effectiveness is still to be proved. Besides, regulations are uneven at the international level, so they apply only within a given country. The issue is that they must go along with formal sanctions, which places them in the realm of law rather than ethics, and consequently with the resources to monitor the Internet and sue rule violators. This is time-consuming and carries a cost that many countries are not ready to assume.

As far as ethics is concerned, human beings are hidden behind digital behaviors, so their ethics are the same as in the non-digital world. For some actors, it is clear that being hidden helps them behave in ways they would not dare to elsewhere. Being hidden gives you a feeling of impunity. Acting through keyboards and screens creates a moral buffer that allows more freedom to express our dark sides.

Nonetheless, until there are fully autonomous artificial intelligence systems making decisions by themselves, the human beings who act behind the veil of computers apply the same moral rules as all of us. The difference is that our choices are less constrained, so we can more easily free ourselves from our social inhibitions and express our darkest side.

3. Generally speaking, free-of-charge online services (search engines and social media, for example) commercialize personalized ads based on users’ online activity. Is it fair and ethical to make users pay with customized offers for the use of a service?

That is a tricky question. My point would be to stress that ethical does not mean good; the two words are not synonymous. Strictly speaking, everything is ethical, since ethics is the appraisal of good and bad. The real question is: is it ethically acceptable? The answer then depends on whom you are asking. For those who benefit from these practices, they are perfectly acceptable: they are huge sources of revenue, they provide some people with jobs and a salary, and their services help consumers choose faster and better. For those who disagree, they are not acceptable. The issue here is that, most of the time, there is huge hypocrisy from those who complain about this kind of practice. Indeed, we all hand over tons of data without even wondering how they will be used. We like to flaunt ourselves on social networks, providing others with very personal data that can be used in harmful ways.

Most of the time, we forget that, as users, we are responsible for what happens on the net. It is very human, and very comfortable, to blame others without questioning one’s own behavior.

4. How can one deal ethically with users’ privacy and data storage issues?

That is impossible. First of all, we are addressing the issue in a biased way. We try to offer a Western solution. Privacy does not mean the same thing if you approach it through a Buddhist perspective. Privacy is related to the reification of the individual, putting the self ahead of the group. In the Buddhist tradition, the Self does not exist, so individuals are seen as cogs in a wider ecosystem. They are part of a society towards which they are accountable. So, their privacy is limited by their accountability towards others. This relational ethics can also be found in other wisdoms and traditions, such as Ubuntu in Africa and Hinduism. That means in some places, privacy is irrelevant or very different from our Western perspective.

Consequently, when addressing the question, you have to choose: either you accept that, due to ethical particularism, solutions will be local, or you fall into moral absolutism and impose one solution on other cultures. If you go the latter way, you can be sure that the rules will not be applied.

5. What is your opinion about selling users’ data for merchandising?

I do not have any issue with that. As I mentioned, we are all grown-ups, responsible for the personal data we leave on the Internet. So, if you give personal information to websites and social networks, you cannot complain that some people will merchandize it. We are now very well informed about the risks associated with personal data, so we cannot pretend we are mere victims.

Once again, it is a matter of individual responsibility. I fear we are now too lazy to think. We live in a hedonistic society where our comfort surpasses any other consideration. Anything that requires effort is rejected, and we quickly fall back on easy options.

If we were less lazy, we would first ask ourselves what kind of society we want to live in and what we want to hand over to future generations. Then we might discover that the quest for happiness is relational and not individual. Happiness goes hand in hand with some suffering, and a life cleared of difficulties cannot be satisfying. This intellectual effort we are no longer ready to make.

The paradox is that we see that some things are not acceptable. But it is way more comfortable to give up our free will to others who will make decisions for us than to decide by ourselves.

Finally, we accept being instrumentalized and used for financial and political purposes.

Here, there is a strong need for revitalizing philosophical debates on what we are as human beings and what we ultimately want.

6. Considering that fake news is present in everyday life, can it be an example of unethical behavior?

Fake news is nothing new. What is new is the speed and ease with which it spreads. Considering my previous comment, it is important to remember that unethical does not mean anything here. Some might find fake news ethically unacceptable; others will see it as justifiable. No one holds any absolute truth.

So, fake news is like the lies or deception strategies used in the military and political realms: it is a tool for specific ends.

To assess their ethical acceptability, you have to focus on specific cases in specific situations. Any other universal standpoint would lead to moral absolutism.

Again, there is some hypocrisy in saying that fake news is “unethical.” If we were less intellectually lazy, we would look for better and more reliable information by using and comparing different sources to check whether the information is true. So, we are condemning something that exists only because we do not want to spend time double-checking information.

7. What is your opinion about creating AI algorithms free of “human teaching”? Considering some cases of racism reported by the media, should the AI creator be held responsible for an act contrary to ethics?

I do not see how, at least today, we could build such an algorithm. Because humans provide the data that algorithms learn from, algorithms are taught by humans. It will remain so well after humankind has disappeared from the globe.

Regarding racism, I do believe this needs to be contextualized. First, racism is an ill-defined word, often used to label people without checking whether they actually are racist. Racism speaks to everybody, but not everybody is able to define it.

So, where there are real cases of racism, some people must be held responsible depending on the situation. But as in any democratic judicial system, responsibility cannot be established a priori. It will be set on a case-by-case basis through a thorough investigation that will determine who was knowledgeable in the matter, to what extent, whether things have been done purposely, and whether they could have been done differently. It is too easy and ethically unacceptable to point someone out as a culprit beforehand.

8. Do you think we will achieve an AI equivalent to the human being, at some point (complete-AI)? Does mankind need this?

I do believe we will reach the point where we have intelligent machines that look like human beings and have the same abilities. Lots of work and money are invested in research going that way. If you look at the past two decades, you will see that we have already progressed towards some form of post-humanism.

We may be moving towards human-like machines, or towards a new kind of sentient being made either of flesh and technology or of technology alone. I believe we are already at a point where we are all exocyborgs, augmented by cars, glasses, cell phones, and other technological tools.

We see two trends: on the one hand, more and more augmented human beings, and on the other, more and more intelligent machines. This will inevitably lead to something new that will not be a change of paradigm but the mere continuation of life through other means. Machines will not “replace” human beings; they are the next step of our evolution.

9. How can AI be exploited to benefit humanity, in general?

There is no way AI could benefit humankind in general. At best, it benefits the greatest number. Yet, AI will benefit mankind only if it does not play the role of a Trojan horse for Western interests and if the debate on this transforming technology opens to different voices in the non-Western world.

AI can be both beneficial and dangerous. We need a real debate in which people listen to each other without rejecting any option out of hand. We need to question our Western tropism of thinking that we hold the truth. Consequently, we need debates on a new philosophical perspective on AI. So far, we have addressed the topic through a Western lens and with a highly superficial understanding of ethics, mostly used as a communication tool. This cosm-ethics, namely the reassuring discourse built on ethics, is problematic since it kills reflection. At the same time, our intellectual laziness allows others to shape the discourse, our perceptions, and our behaviours.