Interview and Illustration by Corinna Heumann
Professor Meckenstock, you are deeply involved in digital ethics. What is digital ethics about?
Digital ethics is a fairly new term. Basically, it is the logical consequence of technological progress: it is about advancing already existing ethical principles in the digitized world, and about implementing values, basic freedoms, and accountability in the domains and processes of new digital phenomena such as artificial intelligence, machine learning, and big data analytics.
A crucial problem is that many, or even most, of us hardly understand this rapid development. We simply accept the use of the new, pre-designed technical possibilities, trusting that it will all somehow be lawful. But this is a trap: we are leaving very far-reaching decisions to a relatively small number of programmers, without social or political control.
So, please, can you give an example? It’s about..?
..for example, the question of whether we want an algorithm to determine which applicant is awarded a contract. Or whether we want a nursing robot to monitor whether one of the people in its care is drinking alcohol against medical advice. Or whether we want the police to check the identity of people in certain parts of the city without any specific reason, solely on the basis of the crime statistics of past months.
Let’s take your last example, what exactly is happening there?
The accumulation of crimes in an area is used as a basis for concluding that a continuing danger exists there. This triggers the preventive deployment of police with identity checks, without any concrete indication that new crimes are about to be committed. This usually involves neighborhoods with a high proportion of socially disadvantaged groups and of migrants from certain countries.
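The extrapolation described here can be made concrete with a small sketch. The following Python snippet is a hypothetical illustration (not any real police system): it simply ranks districts by past incident counts and flags the top ones for preventive checks, and the final comment notes the feedback loop this naive logic creates.

```python
from collections import Counter

# Hypothetical incident log: one entry per recorded crime, by district.
incidents = ["North", "North", "Harbor", "North", "Harbor", "East"]

# Naive hotspot logic: count past incidents per district...
counts = Counter(incidents)

# ...and flag the districts with the most recorded crimes for
# preventive identity checks, with no concrete indication of a new crime.
hotspots = [district for district, _ in counts.most_common(2)]

print(hotspots)  # ['North', 'Harbor']

# Feedback loop: more patrols in 'North' and 'Harbor' record more
# incidents there, raising their counts in the next iteration and
# reinforcing the original statistic instead of testing it.
```

The sketch shows why a documented past does not by itself yield a valid forecast: the prediction changes where the police look, and where the police look changes what gets recorded.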
Extrapolating from the past to the future sounds reasonable, so what’s the problem?
First of all, systematic identity checks violate the personal rights of the citizens being checked, especially if the check is accompanied by a search. This infringement must be justified, and that can only be done by weighing the values affected – here, the human dignity of the individual against the protection of the general public from danger – on the basis of precisely established facts. To justify the violation of personal rights, there must be a well-defined probability that the general public is actually endangered.
Is the local accumulation of crimes not sufficient for this?
Preventive policing and predictive policing are not the same thing; they rest on different aspects of statistics and social research for the application of algorithms in police work. So in this area – as is so often the case – there is no simple causality between a documented past and a valid forecast.
However, “predictive policing” is also based on statistics and the application of algorithms?
Yes, but it is more complex than “simplified” crime statistics. It is about using digital systems to make decisions in a way that produces correct and responsible correlations. We all know that digitalization makes results possible that could not be achieved by human decisions alone.
On the other hand, there are also convincing applications of artificial intelligence in terms of basic criteria. Such programs can, for example, filter out cognitive biases such as stereotypes, preconceived ideas about certain groups of people, or pigeonhole thinking. But this must not lead to a position in which we blindly and ignorantly surrender ourselves to the “rule of algorithms.”
Is there such a thing as a “basic question” in the use of artificial intelligence?
I think so. In short: whom do we trust to do what, and to what end? What leads to better results, AI or human decisions?
The European Union sees itself as a community of values. Are its values being implemented in digitization?
The EU Commission has launched an initiative to develop clear principles for digitization, including explainability and transparency. Above all, this includes the publication of the algorithms used: it must not be the case that decisions that affect us all are made by a system without transparent, comprehensible criteria. A serious problem, however, is that technological development often outpaces the democratic establishment of a regulatory framework, and that the global companies involved have great and still increasing market power.
How can this development be controlled to safeguard the basic rights of the citizens?
I think it comes down to two factors. In addition to determined regulatory efforts by strong and competent authorities, in-depth education of the population in digital matters is essential. It is fundamental that people know exactly what is happening, and that they can make conscious decisions about the digitized domains of their personal lives: how and where they participate. Today, the vast majority of users are still mere objects of the stakeholders of digitization.