A reorientation of EU foreign policy to more closely monitor the growing power of US internet corporations like Google and Facebook has been underway for some time. The project is gaining momentum.
EU digital diplomacy
The European Union is planning to establish an embassy in Silicon Valley to strengthen its digital diplomacy. This emerged from an internal paper by the European External Action Service (EEAS), which Handelsblatt reported on in the summer of 2021. The plans have both economic and geopolitical motivations.
On the one hand, the EU aims to build a closer relationship with large tech companies such as Google and Facebook, in order to engage in an informed dialogue about their role and responsibility in the context of European initiatives such as the General Data Protection Regulation (GDPR).
On the other hand, the EU intends to prioritize the geopolitical implications of new digital technologies, focusing in particular on authoritarian states such as China and Russia, which employ digital technologies for surveillance and repression.
The EU is working to ensure that technologies based on artificial intelligence (AI) are safe before they reach the market, and it is increasingly standing up to the world’s tech giants. This matters not only politically and legally, but also economically.
With an “Ethics Inside” seal of approval, Europe could position itself as a serious competitor to the two dominant AI powers, China and the USA: China’s AI development is primarily state-driven, while America’s is driven by private industry.
AI Made in Europe: Securing the technological future with a quality seal
To achieve this goal, the EU is now introducing special “crash test” systems designed specifically for artificial intelligence.
Through this process, EU regulators aim to ensure that innovations and technologies are safe before they reach the market. This was announced by Lucilla Sioli, the EU’s Director for Artificial Intelligence and Digital Industries, at the launch event in Copenhagen in June 2023.
Four permanent test facilities in Europe
She said the European Union has already set up four permanent testing and experimentation facilities across Europe to ensure that innovations powered by artificial intelligence are safe before they reach the market. According to Bloomberg, the EU has invested around $240 million (€220 million) in the project.
What does “trustworthiness” mean for an artificial intelligence?
When developing AI systems, potential risks must be assessed continuously. AI models should perform their intended functions reliably and predictably. The methodology, the training data, and the algorithms must therefore be scrutinized for errors that could endanger or discriminate against individuals.
For this purpose, AI models are tested for robustness and resilience, to ensure that they work reliably and without error even under difficult conditions.
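To make this concrete, here is a minimal sketch of what such a robustness test might look like, written in Python. Everything in it is hypothetical: "model" stands in for an arbitrary image classifier, and Gaussian pixel noise stands in for "difficult conditions" such as sensor noise or poor lighting; the EU's actual test catalogues are of course far more extensive.

```python
import numpy as np

def robustness_check(model, images, noise_levels=(0.01, 0.05, 0.10)):
    """Measure how stable a classifier's predictions are under input noise.

    `model` is any callable mapping a batch of images (pixel values in
    [0, 1]) to class labels. For each noise level, the inputs are
    corrupted and we report the fraction of predictions that survive
    unchanged: a model that flips its answer under mild noise is
    unlikely to work reliably under difficult real-world conditions.
    """
    baseline = model(images)
    stability = {}
    for sigma in noise_levels:
        noisy = np.clip(images + np.random.normal(0.0, sigma, images.shape), 0.0, 1.0)
        stability[sigma] = float(np.mean(model(noisy) == baseline))
    return stability

# Toy demonstration with a stand-in "model": a simple brightness threshold.
rng = np.random.default_rng(0)
images = rng.random((200, 32, 32))                        # 200 fake grayscale images
model = lambda x: (x.mean(axis=(1, 2)) > 0.5).astype(int)
print(robustness_check(model, images))                    # e.g. {0.01: 1.0, 0.05: 0.97, ...}
```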
Fairness and non-discrimination are just as important: AI applications must not reinforce unintended prejudices or inequalities. That is why AI models are also tested for fairness, to ensure that they guarantee equal treatment and equal opportunities. This applies both to the training data, which can carry historical prejudices and systematic disadvantages into the AI algorithms, and to the algorithms themselves.
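The sketch below shows, again in deliberately simplified Python and with hypothetical names, one way such a fairness test could be framed: y_pred holds the model's decisions (1 = favourable outcome), y_true the actual outcomes, and groups a protected attribute. Large gaps between groups in either metric would flag potential discrimination.

```python
import numpy as np

def fairness_report(y_true, y_pred, groups):
    """Compare two common fairness metrics across demographic groups.

    positive_rate: share of favourable decisions per group
                   (demographic parity)
    tpr:           share of genuinely deserving cases that were
                   approved (equal opportunity)
    """
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        deserving = mask & (y_true == 1)
        report[g] = {
            "positive_rate": float(y_pred[mask].mean()),
            "tpr": float(y_pred[deserving].mean()) if deserving.any() else float("nan"),
        }
    return report

# Synthetic example: a model that is systematically stricter with group "B".
rng = np.random.default_rng(1)
groups = rng.choice(["A", "B"], size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = np.where(groups == "A", y_true, y_true & rng.integers(0, 2, size=1000))
print(fairness_report(y_true, y_pred, groups))
```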
AI: Simplification and new dangers
Artificial intelligence has simplified our daily lives in many ways and is ubiquitous in areas such as healthcare, finance, and government and corporate decision-making. But while AI brings many benefits, there are also dangers to consider, and wherever AI is used, measures to ensure its safety are essential.
The importance of AI safety using the example of a mushroom detection app
The reliability of an AI app for mushroom detection is vitally important, because misidentifying a mushroom can lead to poisoning and life-threatening situations. For this reason, the app must be carefully developed and trained on the right data. One important requirement is that the app must be able to recognize poisonous mushrooms even when they differ from the mushrooms in its training data. Among other things, a review of the app will examine whether the training data contains a sufficient number of mushroom photos and whether the AI reliably and correctly identifies each mushroom regardless of its surroundings.
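One widely used safety pattern for such an app is to let the model abstain: unless it is extremely confident that the photo shows a known edible species, the app refuses to call a mushroom edible. This also covers mushrooms unlike anything in the training data, since unfamiliar inputs typically produce low confidence. The sketch below is purely illustrative; the species list, threshold, and output format are invented for the example.

```python
def classify_mushroom(class_probabilities, edible_threshold=0.99):
    """Safety-first decision rule for a hypothetical mushroom classifier.

    class_probabilities maps species names to predicted probabilities.
    A wrong "edible" can be fatal, so the rule only answers "edible"
    when a single known edible species clearly dominates; any doubt,
    including unfamiliar-looking mushrooms, defaults to a warning.
    """
    EDIBLE_SPECIES = {"chanterelle", "porcini"}  # illustrative whitelist
    species, confidence = max(class_probabilities.items(), key=lambda kv: kv[1])
    if species in EDIBLE_SPECIES and confidence >= edible_threshold:
        return f"likely {species} (edible)"
    return "not identified with certainty: do NOT eat"

# A confident match passes; an ambiguous one is rejected.
print(classify_mushroom({"chanterelle": 0.995, "jack-o'-lantern": 0.005}))
print(classify_mushroom({"chanterelle": 0.60, "jack-o'-lantern": 0.40}))
```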
EU test methods and test catalogues
The EU is continuously working on the further development of test methods and test catalogues to increase confidence in AI technologies. This is to ensure that the use of AI is ethical, responsible and free from discrimination.
Trustworthy AI development through AI crash test stations
From 2024, crash test systems and facilities will be commissioned to ensure sustainable and trustworthy AI development. These dedicated spaces will enable AI technology providers to test AI and robotics in sectors such as manufacturing, healthcare, agriculture and food, and urban environments. The AI crash test stations are intended not only to boost consumer confidence but also to pave the way for a thriving AI industry in Europe, allowing European companies to offer their AI products to the global market with a quality seal of approval.
With “Ethics Inside” to a leading EU role in AI development?
The EU reaffirms its determination to be a world leader in the development of ethical and safe AI solutions. This commitment goes hand in hand with the goal of not simply ceding the field in Europe to either the state-driven AI innovations of China or the dominance claims of the US tech giants.
The EU is committed to its own standards of quality, safety and ethics and strives to become a pioneer in AI development and application.
It will be exciting to see whether the EU can achieve these goals and strengthen its position as a leading force in AI technology.