German Ethics Council on Artificial Intelligence – Countercurrents

Source: AI plus human input, i.e. the author's answers to AI

Germany has a troubled relationship with morality. On the one hand, it is home to moral philosophers such as Kant, with his nuanced imperatives, as well as the rather more elevated ideas of Hegel’s Sittlichkeit.

Yet on the other hand, Germany is also the country that committed the most immoral crime in history – Auschwitz, as memorialized in Claude Lanzmann’s nine-hour documentary Shoah.

Germany, unlike France, never had a real revolution – only a revolution in philosophy. Perhaps that is why it remains preoccupied with philosophy, and even more so with moral philosophy. Not surprisingly, almost all problems – not just those of current AI and ChatGPT – are treated as ethical problems.

Fittingly, Germany has a specialized institution that deals with ethical questions, aptly named the Ethics Council, or Ethikrat. Recently, the Ethics Council released its findings on artificial intelligence (AI).

Institutionally, the Ethics Council is an independent assembly of experts that addresses questions of ethics, society, science, medicine and law. It assesses outcomes for individuals and society, prepares opinions and issues recommendations while encouraging public debate. It reports to the Bundestag, Germany’s parliament. Its recommendation on AI was issued on 20 March 2023.

One of the core ideas of the Ethics Council is that AI should not replace humans. Overall, the Ethics Council investigates human-versus-AI relationships in schools, medicine, online platforms and administration.

Interestingly, in a country that started two world wars, bombed Serbia twenty years ago, and more recently supplied Ukraine with “Made in Germany” Leopard tanks, the Ethics Council has surprisingly little – in fact, nothing – to say about AI-powered weapons.

The council believes that AI can penetrate almost every aspect of human existence, from shopping to work, from crime to recruitment and beyond. Its latest 287-page report – Man and Machine – says that artificial intelligence must be used to expand human development; it must not diminish it. These are the guiding principles for its ethical assessment of interactions between humans and AI-controlled technology.

Necessarily, this involves aspects of social justice and power. It claims AI applications cannot replace human intelligence and responsibility. Its findings are based on philosophical and anthropological concepts that are important to the relationship between humans and machines. The Council named four aspects of human-machine interaction:

  1. intelligence,
  2. logic,
  3. human action, and
  4. responsibility.

Still, the Council believes AI will present both opportunities and risks. AI has already delivered, in many cases, clearly positive results in the sense of expanding the possibilities of human authorship. At the same time, however, there is always the possibility that it will diminish human development.

On the negative side, the use of digital technology can create dependency and even pressure to adapt to AI. Worse, ideas previously established by humans could, potentially, be overturned by AI. For the Council, one of the central ethical questions in their assessment is,

how the transfer of operations previously performed by humans to technological systems affects the prospects of other people – especially those affected by decisions made by AI.

Consequently, the AI-to-human transfer needs to be transparent, guided by two questions: for whom does an AI application create opportunities and risks? And will AI augment or diminish human authorship? For the Ethics Council, this means that aspects of social justice and power are always involved.

The council’s 26 members also discussed whether the conditions for human authorship and responsible action are expanded or diminished by the use of AI. According to the council, artificial intelligence can certainly be used in medical fields, for example in diagnostics and therapy recommendations.

However, the Ethics Council also insists that the highest standards of data protection and privacy be upheld and that strict due-diligence obligations be met. It argues that vulnerabilities must be identified at an early stage when deploying AI programs. At the same time, AI-supported results must be subjected to a plausibility check.

Additionally, the Ethics Council argues that once specific AI systems are established in the medical field, their ethically correct application should be integrated into medical education as soon as possible.

It warns that the complete replacement of medical experts by AI systems could jeopardize patient well-being. The Council strongly cautions against giving AI technology too much influence, for example in the medical sector. The use of AI should not lead to a further devaluation of the medical profession or a reduction in medical staff.

At the same time, the council is open to the use of AI-based software in schools, for example, to assess learning progress, identify common mistakes made by students and outline students’ strengths and weaknesses. As a result, AI software can be used to recognize students’ learning profiles and adapt learning content accordingly.

In addition, teachers’ subjective impressions can – potentially – be complemented by data-based findings that better address a student’s particular needs. However, the Council remains concerned about the meaningfulness of such data collection. Potentially, the data could be misused to screen and stigmatize individual students.

The Council is skeptical of claims that AI can adequately, accurately and reliably measure students, since such measurement can create systematic distortions. Besides, digitalization is not an end in itself. AI in schools should therefore not be driven by a purely technological approach – known as techno-solutionism.

Instead, AI should be driven by the fundamental concept of learning, which includes the formation of personality – what the German philosopher Adorno called Mündigkeit: self-reflective and critical maturity.

It follows from this that if AI systems are to be used, they must be incorporated into teacher training and education. Outside and inside schools, the council is clearly in favor of regulating online platforms, known as (anti)-social media.

Understanding the ongoing shift of public communication to online platforms, the Council strongly advocates stronger regulation of AI operators, i.e. corporations. At the same time, it warns of an AI threat to pluralism and independent opinion.

It warns that the selective presentation of information by algorithms – based on users’ personal preferences as well as the economic (read: profit) interests of platform operators (read: corporations) – promotes fake news, hate speech, and the rapid proliferation of personal insults. There is a distinct possibility that AI will contribute to the creation of filter bubbles and echo chambers.

As a result, there is a danger that what the Council calls “relevant” decisions are made on the basis of very limited information. This comes on top of deliberate manipulation as well as misinformation and confusion.

In other words, AI has the ability to reduce the freedom to seek high-quality information—now downgraded by an invisible algorithm. At the same time and as a result, AI can easily lead to what the Council refers to as the “brutalization” of online political discourse. The Council also argues that three already existing regulations:

  1. Germany’s State Media Treaty;
  2. Germany’s famous Network Enforcement Act (NetzDG); and
  3. the EU’s Digital Services Act

do not go far enough in regulating online platforms. As a consequence, the Council argues, platforms should have to make content available without personalized tailoring. Perhaps more importantly, online platforms should be required to display “oppositional positions” that run counter to users’ own preferences.

In using AI, any form of discrimination should be avoided and people’s right to object should be protected. It also demands that those using AI should ensure the highest level of transparency, employ only trained staff and raise public awareness of potential dangers.

For the use of AI-enabled systems in law enforcement and policing, the opportunities and risks of AI must be carefully weighed and brought into what the Council calls a suitable relationship. This means there needs to be a “social discussion” about the relationship between AI, human freedom and security.

Above all, Germany’s Ethics Council fundamentally opposes any technological development that does not comply with its three ethical principles: first, AI must expand human development; second, AI must not diminish human development; and third, AI must not replace human beings.

Thomas Klikauer, author of German Conspiracy Fantasies – now on Amazon!
