AI-based Defense Systems – How to Design Them Responsibly?
In order to protect their common heritage of culture, personal freedom, and the rule of law in an increasingly fragile world, democracies must be able “to fight at machine speed” if necessary. For this reason, digitization in defense cannot be confined to logistics, maintenance, intelligence, surveillance, and reconnaissance, but must equally enable responsible weapons engagement.
“The more lethal and far-reaching the effects of weapons are, the more necessary it is that the people behind the weapons know what they are doing,” observes Wolf von Baudissin (1907-1993), the visionary architect of the Adenauerian Bundeswehr. “Without the commitment to the moral realms, the soldier threatens to become a mere functionary of violence and a manager.” Thoughtfully, he adds: “If this is only seen from a functional point of view, i.e., if the goal to be achieved in any case ranks above human beings, armed forces will become a danger.”[1]
In this spirit and with a focus on the European Future Combat Air System (FCAS), we consider aspects of ethically aligned systems engineering for AI-based weapon systems that might enjoy broader consent within the international community.[2] In the FCAS program, the largest European armament effort since WW II, manned jets are elements of a networked system of systems, where unmanned ‘remote carriers’ protect the pilots and assist them on combat missions.
In view of ongoing debates about an alliance that has ensured more than 75 years of peace in Europe, a German Defense Minister has emphasized: “The idea of strategic autonomy for Europe goes too far if it is taken to mean that we could guarantee security, stability and prosperity in Europe without NATO and without the US. That is an illusion.”[3] According to this understanding, FCAS is aligned with NATO’s goals.
“For knowledge itself is power…”
Francis Bacon’s (1561-1626) statement that the end of all knowledge is the achievement of power, at least in Thomas Hobbes’ (1588-1679) interpretation,[4] marks the very beginning of the modern project. It seems remarkable that Josep Borrell (b. 1947), High Representative of the EU for Foreign Affairs and Security Policy, has recently referred to Hobbes in this sense.[5] With the advent of Artificial Intelligence (AI) in the defense domain, a powerful technology meant for the benefit of humanity may turn against it. Instrumental knowledge in the form of AI-based weapon systems is thus a glaring example of the modern crisis. There is a pressing need for ethical knowledge, an understanding of human nature and its end, to complement Baconian-type knowledge.
There is an “ecology of man,” a German pope once reminded the German parliamentarians: “Man does not create himself. He is intellect and will, but he is also nature, and his will is rightly ordered if he respects his nature, listens to it and accepts himself for who he is, as one who did not create himself.”[6] Since man is responsible for himself and others, any ethically aligned engineering must be anthropocentric. This is most pressingly true for AI in defense. Digital ethics and a corresponding ethos and morality are thus essential “soft skills” to be built up systematically in parallel to technical excellence. Leadership philosophies and personality-development plans of armed forces should therefore encourage ethical competence for designing and using AI-based defense systems.
How can the defense research community technically support the responsible use of the great power we are harvesting from AI? In considering this question, we will examine documents of the German Bundeswehr spanning the period from its founding in the 1950s, when the term AI was actually coined, to its most recent statements on the matter. Since the German post-WW II armed forces have learned the lessons of the totalitarian tyranny of 1933-1945 and the horrors of “total war,”[7] characterized by the high technology of its time, they are presumably conceptually prepared to master the digital challenge. This is all the more true since the Bundeswehr is a parliamentary army enshrined in the German Grundgesetz, acting exclusively in accordance with specific mandates from the Bundestag, i.e., on behalf of the German people.
AI in defense is intended to unburden military decision-makers from routine or mass tasks and to tame complexity so that soldiers can focus on what only persons can do, i.e., consciously perceive a situation and act responsibly. The importance of automation for the Bundeswehr was recognized as early as 1957, when von Baudissin wrote that thanks to automation, “human intelligence and manpower will once again be able to be deployed in the area that is appropriate to human beings.”[8] From this point of view, armed forces do not face fundamentally new challenges as users of AI-based systems, since technological development has long extended the range of human perception and action.
Ethically Aligned Artificial Intelligence
According to documents of the German Ministry of Defense, the importance of AI lies “not in the choice between human or artificial intelligence, but in an effective and scalable combination of human and artificial intelligence to ensure the best possible performance.”[9] This statement forms the basis for research questions for systems engineering aimed at fulfilling a fundamental military requirement: “Characteristic features of military leadership are the personal responsibility of decision-makers and the implementation of their will in every situation,” according to the ‘Concept of the Bundeswehr’, the official text defining the mission of the German Armed Forces.[10]
For the very first time in Germany, an intellectual struggle over the technical implementation of ethical and legal principles accompanies a major defense project from the outset. The goal of the Working Group on Responsible Use of New Technologies in an FCAS is to operationalize ethically aligned engineering.[11] Readiness to defend ourselves against highly armed opponents must not only be technologically credible, but also correspond to the consciously accepted “responsibility before God and man, inspired by the determination to promote world peace as an equal partner in a united Europe,” as the very first sentence of the German Grundgesetz proclaims.
Anyone who thinks about ethics and law must become aware of the ends of right action. “Artificial things [e.g., AI], according to Aristotle, are indeed characterized by the fact that they themselves consist of a ‘what’ and a ‘what of,’” explains the philosopher Robert Spaemann (1927-2018). “Their ‘how’ and ‘why’ is not in them,” he continues, “but in the person who made them or uses them. Natural things, on the other hand, are characterized by the fact that their ‘what’ and ‘to-what-end’ fall into one in themselves. Their end is the form of the thing itself.”[12]
What ends guided Konrad Adenauer (1876-1967), the first post-WW II Chancellor, when he led Germany into NATO? Perhaps examining their timeless value can shape the ‘how’ and ‘to-what-end’ of technically designing an FCAS as well. For Adenauer, NATO was a community of free nations determined “to defend the common heritage of Western culture, personal freedom and the rule of law.” Therefore, and “in view of the political tensions in the world,” its ends “correspond completely to the natural interests of the German people, who […] long for security and peace more than almost any other nation.” However, NATO’s mission to promote common defense must be embedded in a broader concept, namely, “the promotion of the general welfare of the signatory nations” and the preservation of “their common cultural heritage, cooperation in economic and cultural matters.” Germany therefore pledged to “devote all its energies to ensuring that human freedom and human dignity are preserved.”[13]
Innere Führung as a Guiding Principle
Innere Führung, lit. “leadership from within,” is fundamental to the Adenauerian Bundeswehr, which was founded in this spirit. Its ‘mission statement,’ updated in 2018, stimulates a philosophical discussion by proclaiming Innere Führung to be “the underlying philosophy of leadership valid for the German soldiers,” together with the principle of the Staatsbürger in Uniform, lit. “citizen in uniform.”[14] The term Innere Führung is not easily translated. According to von Baudissin, its overall goal is to reconcile the functional conditions of operational armed forces with the principles of a democratic constitutional state.[15] In other words, it comprises all aspects of military leadership, with special consideration of the individual and social aspects of the person as such. This leadership philosophy closely links the personality development of German soldiers with the notion of the Common Good, deeply rooted in Western moral thinking.
In this philosophy, the ends to be reached, specified by superior leaders, play an essential role. The ends are usually combined with a time frame and the required powers and means. Within this framework, subordinate leaders pursue and reach the ends independently, such that individual responsibility as well as judgment and decisiveness are important characteristics required of soldiers. However, while soldiers are largely free in how they fulfill their mission, they must inform the higher echelons about its status to ensure oversight and allow for necessary corrections.
Correspondingly, digitalization in defense can only be successful if it is aligned with Innere Führung. The digital upgrading of the armed forces therefore leads to a timeless question: How can ethical, legal, and social compliance be guaranteed? Any answer leads to two distinct research questions for defense scientists and engineers:
1. How can we intellectually and morally continue to be the masters of our tools?
2. Which design principles of systems engineering facilitate responsible use of AI?
FCAS Ethical AI Demonstrator
Algorithms drive an information cycle by processing massive amounts of data that can no longer be handled by humans. In this way, they cognitively assist the minds of military decision-makers in understanding complex, spatially distributed, and variable situations, and volitively support “the enforcement of their will in every situation”[16] in terms of appropriate and responsive action.
The concepts of mind and will and, therefore, of consciousness and responsibility bring natural beings into view that are ‘somebody’ and not ‘something,’ i.e., persons. Cognitive and volitive assistance systems, on the other hand, no matter the degree of technical sophistication they attain, are and will always be ‘something.’ It seems important to stress the dichotomy ‘something vs. somebody’ to counter both the exaggerated expectations and the excessive fears that seem rampant in the public concern about automation in the military. Pop culture reveals a psychogram of modern man with pseudo-religious hopes and gloomy forebodings: the Image of Man, the very “conception of humankind,” is increasingly shaped according to the model of machines, while human and even superhuman qualities are attributed to machines.
In Germany, but also in European partner nations, there is a basic consensus that the final decision on the use of armed force must be reserved for a human decision-maker. In view of automation and the use of AI methods in warfare – especially by potential opponents – it must be clarified on which technically realizable basis a human operator can ultimately make balanced, consciously considered decisions regarding the use of armed force (meaningful authorization). This is particularly pressing for AI algorithms such as Deep Learning (DL) that have the character of a ‘black box’ for the user.
For this reason, it is important, on the one hand, to make AI-based findings comprehensible and explainable to human decision-makers and, on the other, to prevent soldiers from confirming recommendations for action without weighing them themselves, simply on the basis of some kind of “trust” in the AI-based system. Especially for FCAS, engineers must develop comprehensible and explainable methods. The overall goal of the ‘FCAS Ethical AI Demonstrator,’ a training application currently under development, is to let soldiers experience the use of AI in a military scenario, with all associated aspects of psychological stress, as realistically as possible. Selected features of the AI Demonstrator, such as automated target recognition for decision-making in air combat, enable interaction with an actual AI developed for military use in order to convey a realistic view of the possibilities, limits, ethical implications, and engineering demands of this technology in practice.
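One way to provide such explainability is model-agnostic occlusion sensitivity: mask parts of a sensor image and observe how the classifier’s confidence changes. The following minimal Python sketch illustrates the idea under stated assumptions; the stub classifier and all parameters are illustrative placeholders for this article, not components of the actual FCAS demonstrator.

```python
import numpy as np

def classify(image: np.ndarray) -> float:
    """Stand-in for a trained target classifier: returns the confidence
    (0..1) that the image shows the tracked vehicle. A real system would
    use a certified, trained model here (hypothetical placeholder)."""
    h, w = image.shape
    return float(image[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4].mean())

def occlusion_map(image: np.ndarray, patch: int = 8) -> np.ndarray:
    """Occlusion sensitivity: gray out one patch at a time and record how
    much the confidence drops. Large drops mark the pixels the classifier
    actually relied on -- exactly the highlighting a confirmation dialogue
    can overlay on the magnified image."""
    base = classify(image)
    heat = np.zeros_like(image, dtype=float)
    for y in range(0, image.shape[0], patch):
        for x in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[y : y + patch, x : x + patch] = 0.0  # mask this patch
            heat[y : y + patch, x : x + patch] = base - classify(occluded)
    return heat  # to be rendered as a heatmap over the sensor image
```

Whatever XAI technique is chosen in practice, it must itself be testable and must not lull the operator into false confidence.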
First Steps Towards Realization
An exploratory study by industry, a start-up company, and Fraunhofer is currently paving the way towards such a demonstrator. Discussions with the German Air Force have clarified the scenarios considered. This way ahead is unprecedented in large European armament projects and is therefore still experimental in nature. Nevertheless, it is a ‘quantum leap’ in the literal sense: a tiny, indeed the smallest possible, step. But a real leap it is.
One of the missions envisaged for FCAS is the elimination of enemy air defense by remote carriers with electro-optical and signal intelligence sensors that collect data on positions of enemy air defense supporting equipment. The (simplified) steps in such a case proceed as follows:
- The user will detect, identify, and track enemy vehicles in different scenarios with AI support, controlling multiple sensor systems on a remote carrier.
- The AI system will graphically highlight relevant objects accordingly and enrich them with basic context information (e.g., type of detected vehicle, certainty level).
- The user, who is in the role of a virtual payload operator of the remote carrier flying ahead, has the task of recognizing and identifying all relevant objects.
- To facilitate the user’s ability to perform this task, optional confirmation dialogues will provide far more detailed information on each object recognized or preselected by the AI system.
This dialogue will enable the following:
- to call up a magnified image of the object in question to confirm the target visually, and to understand, by means of appropriate highlighting provided by Explainable AI (XAI), which elements of the tracked object have been recognized;
- to enhance sensor data fusion with additional data sources, to understand which sensor technology, if any, has “tipped the scales” for classification as a hostile object, and to visualize the corresponding levels of confidence for each sensor category;
- to check compliance with the rules of engagement for the object in question, insofar as a deterministic algorithm can provide support here, and to confirm the compliance so checked.
Taken together, such a dialogue should support an unambiguous identification of an object as hostile.
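What the deterministic backbone of such a confirmation dialogue might look like can be sketched in a few lines of Python. The sensor names, confidence values, and the simple two-sensor rule below are assumptions made for illustration, not requirements taken from the FCAS program.

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    object_id: str
    vehicle_type: str                      # e.g., "SAM launcher" (illustrative)
    confidences: dict[str, float] = field(default_factory=dict)  # per sensor

@dataclass
class RulesOfEngagement:
    """Deterministic, auditable ROE predicates -- no learning involved."""
    min_confidence: float = 0.9
    required_sensors: int = 2              # independent confirmations assumed

    def check(self, obj: TrackedObject) -> tuple[bool, list[str]]:
        reasons = []
        confirming = [s for s, c in obj.confidences.items()
                      if c >= self.min_confidence]
        if len(confirming) < self.required_sensors:
            reasons.append(f"only {len(confirming)} sensor(s) above "
                           f"{self.min_confidence:.0%} confidence")
        return (not reasons), reasons

# Illustrative use: the dialogue shows which sensor 'tipped the scales'.
obj = TrackedObject("RC-07/42", "SAM launcher",
                    {"electro-optical": 0.97, "signal intelligence": 0.88})
ok, reasons = RulesOfEngagement().check(obj)
decisive = max(obj.confidences, key=obj.confidences.get)
print(f"ROE check passed: {ok}; decisive sensor: {decisive}; {reasons}")
```

Because the check is deterministic, every authorization can be logged and audited, so responsibility remains with the human operator who confirms it.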
Responsible Controllability
Any responsible use of technology requires continuous controllability. In some applications, occasional malfunctions of AI and automation may have no consequences. In military use, however, rigorous requirements must be guaranteed, with all legal consequences. The use of technically uncontrollable technology is immoral per se. We stress selected aspects to be considered when designing AI-based weapon systems responsibly.
- The notion of ‘meaningful human control’ needs to be interpreted more broadly than the concept of ‘human-in/on-the-loop’ suggests. Formulations in official documents such as “For unmanned aerial vehicles, the principle of human-in-the-loop and thus the immediate possibility of operator intervention must be ensured at all times”[17] are misleading. A more precise specification should be seriously considered. More fundamental is ‘responsibility.’ The use of fully automated effectors on unmanned platforms may well be justifiable, even necessary in certain situations, if they are appropriately designed.
- Certification and qualification of AI-based automation are key issues. Robust military systems will comprise both data-driven and model-based algorithms, where data-driven AI could be controlled by model-based reasoning – ‘AI in the Box’ (see the sketch after this list). Predictable system properties, insensitivity to unknown effects, adaptivity to variable usage contexts, and graceful degradation must be verified in any certification and qualification process, as well as statistical testability and explainability, which are essential prerequisites for critical components. Finally, compliance with a code of conduct is to be guaranteed by design and proved in such a process.
- Finally, with a view to the working group on the Responsible Use of New Technologies in an FCAS, we suggest that comprehensive analyses of technical controllability and personal accountability accompany digitization projects in a publicly visible, transparent, and verifiable manner. Otherwise, the profound digital paradigm shifts and the large material efforts associated with AI in defense will hardly be implementable politically, societally, and financially.
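How model-based reasoning could keep data-driven AI ‘in the box’ may be hinted at with a small sketch: a learned tracker’s estimate is accepted only if it satisfies simple, verifiable kinematic constraints; otherwise the system degrades gracefully to a certified fallback. All function names, values, and limits below are hypothetical placeholders.

```python
import math

def neural_track_estimate() -> tuple[float, float]:
    """Stand-in for a data-driven (e.g., deep-learned) tracker output:
    estimated speed (m/s) and turn rate (deg/s) of a tracked object."""
    return 310.0, 45.0  # illustrative values only

def physically_admissible(speed: float, turn_rate: float) -> bool:
    """Model-based 'box': reject estimates violating simple kinematic
    constraints of the assumed vehicle class. Such checks are
    deterministic and hence statistically testable and certifiable."""
    MAX_SPEED = 700.0    # m/s, assumed envelope of the vehicle class
    MAX_LATERAL_G = 9.0  # assumed structural limit
    lateral_acc = speed * math.radians(turn_rate)  # a = v * omega
    return speed <= MAX_SPEED and lateral_acc <= MAX_LATERAL_G * 9.81

speed, turn = neural_track_estimate()
if physically_admissible(speed, turn):
    print("estimate accepted and forwarded")      # normal operation
else:
    print("estimate rejected, certified fallback engaged")  # graceful degradation
```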
Sapere Aude as the Key
Situation pictures are ‘fused’ from sensor and context data that never amount to an accurate reflection of reality. The data are always imperfect: inaccurate, ambiguous, unresolved, corrupted or deceptive, difficult to formalize, or even contradictory. Probabilistic algorithms, however, enable responsible action even on the basis of imperfect data. In many cases, reliable situation pictures can be inferred more precisely, more completely, and faster than humans could ever achieve. Nevertheless, these methods have their limitations, and decision-makers must be made aware of them.
Moreover, data integrity is fundamental to any use of AI: Are valid sensor and context data even available? Are they produced, distributed, verified, evaluated, and fused reliably? Do the inevitable deficits correspond to the underlying error models? In “naïve” systems, violated integrity easily turns data fusion into confusion and management of resources into mismanagement. Furthermore, algorithms always generate artifacts that do not exist in reality, or have ‘blind spots,’ i.e., fail to show what is actually there. In the case of cyber attacks, enemies may take over sensors or subsystems, which then produce deceptive data or trigger unwanted actions.
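A toy example shows how probabilistic fusion and an elementary integrity check interact: two independent range estimates are fused by inverse-variance weighting, but only after a chi-square consistency test; if the test fails, fusion is withheld and the operator is alerted. All numbers are illustrative.

```python
# Two independent range estimates (mean, variance) -- illustrative values.
m1, v1 = 1000.0, 25.0    # e.g., electro-optical sensor
m2, v2 = 1012.0, 100.0   # e.g., radar

# Integrity gate: are the estimates statistically consistent?
# Under the stated error models, (m1 - m2)^2 / (v1 + v2) is chi-square
# distributed with 1 degree of freedom; 3.84 is the 95% threshold.
nis = (m1 - m2) ** 2 / (v1 + v2)
if nis > 3.84:
    print(f"integrity alarm (NIS={nis:.2f}): do not fuse, alert operator")
else:
    # Inverse-variance (Bayesian) fusion of the consistent estimates.
    w1, w2 = 1.0 / v1, 1.0 / v2
    fused_mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    print(f"fused: {fused_mean:.1f} m, std {fused_var ** 0.5:.1f} m")
```

Real fusion engines are far more elaborate, but the principle stands: check integrity first, fuse second.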
“Mature” AI comprises the detection of such deficits, which is the basis for resilience toward hostile interference. The capability of ‘artificial self-criticism’ in this sense requires naturally intelligent critical capabilities on the part of decision-makers vis-à-vis AI. Otherwise, there is a danger of uncritical acceptance of what is offered and, ultimately, a refusal to actually bear responsibility. AI-based systems must therefore be technically designed in such a way that they train their users to be vigilant and convey to them how the machine solutions were created – AI must not ‘dumb down’ its users.
Only alert Natural Intelligence (NI) is able to assess plausibility, develop understanding, and ensure control. “The uncontrolled pleasure in functioning, which today is almost synonymous with resignation to technical automatism, is no less alarming [than the dashing, pre-technical feudal traditions] because it suggests the unscrupulous, maximum use of power and force,” von Baudissin observed in the 1950s.[18] These words ring true not only for the soldierly ethos. Do not all of us need a new enlightenment for dealing with AI in a mature, ethical, and naturally intelligent way, i.e., “man’s release from his self-imposed immaturity”? “Sapere aude – Have the courage to use your own intellect!”[19]
Hippocratic Oath – An Analogy?
Only if it is based on an Image of Man compatible with the responsible use of technology can digital assistance support morally acceptable decisions. “It is the responsibility of our generation, possibly the last to look back to pre-digital ages and into a world driven by artificial intelligence, to answer the question of whether we continue to recognize the integrity of the human person as a normative basis,” thoughtfully observes the political theologian Ellen Ueberschär (b. 1967).[20]
Keeping such an Image of Man in mind and providing Innere Führung towards this conception is a task that might be characterized as military pastoral care. It would be worth considering whether the swearing-in ceremony, which was considered indispensable when the Bundeswehr was founded, should be viewed with a fresh eye, in the spirit of the Hippocratic Oath, regarded as a symbol of a professional ethic committed to responsibility. For von Baudissin, it is “one of the essential tasks of the military clergy to point out the sanctity of the oath, as well as the vow, to show the recruit the seriousness of the assumption of his official duties on his own conscience, but at the same time also the limits set by God for everyone and also for this obligation.”[21]
In this spirit, Konrad Adenauer said farewell to the Bundeswehr in 1963 “as the most visible expression of the reconstruction of Germany, as the restoration of order, as proof of the integration into the front of free nations.” Two days before this speech, David Ben-Gurion (1886-1973), the first Prime Minister of modern Israel, had recognized Adenauer’s motives: “When I met Adenauer in New York three years ago, I obtained personal assurance of the correctness of his appreciation, and through the correspondence I conducted with him I realized his moral and political greatness.”[22]
Adenauer’s motives may also apply to the ‘why’ and ‘how’ of an FCAS: “Soldiers, if we had not created our armed forces, we would have lost freedom and peace long ago. So you, soldiers, through the work you have done, have in truth given and preserved peace for the German people.”[23] “It may even be that one day we will wish Adenauer back,”[24] wrote Heinrich Böll (1917-1985), awarded the Nobel Prize for Literature and patron of an influential political foundation in Germany. May the Adenauerian spirit guide us today in the Age of AI when balancing the required effectiveness of defense against nefarious enemies with a responsibility that does not betray the precious jewels of our “common heritage of Western culture, personal freedom and the rule of law.”
[1] W. von Baudissin, Soldat für den Frieden. Entwürfe für eine zeitgemäße Bundeswehr [Soldier for Peace. Drafts for a Contemporary Bundeswehr]. München: Pieper, 1969, p. 205.
[2] See also: W. Koch, On Digital Ethics for Artificial Intelligence and Information Fusion in the Defense Domain, IEEE Aerospace and Electronic Systems Magazine, vol. 36, no. 7, pp. 94-111, July 1, 2021, doi: 10.1109/MAES.2021.3066841.
[3] A. Kramp-Karrenbauer, Second Keynote Speech by German Federal Minister of Defense. Hamburg: Helmut Schmidt University, November 19, 2020. Online: https://www.bmvg.de/en/news/second-keynote-speech-german-minister-of-defense-akk-4503976.
[4] The phrase ipsa scientia potestas est first occurs in Bacon’s Meditationes Sacrae (1597) and was made popular by Hobbes, who was a secretary to Bacon as a young man. “The end of knowledge is power […]. Lastly, the scope of all speculation is the performing of some action, or thing to be done.” In Th. Hobbes, The English Works of Thomas Hobbes of Malmesbury, Volume I. London: John Bohn, 1839, p. 7. Online: https://archive.org/details/englishworkstho21hobbgoog.
[5] “All kinds of instruments are turned into weapons. […] We love the world of Kant, but must prepare to live in the world of Hobbes. Whether you like it or not.” In: Th. Gutschker, „Europa ist in Gefahr“ [Europe is in Danger], Frankfurter Allgemeine Zeitung, November 11, 2021. Online: https://www.faz.net/aktuell/politik/ausland/josep-borrells-neues-konzept-fuer-die-eu-verteidigungspolitik-17627660.html?GEPC=s3&premium=0xe00a4e4d822f8e58092c6c3b569f634c.
[6] Benedict XVI, Address to the German Parliament. Berlin: Bundestag, September 22, 2011. Online: https://www.bundestag.de/parlament/geschichte/gastredner/benedict/speech.
[7] “I ask you: Do you want total war? If necessary, do you want a war more total and radical than anything that we can even imagine today?” From the Sportpalast speech of Nazi propaganda minister Joseph Goebbels (1897-1945) on February 18, 1943.
[8] Baudissin, p. 174.
[9] Erster Bericht zur Digitalen Transformation [First Report on Digital Transformation]. Berlin: MoD, 10/2019. Online: https://www.bmvg.de/resource/blob/143248/7add8013a0617d0c6a8f4ff969dc0184/20191029-download-erster-digitalbericht-data.pdf.
[10] „Kennzeichnende Merkmale militärischer Führung sind […] die persönliche Verantwortung der militärischen Führerin bzw. des militärischen Führers und die Durchsetzung ihres bzw. seines Willens in jeder Lage.“ [Characteristic features of military leadership are […] the personal responsibility of the military leader and the implementation of his or her will in every situation.] In Konzeption der Bundeswehr [Concept of the Bundeswehr]. Berlin: MoD, 2018, p. 83. Online: https://www.bmvg.de/resource/blob/26544/9ceddf6df2f48ca87aa0e3ce2826348d/20180731-konzeption-der-bundeswehr-data.pdf.
[11] The Responsible Use of New Technologies in a Future Combat Air System. Online: www.fcas-forum.eu.
[12] R. Spaemann et al., Natürliche Ziele [Natural Ends]. Stuttgart: Klett-Cotta, 2005, p. 51.
[13] K. Adenauer, Aufnahme der Bundesrepublik Deutschland in die NATO [Admission of the Federal Republic of Germany to NATO]. Paris: Palais de Chaillot, May 9, 1955. Online: https://www.konrad-adenauer.de/quellen/reden/1955-05-09-rede-paris.
[14] Konzeption der Bundeswehr, p. 50.
[15] H. Dierkes, Ed., Global Warriors? German Soldiers and the Value of Innere Führung, Ethics and Armed Forces, no. 2016/1, Jan. 2016, p. 46. Online: http://www.ethikundmilitaer.de/en/full-issues/2016-innere-fuehrung/.
[16] Konzeption der Bundeswehr, p. 84.
[17] Militärische Luftfahrtstrategie 2016 [Military Aviation Strategy]. Berlin: MoD, 2016, p. 23. Online: https://www.bmvg.de/resource/blob/11504/3e76c83b114f3d151393f115e88f1ffb/c-19-01-16-download-verteidigungsministerium-veroeffentlichtmilitaerische-luftfahrtstrategie-data.pdf.
[18] Baudissin, p. 180.
[19] I. Kant, An Answer to the Question: What is Enlightenment? (1784). Online: http://donelan.faculty.writing.ucsb.edu/enlight.html.
[20] E. Ueberschär, Von Friedensethik, politischen Dilemmata und menschlicher Würde – eine Skizze aus der Perspektive theologischer Ethik [Of Peace Ethics, Political Dilemmas and Human Dignity – a Sketch from the Perspective of Theological Ethics], Opening Speech at the first meeting of the working group The Responsible Use of New Technologies in a Future Combat Air System. Bad Aibling, September 27, 2019. Online: https://www.fcas-forum.eu/publications/Skizze-zur-theologischen-Ethik-Ueberschaer.pdf.
[21] Baudissin, p. 181.
[22] D. Ben-Gurion, The Greatness of Adenauer, The Jerusalem Post, October 14, 1963, p. 6. Cited in D. and W. Koch, Konrad Adenauer. Der Katholik und sein Europa [The Catholic and His Europe]. Kißlegg: fe, 3rd edition 2018, p. 239.
[23] K. Adenauer, Ansprache in Wunstorf, Germany, Parade of the Bundeswehr, October 12, 1963. Online: https://www.konrad-adenauer.de/quellen/reden/1963-10-12-ansprache-wunstorf.
[24] „Es mag sogar sein, dass wir uns noch nach Adenauer sehnen werden [It may even be that one day we will wish Adenauer back].“ In: H. Böll, „Keine so schlechte Quelle [Not such a bad source],“ Der Spiegel, December 12, 1965, nr. 49/1965, p. 155.
This text is published as part of the German-Israeli Tech Policy Dialog Platform, a collaboration between the Israel Public Policy Institute (IPPI) and the Heinrich-Böll-Stiftung.
The opinions expressed in this text are solely those of the author(s) and do not necessarily reflect the views of the Israel Public Policy Institute (IPPI) and/or the Heinrich-Böll-Stiftung.