Artificial intelligence (AI) has begun to play a significant role in warfare, transforming both strategy and tactics on the battlefield. The most notable applications, and the issues they raise, can be summarized as follows:
AI Applications in Armed Conflicts:
- Autonomous Drones: Equipped with AI to perform reconnaissance, surveillance, and, in some cases, combat missions without direct human intervention.
- Data Analytics and Prediction: AI tools that collect and analyze large volumes of data to predict enemy movements or detect threats more accurately.
- Cybersecurity and Cyberattacks: Use of AI to protect critical military systems against cyberattacks or even to carry out sophisticated counterattacks.
- Simulations and Training: AI technologies used to train soldiers through realistic and adaptive simulations.
- Military Robotics: AI-guided land or maritime robots that can perform logistics, reconnaissance, or combat support tasks.
Ethical Challenges and Controversies:
- Autonomy in Lethal Decisions: The possibility that AI-based systems could make life-or-death decisions without human intervention raises serious ethical concerns.
- Technological Escalation: An AI-driven arms race could increase the risk of more destructive global conflicts.
- Privacy and Control Dilemmas: Advanced surveillance and AI-based spying tools may compromise individual rights and national sovereignty.
International Humanitarian Law (IHL) establishes restrictions on the means and methods of combat during hostilities. Although IHL initially did not address the challenges posed by artificial intelligence (AI) in this area, today the rapid evolution of this technology, along with algorithms and their emerging military use, poses a significant challenge to humanitarian law. This challenge manifests itself in three key areas: technical, legal, and ethical.
It is true that AI, in its current state, allows algorithm-based computer programs to perform tasks in complex and uncertain environments, often with greater accuracy than humans. However, it is critical to recognize that no technology can make a machine act like a human, who has the capacity to judge the legality of an action and decide not to follow a programmed course, always prioritizing the protection of victims.
States must implement verification, testing, and monitoring systems as part of the process for establishing and enforcing limitations or prohibitions, in accordance with the essential principles of distinction and proportionality established by IHL for the use of weapons in armed conflicts, whether international or not. From both a legal and ethical perspective, the human person must remain at the center of this issue, as responsibility for the use of force cannot be delegated to weapons systems or algorithms and remains an inherent human responsibility.
INTRODUCTION
In recent decades, artificial intelligence (AI) has experienced significant growth in various fields, including the development of military technologies that can be used in armed conflict. This evolution raises questions about concepts such as attribution, control, and responsibility, which are closely related to human beings and to the obligation of States to respect and enforce International Humanitarian Law (IHL) and Human Rights. Both normative frameworks aim to protect human life and dignity, both in war and peacetime. This responsibility extends to the development, acquisition, use, or transfer of AI-based military technology. The purpose of this study is to identify, first, the special capabilities and technological advances that AI brings to the context of warfare, as well as the potential adverse effects that could arise in relation to the application of IHL rules.
To begin, I will provide a brief introduction to the rules of International Humanitarian Law (IHL) that regulate hostilities. In essence, there are three fundamental principles that must be respected during the conduct of hostilities: distinction, proportionality, and precaution in attack, so as to avoid causing superfluous injury or unnecessary suffering.
Secondly, the concept of artificial intelligence (AI) and its evolution will be discussed, characterized by the fusion of image recognition, natural language processing, and neural networks. I will then present the applications of these technologies in armed conflicts, as well as their use in other fields, such as humanitarian assistance.
Finally, an analysis will be made of the application of IHL to this new technology, highlighting the need to focus attention on the responsibilities and obligations of human beings in decisions related to the use of armed force.

1. IHL AND THE CREATION OF TOOLS AND TECHNIQUES FOR THE MANAGEMENT OF ARMED CONFLICTS
International Humanitarian Law (IHL) constitutes a set of legal norms that regulate armed conflicts, with the purpose of limiting the methods and means of combat and protecting the victims, property, and environment that may be affected. The "means" refer to weapons and their use, while the "methods" encompass the conduct of participants in hostilities. According to the Swiss jurist Jean Pictet, international humanitarian law is the set of international legal provisions, whether written or customary, that ensure respect for human dignity. This framework includes Hague Law, which establishes the rights and duties of belligerents in war and limits the choice of means of combat, and Geneva Law, which focuses on the protection of combatants who are no longer taking part in combat and of persons not participating in hostilities. The year 1864 is considered the birth of this law, marked by the adoption of the first multilateral instrument of International Humanitarian Law: the Geneva Convention of August 22, 1864, which sought to improve the condition of wounded military personnel in the field.
Limitations and prohibitions on the use of certain weapons have roots that date back several centuries. Matthijs Maas notes that "the advancement and proliferation of new military technologies have enabled unprecedented brutality in various systematic wars, which has driven significant developments in international law." The earliest rules regulating warfare were established to control the use of specific weapons, with the goal of avoiding disproportionate harm and suffering. In the 19th century, the 1868 St. Petersburg Declaration, which sought to prohibit the use of certain projectiles in armed conflicts, established for the first time that the only legitimate objective of war was "to weaken the military forces of the enemy," and that this objective would be exceeded if weapons were used that caused unnecessary suffering to wounded combatants or rendered their death inevitable, their use being considered contrary to the laws of humanity (St. Petersburg Declaration, November 29/December 11, 1868, Preamble).
Based on the principles of distinction and precaution in attack, and on the prohibition of superfluous injury and unnecessary suffering, International Humanitarian Law (IHL) prohibits, both conventionally and customarily, the use of indiscriminate weapons—that is, those that cannot be directed at a specific military objective or whose effects cannot be limited, and which therefore strike military objectives, civilians, and civilian objects without distinction. This prohibition includes weapons systems that, due to their technology and the purpose for which they were designed, can be expected to cause excessive collateral damage to the civilian population.
These principles were already applied in the conduct of hostilities before the formalization of IHL and have served as the basis for several treaties regulating the use of specific weapons. Some weapons have been explicitly prohibited by international conventions, such as: expanding (dum-dum) bullets (Hague Declaration, 1899); biological weapons (1972 Convention on the Prohibition of Bacteriological and Toxin Weapons); chemical weapons (Geneva Protocol, 1925, and Chemical Weapons Convention, 1993); blinding laser weapons (Protocol IV to the Convention on Certain Conventional Weapons, 1995); anti-personnel mines (1997 Anti-Personnel Mine Convention); cluster munitions (2008 Convention on Cluster Munitions); and nuclear weapons (2017 Treaty on the Prohibition of Nuclear Weapons). The provisions of these international agreements are based on the need to balance military exigencies with humanitarian considerations.
States do not have an unlimited choice of means or methods of combat, either because certain means are explicitly prohibited or because their destructive capacity is likely to cause extreme human suffering and "damage greater than would be unavoidable in the pursuit of legitimate military objectives" (International Court of Justice, Legality of the Threat or Use of Nuclear Weapons (Advisory Opinion), 1996, para. 97). Furthermore, the prohibitions on the use of weapons that cause superfluous injury or unnecessary suffering, as well as those whose effects are indiscriminate, are considered rules of customary law and are therefore applicable to all States.
In any case, pursuant to Article 36 of Additional Protocol I, relating to the Protection of Victims of International Armed Conflicts, States have the responsibility to determine, when studying, developing, acquiring, or adopting a new weapon, whether its employment would, in some or all circumstances, be prohibited by the rules of international humanitarian law (IHL), whether conventional or customary.
This places a restriction on the significant technological advances of recent decades, including artificial intelligence, since all new warfare technology must be used in compliance with applicable IHL rules. Article 36 of Additional Protocol I of 1977—which requires each State Party to verify whether the means or methods of combat it studies, develops, acquires, or adopts are compatible with the rules of international law—extends this obligation of legal review to all new weapons. However, Protocol I does not specify how the legality of weapons, means, and methods of warfare should be determined, so it is the States Parties that must adopt the necessary administrative, legislative, and regulatory measures to ensure compliance with the essential principles of distinction and proportionality. The International Committee of the Red Cross (ICRC) has developed a guide detailing the material scope and functional aspects of the review that States must follow to establish or improve procedures for determining the legality of new weapons, in accordance with Article 36 of Additional Protocol I to the 1949 Geneva Conventions.
This review includes procedural issues covering: the identification of the national authority responsible for the assessment, the institutional bodies that should be involved in the process, the mechanisms related to decision-making, and the record of assessments. The mechanism adopted, according to this guide, should be based on an impartial and multidisciplinary approach to legal reviews of new weapons, and States should exchange information on the procedures. The guide should be used to determine the legality of weapons in the context of conventional and customary prohibitions and restrictions, as well as the general rules of IHL, to assess their effects on the civilian population, combatants, health, and the environment. Artificial intelligence used in the conduct of hostilities cannot circumvent this legal and technical review process.
2. WHAT IS MEANT BY ARTIFICIAL INTELLIGENCE?
Artificial intelligence (AI) is a field of study that focuses on developing systems capable of performing tasks that require human intelligence, using algorithms and models that learn from data. According to the European Commission:
"The concept of 'artificial intelligence' refers to systems that exhibit intelligent behavior, as they can assess their environment and act autonomously to achieve specific goals. AI systems can be simple software (such as virtual assistants, image analysis programs, search engines, and voice and facial recognition systems), or they can be integrated into physical devices (such as advanced robots, autonomous vehicles, drones, or Internet of Things applications)."
For the vast majority of current applications, AI consists of algorithms that form the basis of pattern recognition software. When these algorithms are combined with high-performance computing, data scientists can search massive collections of data (big data) and extract relevant information from them.
Neural networks enhance algorithms' ability to recognize and classify patterns in data by training them to associate specific patterns with desired outcomes. Over repeated cycles, the algorithms perform successive comparisons to reduce the discrepancy between their outputs and the expected results. Multi-layer neural networks, known as deep learning, are fundamental to current methods of machine learning, including supervised learning and reinforcement learning. In machine learning (ML), algorithms process information from the environment in such a way that computers learn to make decisions without being explicitly programmed. Supervised learning algorithms generate a predictive model from input and output data organized into a previously labeled and classified data set, which means starting from a set of samples whose group, value, or category is already known. Using this data set, called training data, the proposed initial model is adjusted: the algorithm learns to classify input samples by comparing the model's output with the sample's actual label, correcting the model after each error in its estimate.
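To make the supervised-learning process just described more concrete, the following minimal sketch fits a small multi-layer network to synthetic, pre-labeled samples and then measures how well it classifies data it has not seen. The dataset, network size, and library choice (Python with scikit-learn) are illustrative assumptions, not part of the studies discussed here.

    # Minimal supervised-learning sketch: a model is fitted to labeled training
    # data and evaluated on held-out samples. All data here is synthetic.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Synthetic, pre-labeled samples standing in for the "training data" above
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # A small multi-layer (deep) network; fitting iteratively reduces the error
    # between the model's output and each sample's known label
    model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))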
Unsupervised learning algorithms operate similarly to supervised learning algorithms but differ in that they adjust their predictive model on the basis of the input data alone, without labeled output results. An example of a system that uses unsupervised learning is a topographic mapping system, which receives aerial images as input and produces maps as output.
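As an illustration of this idea, the sketch below groups the pixels of a synthetic stand-in for an aerial image into regions using only the input data and no labels; the image, the number of clusters, and the use of k-means are assumptions made purely for the example.

    # Minimal unsupervised-learning sketch: pixels of a synthetic "aerial image"
    # are grouped into regions by similarity alone, with no labeled examples.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    image = rng.random((64, 64, 3))           # stand-in for an aerial photograph (RGB)
    pixels = image.reshape(-1, 3)             # one row per pixel

    # Partition the pixels into four terrain-like classes from the data alone
    segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pixels)
    segmented_map = segments.reshape(64, 64)  # a crude "map" derived from the image
    print(np.bincount(segments))              # pixels assigned to each region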
On the other hand, reinforcement learning algorithms focus on creating models and functions that seek to maximize the effectiveness of responses through a feedback process. This approach is similar to behavioral psychology in humans, as it is based on an action-reward model, where the goal is for the algorithm to learn to perform actions that allow it to obtain better rewards.
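A very small, hypothetical illustration of this action-reward loop is sketched below: an agent in a toy five-state environment learns, by tabular Q-learning, that moving toward the rewarded state yields the highest return. The environment, reward values, and learning parameters are all assumptions introduced for the example.

    # Toy Q-learning sketch of the action-reward model described above. The
    # "environment" is hypothetical: five states in a line, with a reward only
    # for reaching the final state.
    import numpy as np

    n_states, n_actions = 5, 2               # actions: 0 = move left, 1 = move right
    Q = np.zeros((n_states, n_actions))      # estimated value of each action
    alpha, gamma, epsilon = 0.1, 0.9, 0.1    # learning rate, discount, exploration
    rng = np.random.default_rng(0)

    for episode in range(300):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy choice, breaking ties between equal values at random
            best = rng.choice(np.flatnonzero(Q[s] == Q[s].max()))
            a = rng.integers(n_actions) if rng.random() < epsilon else int(best)
            s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            reward = 1.0 if s_next == n_states - 1 else 0.0
            # Adjust the action's estimated value using the reward received
            Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
            s = s_next

    print(Q)  # the learned values favor moving right, toward the reward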
Artificial intelligence (AI) is applied in various fields, such as knowledge representation, which aims to establish mechanisms to symbolize knowledge and encode human thinking for computational use; heuristic search, which includes techniques that optimize search in problem-solving; natural language processing, which investigates how machines can communicate with humans using natural languages; and, of course, machine learning.
As mentioned, one of the approaches AI promotes is innovation in machine learning methods: data analysis that employs algorithms to build computer programs whose predictive capabilities improve with experience. One of the fundamental needs is to provide the autonomous machine with context about its environment and a clear objective. For an autonomous vehicle or intelligent robot to operate effectively in a specific environment, it is crucial to integrate AI technologies with other disciplines. For example, to obtain information about the environment, appropriate sensors are required; otherwise, even powerful algorithms will be unable to interpret the information needed to make decisions. Furthermore, without power sources, the machine cannot operate autonomously. This combination of technologies is what enables the machine to fulfill its function in a given application area.
Advances in the field of artificial intelligence have made it possible to classify its applications into four main approaches: systems that think rationally, which include expert systems; systems that think like humans, which encompass neural networks; systems that act rationally, which refer to intelligent agents; and systems that act like humans, which relate to robotics. Each of these approaches has applications in the military field, but robotics is of particular interest when considering the implications of AI in warfare.
It is crucial to distinguish between intelligent robots and industrial robots. The most comprehensive definition of an industrial robot comes from the International Organization for Standardization (ISO), which in its 2012 revision of standard 8373 states that an industrial robot is "a multipurpose, reprogrammable, and automatically controlled manipulator, programmable in three or more axes, which may be fixed or mobile for industrial automation applications." This definition is accepted by the International Federation of Robotics. On the other hand, an intelligent robot is equipped with information and artificial intelligence technologies, ranging from drones and autonomous vehicles to software robots, such as chatbots.
Lethal autonomous robots, which generate controversy in the context of International Humanitarian Law (IHL), are defined as "robotic weapon systems that, once activated, can select and engage targets without the need for additional intervention from a human operator." The distinctive feature of a lethal autonomous weapon is that the robot has the ability to decide, without human intervention, whether or not to select a target and whether to use lethal force; that is, the decision to attack is based on the processing of sensor data, not on human intervention. The International Committee of the Red Cross (ICRC) defines autonomous weapon systems as "any weapon system that operates autonomously in its critical functions, meaning that it can select (search, detect, identify, track, target) and engage (use force against, neutralize, damage, or destroy) targets without human intervention." Human Rights Watch (HRW) refers to these systems as Human-out-of-the-Loop weapons, which include fully autonomous weapons systems, lethal autonomous weapons, and killer robots.
The Stockholm International Peace Research Institute, in a report on lethal autonomous weapons systems, classifies these technologies into several categories. The first is based on the command and control relationship between humans and machines, including weapons that, once activated, can identify and engage targets without human intervention. The second category refers to the capabilities of these machines, considering them "weapons that can interpret an advanced level of intent and direction" and act accordingly to achieve a desired result without the need for human supervision. The third category is defined based on legal criteria, emphasizing the nature of the tasks performed autonomously, which involves replacing humans in critical functions such as the use of force, target selection, neutralization, damage, or destruction.
In this context, the United States Department of Defense classifies autonomous weapons according to the degree of human intervention, within the first category mentioned, into three types:
a) weapons that require human command to select and engage targets (referred to as semi-autonomous);
b) weapons that, while autonomously selecting and attacking targets, do so under the supervision of a human operator (known as supervised autonomous weapons);
c) weapon systems that can select and attack targets without any human control.
However, these systems have faced opposition from civil society groups. In November 2019, the Campaign to Stop Killer Robots presented a document on the essential elements for a treaty on fully autonomous weapons, underscoring the importance of human control and rejecting any form of fully autonomous weaponry. In this analysis, we argue that implementing a combat system that dispenses with continuous human control is not feasible. These concerns must also be examined from the perspective of human rights protection, as will be discussed later.
3. ARTIFICIAL INTELLIGENCE AND NEW TECHNOLOGIES IN MILITARY CONFLICTS
As mentioned, artificial intelligence has advanced significantly in recent years, driven by key technological developments such as the availability of large datasets, the increase in computer processing power, and innovation in machine learning techniques. AI has applications in various critical aspects of conducting armed conflicts. In this area, it is used for military target recognition, surveillance, communication, logistics, information manipulation, and the development of new weapons and combat tactics. However, this same technology is also applied in humanitarian contexts, helping to protect victims. In the terrestrial domain, for example, autonomous devices are used for vehicle tracking and obstacle avoidance, coordinating unmanned systems with manned ones. In the maritime domain, this technology allows lightweight, compact unmanned surface vessels to be equipped with tools for detecting sea mines.
In the aerial domain, autonomous aircraft systems are employed for humanitarian purposes, such as surveillance and reconnaissance. Regarding information networks, access to a large amount of data and knowledge in cyberspace provides individuals and groups with immediate access to strategic resources. The military use of AI, however, raises several essential questions regarding the data analysis methods used by algorithms: Where does this data come from? What biases might it have? And how might those biases affect the model's effectiveness? Furthermore, in the context of an armed conflict, it becomes necessary to make rapid perceptual judgments based on large amounts of visual content, where the data used to train the algorithm may differ from the actual data, or may even be missing altogether.
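One narrow, hypothetical illustration of the questions just raised is a pre-training audit of how a dataset's labels are distributed across a sensitive attribute; the column names and figures below are invented for the example and do not describe any real system.

    # Hypothetical audit of candidate training data: compare how often each group
    # is assigned the positive ("flagged") label before the data is used at all.
    import pandas as pd

    data = pd.DataFrame({
        "region": ["A", "A", "A", "B", "B", "B", "B", "B"],
        "label":  [1,   0,   1,   1,   1,   1,   1,   0],   # 1 = flagged as a threat
    })

    # A large gap between groups signals a bias the model would likely reproduce
    rates = data.groupby("region")["label"].mean()
    print(rates)
    print("disparity between groups:", float(rates.max() - rates.min()))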
In certain circumstances, situations arise that require complex judgments, making it difficult to encode International Humanitarian Law (IHL) concepts into algorithms. Defining terms such as armed conflict, civilian population, combatant, prisoner of war, or the use of civilian objects for military purposes can be problematic, and this ambiguity can lead to misidentification. If biased data is introduced into the system from the outset, it will not only affect future recommendations but will also be amplified as these recommendations are fed back into the system. Converting legal norms into codes necessarily involves specific decisions about the interpretation of the law, which may be influenced by factors outside the legal field that affect software developers. Ultimately, legal norms are created by humans and must be interpreted and applied by them, not by machines.
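The amplification effect mentioned above can be pictured with a deliberately crude toy loop, in which a model's skewed recommendations are fed back as part of its next training set; the starting share and the ten percent over-recommendation factor are pure assumptions.

    # Toy feedback loop (assumed numbers): an initial skew toward group A grows
    # each cycle because the model's own recommendations re-enter the training data.
    share_a = 0.60                                       # assumed initial share of group A
    for cycle in range(1, 6):
        recommended_a = min(1.0, share_a * 1.10)         # model over-recommends the familiar group
        share_a = 0.5 * share_a + 0.5 * recommended_a    # recommendations fed back as new data
        print(f"cycle {cycle}: share of group A = {share_a:.2f}")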
Therefore, we reaffirm a previously mentioned concept: any new military technology applied in armed conflict must be used in accordance with the rules of IHL, respecting the principles of distinction and proportionality. This is an essential requirement. These principles apply to all means and methods of warfare. It is true that the unique characteristics of new warfare technologies, as well as the circumstances of their use and their humanitarian consequences, may raise doubts about the adequacy of current standards, which may require clarification or supplementation. Given these concerns, I suggest that military applications of emerging technologies are not inevitable, and we must adopt a cautious approach. These decisions are the responsibility of States, which must comply with existing standards and consider the potential repercussions for civilians and combatants no longer taking part in hostilities, as well as take into account broader considerations of "humanity" and "public conscience." The possibility of these machines operating completely autonomously raises serious concerns regarding the principles of humanity and the dictates of public conscience. The International Committee of the Red Cross (ICRC) emphasizes that humanitarian principles require compassion and the capacity to protect, emotional characteristics that are difficult to instill in a machine.
There are highly relevant aspects of the use of artificial intelligence (AI)—identification, reliability, and predictability—that today cannot be guaranteed by a machine in situations involving the use of armed force. When a weapon selects targets without human intervention, identification requires a level of human oversight, with the ability to intervene and deactivate the system. Predictability involves estimating how this type of weapon will function and what the consequences of its use will be. Finally, reliability is determined by its probability of failure. Taking these variables into account, can AI help provide a more complete and accurate picture of the military advantage anticipated from an attack, or of the expected damage to civilians or civilian objects?
The answer to this question is not simple. Using AI for offensive autonomous weapons requires careful analysis. The following aspects must be considered from legal and ethical perspectives:
a) semi-autonomous weapons must allow military commanders to exercise effective and full control through human judgment;
b) the responsibility of military commanders or subordinates cannot be excluded in the use of semi-autonomous weapons;
c) companies engaged in the development of AI for weapons must be subject to strict government oversight and review, and all AI programs designed for hostile purposes must be supervised by trained researchers who confirm their compliance with regulations.
It is essential to differentiate between fully autonomous weapons and semi-autonomous ones, the latter being prevalent in today's military engagements. This distinction is somewhat relative, as no weapon system currently in existence possesses the full characteristics of a fully autonomous weapon; thus, human involvement in their operation remains a necessity. The author's argument rests on the premise that weapons do not come into existence independently but are the result of human design and intervention. Therefore, AI should not be applied to the development of weaponry in the way human intelligence is sometimes used: to bypass regulations, or to conclude—from a purely utilitarian perspective—that ignoring the rules of International Humanitarian Law (IHL) may aid in achieving the primary goal of defeating an adversary. This issue is relevant to both semi-autonomous and fully autonomous weapons, indicating that both carry similar risks.
At the current level of AI technology development, and in accordance with IHL regulations, the use of fully autonomous decision-making systems in combat scenarios is unacceptable. In the context of armed conflict, many decisions are critical, with the potential to result in loss of life, serious injuries, damage to property, or violations of individual rights. It is crucial to uphold the fundamental role of human beings in these decisions to prevent unpredictable consequences for both civilians and combatants.
Additionally, not only within the framework of IHL but also within the global human rights protection system, the limitation of these emerging technologies is indirectly supported by adherence to Article 6 of the International Covenant on Civil and Political Rights. This is articulated in paragraph 12 of General Comment No. 36 on Article 6, which addresses the right to life, submitted by the Human Rights Committee in 2017:
States that utilize existing weaponry and engage in the research, development, acquisition, or deployment of new weapons and methods of warfare must consistently consider their implications for the right to life. For example, the creation of new lethal autonomous robots for military use, which lack human judgment and empathy, raises complex legal and ethical dilemmas regarding the right to life, including questions of accountability for their deployment. [The Committee thus asserts that such weapon systems should not be developed or operationalized, whether in times of war or peace, until a regulatory framework is established to ensure their compliance with Article 6 and other relevant international legal standards.]
As previously highlighted, one of the most extensive and impactful uses of artificial intelligence (AI) and machine learning is in the realm of decision-making. These technologies facilitate the comprehensive collection and analysis of data from various sources to identify individuals or objects, evaluate behavioral or life patterns, formulate military strategies or operations, and predict future actions or scenarios. A particularly troubling aspect of AI's application in military contexts is its ability to create misleading information as a tactic of warfare, making it increasingly challenging to differentiate between real and fabricated data. The deployment of such systems by conflicting parties to enhance traditional propaganda techniques and influence public opinion could have significant ramifications on the battlefield.
These decision-support systems are an evolution of the intelligence, surveillance, and reconnaissance capabilities enhanced by AI. Their potential applications are vast, ranging from determining military strategies—such as selecting targets for attack, deciding whom to detain, and for what duration—to efforts aimed at predicting or anticipating adversarial actions. It is crucial to connect these applications to the principles of proportionality, good faith, predictability, and reliability. These principles, which machines cannot grasp, must be integral to decision-making processes, as they establish the international accountability of a State for breaching enforceable legal norms and committing internationally wrongful acts, as well as individual criminal responsibility for war crimes. In the context of international criminal law, a key element is the presence of guilt and punishment, concepts that cannot be applied to lethal autonomous weapon systems. The United Nations Human Rights Council, in the Report by Special Rapporteur Christof Heyns, asserts: "Robots lack the capacity for moral discernment, so if they cause loss of life, they cannot be held accountable, as would normally be the case if the decisions had been made by humans. Who then bears the responsibility?" The same report indicates that legal liability may rest with various parties, including computer programmers, equipment manufacturers or vendors, military commanders who authorize their use, subordinates who deploy these systems, and political leaders.
To further explore the issue of a commander's accountability concerning a lethal autonomous weapon, it is pertinent to question whether the commander could be viewed as a subordinate. In relation to a human subordinate, a superior is liable when they fail to prevent a crime or impose appropriate sanctions on the perpetrator. A military superior is accountable for their actions when a lethal weapon system commits a war crime. Is it feasible for the military superior to fully comprehend the technical implications and consequences of this lethal weapon to the extent that they could avert its deployment? Considering the rules governing liability for unlawful acts, this responsibility could indeed be assigned to the State; however, in terms of criminal liability, such violations may go unpunished. Thus, it could be argued that if criminal liability cannot be enforced for the repercussions of a weapon's use, then its deployment should be deemed illegal.
When creating decision support systems that leverage AI and machine learning, it is essential to understand that human beings are the ultimate decision-makers in the context of military operations. While these systems can enhance accuracy and lower the risks to civilian lives in certain situations, there is no assurance that they will not inadvertently lead to violations of International Humanitarian Law (IHL) due to potential technological shortcomings, such as unpredictability and reliability issues. This concern is particularly relevant if an AI system is employed to autonomously initiate an attack, rather than simply providing analysis to inform human decision-makers. In such cases, a lawful military action cannot be reversed if civilians are accidentally involved.
Furthermore, the use of AI in automated decision-making regarding the detention of individuals during armed conflict raises critical legal and ethical considerations, particularly concerning accuracy and bias. The International Committee of the Red Cross (ICRC) has highlighted that these AI tools could contribute to a more personalized approach to warfare, as they compile personally identifiable information from various sources—such as databases, communications, biometric data, and social reports—to create algorithmically generated profiles of individuals, assessing their status and potential for targeting, or predicting their future actions. IHL provides specific regulations governing detention decisions in armed conflict, primarily aimed at preventing mistreatment, ensuring access to justice, and protecting all categories of victims recognized by IHL. If intelligent systems are trained on data that is predominantly biased towards certain attributes—such as gender, race, or ethnicity—there is a significant risk of contravening these regulations.
It is therefore vital to preserve the critical role of human judgment in these processes and to maintain oversight to avoid unpredictable outcomes for both civilians and combatants.

4. THE IMPACT OF ETHICS ON AI
AI has been recognized for its potential to enhance the methods of capturing and utilizing digitized information in the realm of digital humanitarian assistance. Humanitarian organizations, like the Argentine Forensic Anthropology Team, are initiating pilot projects that focus on using biometrics to identify individuals who are missing. This biometric approach involves confirming a person's identity by digitally analyzing specific physical features and comparing them to records in a database. Furthermore, drones are being employed to transport aid to isolated areas and assess the extent of damage to civilian populations.
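By way of illustration only, the sketch below shows the generic form of such a comparison: a digitized physical feature is encoded as a numeric vector and matched against stored records. The vectors, the similarity measure, and the acceptance threshold are assumptions for the example and do not reflect any particular organization's system.

    # Generic sketch of biometric matching: compare a new measurement ("probe")
    # against stored templates and accept the best match above a threshold.
    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(0)
    database = {f"record_{i}": rng.random(128) for i in range(100)}   # stored templates
    probe = database["record_42"] + rng.normal(0, 0.05, 128)          # noisy new measurement

    best_id, best_score = max(
        ((rid, cosine_similarity(probe, tmpl)) for rid, tmpl in database.items()),
        key=lambda item: item[1],
    )
    print(best_id, round(best_score, 3), "match" if best_score > 0.95 else "no match")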
The International Committee of the Red Cross (ICRC) has developed "environmental scanning consoles" that harness AI and machine learning to gather and analyze vast amounts of data, which supports its humanitarian efforts in targeted operational settings. This includes the use of predictive analytics to identify humanitarian needs, such as food, water, medical care, and shelter. Consequently, AI can be viewed as a constructive intelligence, emphasizing the human dimension and promoting ethical acceptance.
Clearly, as articulated by numerous governments, the progress of artificial intelligence should be aligned with values that uphold human dignity, rights, and freedoms. The European Commission's High-Level Expert Group on Artificial Intelligence has stressed the importance of "human agency and oversight" in AI systems. In a similar vein, the United States Department of Defense, in its 2018 AI Strategy for Defense, promotes a thoughtful, responsible, and human-centered approach to AI adoption. Meanwhile, the French Ministry of Defense has pledged to utilize AI in accordance with three guiding principles: adherence to international law, ensuring adequate human control, and maintaining continuous command accountability. This signifies that states, in their national regulations, must prioritize the essential aspect identified by International Humanitarian Law (IHL) regarding AI deployment: human oversight.
From the perspective of the International Committee of the Red Cross (ICRC), it is crucial to preserve human control over operations and human judgment in decisions that may have serious consequences for human lives, as this is vital for maintaining a degree of humanity in warfare. The ICRC has underscored the necessity of retaining human discretion in the application of force during armed conflict, a viewpoint that is rooted in broader ethical considerations concerning humanity, moral responsibility, human dignity, and the principles of public conscience.
Upholding human oversight and decision-making is crucial for ensuring compliance with legal frameworks and addressing the ethical challenges posed by certain uses of AI and machine learning.
Those engaged in the deployment of AI-related combat techniques must assess a variety of situations, considering the principles of International Humanitarian Law (IHL), which include:
- Safeguarding potential victims from the effects of military operations by taking all necessary precautions during attacks to ensure the protection of civilians.
- Holding states accountable for the actions of individuals, whether they are members of armed forces or non-state actors, who employ AI-enabled weaponry, as previously discussed in this article.
- Continuously evaluating the functions of all elements of AI-integrated weapon systems before and after their use through ongoing oversight.
- Considering the practicality of automatic shutdown thresholds and/or additional reviews for AI-related combat methods.
- Analyzing the safety of the continued application of AI technologies in warfare.
- Recognizing the potential for unintended adaptations in the use of AI-driven combat methods.
- Gaining a thorough understanding of the computational components involved, specifically the configuration of models and their elements concerning the deployment of AI-related weapons and combat strategies.
- Assessing the biases that may emerge from the computational components utilized in warfare involving AI technologies.
It is imperative to maintain human control in the military application of AI technologies in order to comply with established IHL rules governing the use of lethal force, ensuring that any techniques that contravene these provisions are eliminated or ultimately banned, as has occurred with other means and methods of combat that failed to respect essential humanitarian principles such as distinction and proportionality, like anti-personnel mines or nuclear weapons.

CONCLUSION
There is no doubt that technological progress has led to remarkable improvements in our society. However, it is essential to recognize that human oversight and judgment must remain central in making decisions that could significantly impact people's lives. This is particularly critical in situations of armed conflict, where it is imperative to uphold human decision-making regarding the application of force. The trajectory of technological advancement is irreversible, and the challenge we face is not to turn back the clock, but to engage in thoughtful reflection on how to ethically navigate this progress to ensure the greatest benefit for humanity while preventing unnecessary suffering. As machine learning systems evolve and establish their own guidelines, their growing autonomy poses a risk of straying from the principles that govern the responsible use of armed force.
In reality, speaking of an autonomous system not only refers to independent decision-making; this autonomy also implies having the power to determine and recognize compliance with legal norms, and if these are breached, to assume specific responsibilities: this is fundamental to human dignity. During the course of an armed conflict, decisions cannot be placed in the hands of intelligent systems that could further aggravate the victimhood of those affected.
We must also recognize that any ethics of intelligent systems has to pay special attention to how the human use of those systems is itself guided ethically.