Position Paper of the People’s Republic of China on Regulating Military Applications of Artificial Intelligence (AI)
2021-12-13 13:00

The rapid development and wide application of AI technology have profoundly changed the way people work and live, bringing great opportunities as well as unforeseeable security challenges to the world. Of particular concern are the long-term impacts and potential risks of military applications of AI technology in such aspects as strategic security, rules on governance, and ethics.

AI-related security governance is a common challenge for mankind. As AI is applied ever more widely across various fields, concerns are growing about the risks of its military applications and even its weaponization.

As world peace and development face multifaceted challenges, countries should embrace a vision of common, comprehensive, cooperative and sustainable global security, seek consensus on regulating military applications of AI through dialogue and cooperation, and establish an effective governance regime, so as to prevent military applications of AI from causing serious harm or even disaster to mankind.

We need to step up efforts to regulate military applications of AI with a view to forestalling and managing potential risks. Such efforts will help promote mutual trust among countries, safeguard global strategic stability, prevent arms races and alleviate humanitarian concerns. They will also contribute to building an inclusive and constructive security partnership and to realizing the vision of a community with a shared future for mankind in the field of AI.

China calls on governments, international organizations, technology enterprises, research institutes, social organizations, individuals and other actors to uphold the principle of extensive consultation, joint contribution and shared benefits, and to work together to promote security governance in the field of AI.

In this vein, China calls for the following:

In terms of strategic security, countries, especially major countries, need to develop and apply AI technology in the military field in a prudent and responsible manner, refrain from seeking absolute military advantage, and guard against deepening strategic miscalculation, undermining strategic mutual trust, escalating conflicts, and damaging global strategic balance and stability.

In terms of military policies, while developing advanced weapon systems and enhancing their legitimate defense capabilities, countries need to bear in mind that military applications of AI shall never be used as a tool to start a war or pursue hegemony. We oppose the use of advantages in AI technology to undermine the sovereignty and territorial security of other countries.

In terms of law and ethics, countries need to uphold the common values of humanity, put people’s well-being front and center, follow the principle of AI for good, and observe national or regional ethical norms in the development, deployment and use of relevant weapon systems. Countries need to ensure that new weapons and their methods or means of warfare comply with international humanitarian law and other applicable international law, strive to reduce collateral casualties and losses of life and property, and prevent the misuse and malicious use of relevant weapon systems as well as the indiscriminate effects such behaviours may cause.

In terms of technological security, countries need to keep improving the security, reliability and manageability of AI technologies and enhance their capacity to assess, manage and control AI security. Relevant weapon systems must remain under human control, and efforts must be made to ensure that humans can suspend their operation at any time. AI data security must be guaranteed, and the military use of AI data should be restricted.

In terms of research and development, countries need to exercise self-restraint in AI research and development activities and, after taking into full consideration the combat environment and the characteristics of weapons, implement necessary human-machine interaction across the entire life cycle of weapons. Countries need to adhere to the principle of keeping humans as the ultimate bearers of responsibility, establish accountability mechanisms for AI, and provide necessary training for operators.

In terms of risk management and control, countries need to strengthen regulation of AI’s military applications, in particular by implementing tiered and categorized regulation, and avoid the premature use of technologies that may cause serious consequences. Countries should conduct research on AI’s potential risks and take necessary measures to mitigate the proliferation risks of military applications of AI.

In terms of rules-making, countries need to adhere to the principles of multilateralism, openness and inclusiveness. To track technology development trends and prevent potential security risks, countries need to conduct policy dialogue and exchanges, and strengthen communication with international organizations, science and technology enterprises, technology communities, social organizations and other actors so as to enhance understanding and coordination. Countries should work together toward the joint regulation of military applications of AI and the establishment of a universal international regime, and contribute to formulating AI governance frameworks and norms based on broad consensus.

In terms of international cooperation, developed countries should assist developing countries in strengthening their governance capacity. In light of the dual-use nature of AI technology, while strengthening regulation and governance, we need to oppose drawing ideological lines or overstretching the notion of national security, remove man-made barriers in the field of science and technology, and ensure the right of all countries to technological progress and the peaceful use of technology.