
Swarming and Machine Teaming – Defence Future Technologies DEFTECH

A workshop in Thun (Switzerland) to assess the
state-of-the-art technology and research

Chiara Sulmoni reports.

On Wednesday, 21st November 2018 armasuisse S+T (Science and Technology) organised a day-long international workshop for military personnel, researchers, specialists and company representatives, addressing the subject of ‘swarming and machine teaming’.

The event is part of a series which armasuisse Science and Technology organises on a regular basis under the trademark DEFTECH (Defence Future Technologies). These high-profile meetings allow military and civilian experts to share insights, anticipate technology trends and make informed decisions in the field of security.

Swarming indicates the deployment of low-cost, autonomous elements acting in coordination with one another to carry out a specific task; these elements are generally small drones or robots.

Swarms are common in nature, where many species (birds and fish, for instance) move or operate in vast groups. Research into ‘artificial’ swarming often starts with the observation and study of animal behaviour.

Swarming is dual-use, meaning that it can take shape in the civilian environment (for instance, with commercial drones flying in formation) or the military, where it is principally a battlefield tactic and is associated with the issue of lethal autonomous weapon systems (LAWS), whose ethical aspects are discussed at UN level. Given the rapid development of the technology, and the lack of an effective defence system should a swarming attack take place, armasuisse wished to gain a better understanding of the challenges and risks related to it. The workshop was therefore aimed at establishing the state of the art within this domain, and experts from different fields were called in to provide their perspectives. What follows is a brief report of some key points touched upon during the meeting, which was organised under the supervision of Quentin Ladetto, Research Programme Manager at armasuisse S+T, and introduced by Director Thomas Rothacher.

Switzerland is a global leader in drone technology. Markus Hoepflinger of the Swiss Drones and Robotics Centre (affiliated to armasuisse S+T) was keen to underline from the start that it is not only domestic and foreign media who dub the country the “Silicon Valley of robotics” or the “drone centre of the world”: the Federal Department of Foreign Affairs itself is an eager proponent of Swiss expertise (for more information, visit www.homeofdrones.org). Swiss research involves academic and technical institutes in all regions, as well as industry. Today’s focus is mainly mobile robotics, with autonomous flight as the strongest capability. A series of potential future military applications is also being looked into, with a view to enhancing search and rescue operations, for instance, or engineering work. Hoepflinger also explained that swarming could dominate future wars, with experiments underway in Russia, China, the US, Israel and, to a lesser extent, the EU (e.g. the Royal Air Force and Airbus). But drone warfare is not yet happening, despite what has been (questionably) described as the first swarm attack, against a Russian base in Syria in January 2018.

Despite rapid progress in all fields, Vincent Boulanin of the Stockholm International Peace Research Institute (SIPRI) emphasized how misconceptions and myths around autonomous systems and artificial intelligence represent a problem, insofar as they tend to make policy discussion unproductive and blind us to the true possibilities and limitations of these technologies. Programming machines for general tasks is difficult, as they cannot generalise from previous situations: while they do process images and understand words (consider the mobile phone application ‘Siri’, for instance), common sense is not one of their assets. Autonomous navigation, on the other hand, is very context-dependent, with air or underwater environments presenting fewer obstacles than land. ‘Teaming’ is an important aspect of swarming, as machines must communicate with each other and with their operator; such systems can share information and perform collaborative tasks, like flying together, completing surveillance assignments or inspecting buildings in uncomplex environments. But machine-human communication is not symmetrical, and finding the right ratio can also be complex. As pertains to the military field proper, Boulanin pointed out that targeting remains the most critical application of autonomy, as systems do not have the ability to distinguish between a civilian and a military target. In the end, autonomy is much easier to achieve for commercial applications.

Martin Hagström of the Swedish Defence Research Agency underlined that having ‘many of something’ does not in itself constitute a problem; the objective is to be able to deploy cheap, efficient sub-systems with a reduced number of ground operators. He also recalled that the antagonist perspective is considerably different from the civil one. Swarms rely on satellite navigation (GPS) and are therefore vulnerable to attack by adversaries with a high level of technological capability, who could disrupt communication in a contested environment. ‘Robust’ systems are quite expensive, and Hagström is therefore persuaded that it might take some time before swarming can be adopted in the military. Other issues to take into account when thinking of flying objects are flight safety rules and policies (air space is not free) and, last but not least, the complexity of testing. Stability and predictability are paramount in military applications, and because a system acts only within its designed space, achieving autonomy means making that design space very large, so that it may include many potential events. But outside of (software-based) simulation, testing a system remains hard.

Georg Dietz works for the German group IABG mbH and focuses on military airborne platforms. He explained that air operations today are increasingly complex for a number of reasons: the sheer number of players in the world, faster conflict dynamics, the speed of technological advances and information exchange, the rapid growth of sensor ranges and so on. Capabilities such as platforms or systems can be insufficient while costs are high, with each new fighter aircraft, for instance, being at least twice as expensive as its predecessor. Future combat air systems will be designed as a system of systems (SoS) consisting of a variety of different components, both manned and unmanned, enabling swarming operations. Design and control nevertheless open up a series of questions: on the number and type of platforms needed, the degree of autonomy and technology gaps; on communication in highly contested areas; on the human-machine interface and so on. Even so, swarming represents the near future of air operations.

Jean-Marc Rickli from the Geneva Centre for Security Policy (GCSP) expounded the concept that swarming is the fifth evolution of military strategy and, together with autonomy, represents a key characteristic of the battlefield of the future. Other strategies (or ways to use force) are ‘denial’, whose main target is the military; ‘punishment’, which hits civilians and infrastructure to exert indirect pressure (terrorism features as punishment); ‘risk’, consisting in threatening an escalation, as in the US-USSR Cold War; and ‘decapitation’, which relies on technology like drones to eliminate the enemy leadership. But a large number of small units with sensory capabilities, easy to manoeuvre and able to act in coordination (such is the description of a functioning swarm) can concentrate firepower, speed and forces in a way previously unseen. Swarming tactics are a means to wage asymmetric wars, and cyber manifestations of them have already been encountered. 3D-printing of gun components and drones will have important implications, explained the expert. In 2017 in Mosul, several Iraqi soldiers were killed by drones operated by ISIS, in what was the first instance of the West losing tactical aerial supremacy. Should swarming become a mainstream strategy, we should expect a more conflictual international environment, concluded Rickli.

Marco Detratti from the European Defence Agency (EDA) underlined how, according to estimates, the market for autonomous systems’ products and technology in non-military sectors will be in the order of €100Bn by 2025, with defence playing only a minor part. But swarms have disruptive potential in many fields, and while defence is not yet impacted, it nevertheless expects to be in the future. In defence (from a non-offensive perspective), swarms can change and improve capabilities. Specifically, they can offer ubiquity, resilience and invisibility, and are therefore taken into consideration for all tasks and all domains: land, air, maritime and cyber. From swarms, the military expects cost reduction, a decrease in manpower and risk, and technical advantages. Since 2010, EDA has been trying to identify scenarios where swarm and multi-robot systems could ‘deliver’, and it started a series of projects accordingly. Despite technical evidence of feasibility and noteworthy research, problems and challenges persist: Detratti went on to explain that there are no real autonomous systems in operation; systems are not resilient enough (e.g. power consumption); they are not ‘smart’ enough; and more progress is needed in testing the unpredictable (to be sure, for instance, that things continue to work when communication is interrupted, and that information is not manipulated). There are also non-technical issues to take into account, like the need for a big shift in terms of military culture, doctrine and training; public perception; and ethics.

Autonomous (lethal) weapons have been raising ethical issues for years. George Woodhams gave an insight into the discussions and initiatives taking place at UN level and within UNIDIR (the UN Institute for Disarmament Research), which has been dealing with UAVs (unmanned aerial vehicles) since 2015. A specific concern regards the use of Reaper and Predator drones. The Institute has been encouraging the international community to consider what new challenges may emerge from the proliferation of this technology, and it also looks into the strategic implications of unmanned systems. An issue for the UN to consider in the long term is whether, due to their low risk and cost of deployment, these systems might lead to problematic military practices. Woodhams went on to illustrate lines of debate within the frame of the Convention on Certain Conventional Weapons, a UN negotiating body designed to address weapons systems with implications for international humanitarian law. A Group of Governmental Experts to address lethal autonomous weapon systems (LAWS) was established in 2014, with military advisors regularly invited in. It focuses on what is called ‘meaningful human control’ and its ethical foundations, like retaining human agency in decisions over the use of lethal force, preserving human dignity and ensuring human accountability. Talks can be difficult, as the 84 States involved in discussions have different military capabilities and levels of understanding, but everybody seems to agree on the need to identify best practices and practical measures for improving compliance with international law. Though swarming has not been mentioned specifically over the last four years, concluded Woodhams, it is the one area of autonomy that catches the imagination the most.

From the implications of the concept of swarming, the workshop then turned to the practical side of understanding the many ways in which it can take shape. There is a flurry of exciting and ground-breaking research going on in laboratories, aimed at addressing limitations and constraints, with a view to developing a higher degree of autonomy and coordination.

We already mentioned how research takes ‘inspiration’, so to say, from nature. In introducing his line of work, Nicolas Bredeche from Pierre and Marie Curie University explained that methods used to study natural systems (like animal behaviour) can also be used to study artificial systems; and solutions for artificial systems are often a simplified version of what can be observed in nature. Bredeche oversees research on ‘adaptive mechanisms for collective decision-making in populations of simple individuals’ (such as insects or small animals). Simply put, he tries to understand the principles of collective behaviour, see how single members adapt to group strategies, and reproduce them in the lab in a way that is useful for artificial intelligence. With tigerfish and collective hunting as models, his studies reveal the importance of symbiotic behaviour and lead to the conclusion that a version of natural selection, with the ‘fittest’ individual winning over the rest of the population, can be transferred into robotics as well.
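The ‘fittest wins’ principle described above can be sketched as a toy evolutionary loop, of the kind used in evolutionary robotics to tune robot controllers. Everything here (population size, mutation strength, the example fitness function) is an illustrative assumption, not a description of Bredeche’s actual experiments.

```python
# Toy sketch of the evolutionary principle mentioned above: candidate
# controllers ("genomes") compete, and the fittest are copied forward
# with small random mutations. All parameters are illustrative assumptions.

import random

def evolve(fitness, genome_len=5, pop_size=20, generations=50, rng=None):
    """Return the best genome found after evolving a random population."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # rank genomes by fitness
        elite = pop[: pop_size // 4]          # the 'fittest' win...
        pop = [
            [g + rng.gauss(0, 0.1) for g in rng.choice(elite)]  # ...and repopulate, mutated
            for _ in range(pop_size)
        ]
    return max(pop, key=fitness)

# Hypothetical example: evolve a genome whose values approach 0.5
best = evolve(lambda g: -sum((x - 0.5) ** 2 for x in g))
```

In a robotics setting, the fitness function would score a controller’s performance in simulation (e.g. how well a robot tracks its group) rather than a simple numeric target.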

Dario Floreano from the Swiss Federal Institute of Technology in Lausanne described how animals in a swarm use different types of sensors (like vision, a magnetic compass for orientation, and noise) and can also make use of local information, unlike drones, which rely on information from ‘vulnerable’ GPS. The question is: can we have swarms that, while resorting to available technology like GPS, will also follow their own rules instead of being controlled by a computer on the ground? Floreano recalled how computer graphics rules for the animation of swarms with a certain degree of autonomy were already laid down in the 1980s by Craig Reynolds. Briefly put: when a drone is too close to the others, it will move away (repulsion); when a drone is flying in a different direction with respect to the rest of the flock, it will tend to align with the others; when a drone is too distant, it will be attracted. But other variables, like the ability to communicate, power capabilities (batteries) and agility (quadcopters vs. fixed-wing drones), can greatly affect swarming and continue to be actively researched. Most importantly, one strand of Floreano’s research (commissioned by armasuisse and related to rescue drones’ ability to operate without GPS) has confirmed that sensor-based flight is possible and deserves attention.
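Reynolds’ three rules (repulsion, alignment, attraction) can be sketched in a few lines of code. The radii and weights below are illustrative assumptions chosen for the sketch, not parameters from Reynolds’ original work or from Floreano’s lab.

```python
# Minimal sketch of Craig Reynolds' three flocking rules, as described above:
# repulsion (separation), alignment, and attraction (cohesion).
# Radii and weights are illustrative assumptions only.

import math

def step(boids, sep_r=1.0, neigh_r=5.0, w_sep=0.05, w_ali=0.05, w_coh=0.01):
    """One update of a 2D flock; boids are dicts with 'pos' and 'vel' tuples."""
    new = []
    for b in boids:
        sep = [0.0, 0.0]; ali = [0.0, 0.0]; coh = [0.0, 0.0]; n = 0
        for o in boids:
            if o is b:
                continue
            dx = o['pos'][0] - b['pos'][0]
            dy = o['pos'][1] - b['pos'][1]
            d = math.hypot(dx, dy)
            if 0 < d < sep_r:                 # too close: steer away (repulsion)
                sep[0] -= dx / d; sep[1] -= dy / d
            if d < neigh_r:                   # neighbour: align and cohere
                ali[0] += o['vel'][0]; ali[1] += o['vel'][1]
                coh[0] += dx; coh[1] += dy
                n += 1
        vx, vy = b['vel']
        if n:
            vx += w_ali * (ali[0] / n - vx) + w_coh * coh[0] / n
            vy += w_ali * (ali[1] / n - vy) + w_coh * coh[1] / n
        vx += w_sep * sep[0]; vy += w_sep * sep[1]
        new.append({'pos': (b['pos'][0] + vx, b['pos'][1] + vy), 'vel': (vx, vy)})
    return new
```

Each rule needs only each drone’s local neighbours, which is precisely why such flocking does not require a central computer on the ground.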

Cooperation and teaming (human-robot-dog) for rescue operations in disaster areas is also a line of research at the Dalle Molle Institute for Artificial Intelligence (Lugano). Within this context, maintaining connectivity, both within the swarm and between drones and people, is crucial. Researcher Alessandro Giusti explained how another important strand of work focuses on interaction between humans and robots; specifically, it is about exploring ways to exert control over a drone. The lab came up with the idea of pointing at it, an easy and quite natural gesture for people; the technological options for implementing this solution are wearable interfaces like bracelets, laser pointers, or a smart watch, which make it possible to direct the robot to perform its task by moving one’s arm. Vision-based control is also being actively tested.

From human-robot interaction to situational awareness: this is the project Titus Cieslewski (University of Zurich) is involved in. The motivating question is: how can drones know where they are in a hypothetical situation where a team of agents operates in an unknown environment, the agents cannot see each other directly (unlike in classic swarms!) and the further they move in exploration, the harder it becomes to communicate? GPS, explained Cieslewski, does not work indoors, can be reflected in cities, and is subject to jamming and spoofing in a military context (jamming and spoofing are part of electronic warfare and consist, respectively, in disrupting the enemy’s wireless communication and in sending out false positioning). Computer vision can offer a way out, maintained the researcher: through the images captured by their cameras, drones can build ‘sparse visual maps’ resulting from processes like place recognition, pose estimation and optimisation. What Cieslewski is currently focused on is trying to reduce the amount of data exchanged in the process, which would translate into the possibility of enlarging the team of robots.
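The data-reduction idea can be illustrated with a toy version of visual place recognition: instead of exchanging raw images, each drone shares only a compact histogram of ‘visual words’, and two drones decide they have seen the same place by comparing histograms. The tiny vocabulary, the fake feature values and the matching threshold below are all illustrative assumptions, not Cieslewski’s actual method.

```python
# Toy illustration of exchanging compact descriptors instead of raw images.
# Vocabulary size, 'features' and thresholds are illustrative assumptions;
# real systems use learned vocabularies with tens of thousands of words.

from collections import Counter

VOCAB = 8  # tiny visual vocabulary for the sketch

def to_words(features):
    """Quantize raw feature values in [0, 1) into visual-word IDs by coarse binning."""
    return [int(f * VOCAB) % VOCAB for f in features]

def histogram(features):
    """Compact place descriptor: a bag-of-visual-words histogram."""
    return Counter(to_words(features))

def similarity(h1, h2):
    """Histogram intersection, normalized to [0, 1]."""
    inter = sum(min(h1[w], h2[w]) for w in set(h1) | set(h2))
    total = max(sum(h1.values()), sum(h2.values()))
    return inter / total if total else 0.0

# Two drones observing (almost) the same place produce similar histograms...
place_a = histogram([0.11, 0.52, 0.53, 0.90])
place_a2 = histogram([0.12, 0.51, 0.55, 0.91])
# ...while a different place does not.
place_b = histogram([0.20, 0.33, 0.71, 0.78])
```

A histogram of word counts is orders of magnitude smaller than an image, which is the core of the bandwidth saving: the fewer bytes each place exchange costs, the more robots a given communication budget can support.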

 


Artificial Intelligence and the evolution of warfare

Report on 8th Beijing Xiangshan Forum 

by Claudio Bertolotti 

The 8th Beijing Xiangshan Forum took place in China from 24th to 26th October.

The event is organised on a yearly basis by the host government’s Ministry of Defence, which invites international partners and representatives to discuss global security issues.

In 2018, the Italian delegation appointed by Defense Minister Elisabetta Trenta was led by Fabrizio Romano (Minister Plenipotentiary), Maurizio Ertreo (Director of the Military Centre for Strategic Studies – CeMiSS) and Claudio Bertolotti (Head of Research at CeMiSS).

The present article reports on the 4th session which took place on 25th October, and which focused on Artificial Intelligence and its impact on the conduct of war.

A previous entry summarized some of the speakers’ views regarding the military applications of AI.

Here, we will examine the role of AI in the next phase of the Revolution in Military Affairs (RMA), in other words the evolution of warfare, which has direct consequences for the very concept of war and for decision-making processes. «A true revolution», according to ANM Muniruzzaman, President of the Bangladeshi Institute of Peace and Security Studies, «a revolution to the deadly detriment of those who do not adjust to AI’s offensive and defensive capacities».

Maj. Gen. Praveen Chandra Kharbanda, a researcher with the Indian Centre for Land Warfare Studies, introduced his speech by emphasizing AI’s potential to impose a radical change on the RMA.

For instance, it can aptly support the decision-making process by providing prompt analysis of all the primary and secondary factors that could affect strategic and operational planning. Furthermore, the combination of electronic warfare and cyber capacity grants extraordinary offensive and defensive military leverage, as it allows thorough monitoring of enemy targets without exposing one’s own pilots and reconnaissance assets to risks and threats.

The same applies to critical infrastructure, whose security and safety can be guaranteed with limited resources, be it in terms of soldiers or equipment. Within this context, the deployment of (partially or totally) remote-controlled or AI-controlled robots, without entirely replacing troops on the battlefield, becomes instrumental in supporting them; it represents a technological and cultural development which, in asymmetric conflicts above all, can still safeguard the primacy of the human component.

On the virtual level, ever more realistic wargaming activity takes place, which greatly benefits from AI in terms of both training and planning. And as yet another dimension of the contemporary battlefield, social media represent a great opportunity for surveillance and analysis, in spite of the looming threat of mass control. The speaker concluded his intervention by underlining how, with specific reference to wargaming, the private sector plays a fundamental role.

Supremacy in the intelligence sector is what separates winners from losers on the global battlefield. And this is where AI makes a difference.

In his intervention, Zeng Yi, Vice-Director General of China North Industries Group Corporation Limited (NORINCO GROUP), explained that the traditional ‘mechanised’ combat system is undergoing great and rapid development thanks to AI, while cyberwarfare also grows in efficiency. As a consequence, command and control systems will increasingly be influenced by AI technology and capabilities, thus also requiring regular updating in the field of military affairs. Automatic systems will also increasingly play a leading role, particularly in training and direct combat. It is now clear, according to Zeng Yi, that «what separates winners from losers on the global battlefield is supremacy in the intelligence sector, where the support of Artificial Intelligence is becoming paramount».

«Artificial Intelligence is about to play its part in combat. But is it up to the task?» Such is the vexata quaestio that Zafar Nawaz Jaspal, Professor at the School of Politics and International Relations of Quaid-I-Azam University (Pakistan), indirectly put to his audience. His analysis went on to focus on the evolution of intelligence and the forthcoming tactical role of AI (which essentially translates as ‘battlefield-bound’); as far as the strategic and operational roles are concerned, we are not there just yet, despite progress being made. The speaker then reminded his audience that, should a direct ground confrontation between two actors with equal military capabilities take place, AI would cease to represent a crucial factor. In his conclusions, Zafar Nawaz Jaspal called for further, urgent and permanent development of AI through investment, research and testing.

Artificial Intelligence can affect social behaviour by influencing and altering social structures and functions.

Leonid Konik, CEO of the Russian company COMNEWS Group, outlined two key contributions of AI to the military and intelligence fields: first, it represents a launching platform for future autonomous weapons; second, it is fundamental in problem-solving and decision-making processes.

Focusing his speech on the social implications of AI, the speaker illustrated how Artificial Intelligence could potentially be used to influence and alter social structures and functions, and to induce a change in individuals’ attitudes and opinions: an issue which clearly paves the way for a critical analysis of the ethical issues linked to certain applications of AI within the RMA.

According to Konik, the diffuse application of AI does indeed induce changes in the social behaviour of populations subjected to remote-controlled surveillance. And it makes no difference whether such control is exercised by an external actor (like an enemy or an influencer) or by one’s own government: citizens simply adapt their behaviour to the new situation. In the same way, AI can bring about shifts in the enemy’s attitude, specifically in operational and tactical terms; the Taliban in Afghanistan, for instance, reshaped their techniques and tactics as a result of the deployment of drones.

Can we figure out the impact of robots in asymmetric wars, in Iraq or Afghanistan for instance? How would that affect the mind of the enemy and of the local populations?

The degree of development and deployment of Artificial Intelligence is contingent upon each individual actor’s ethical issues and constraints, concluded the Russian speaker. But it is those who overlook ethics and push the boundaries of AI who will take the lead on the battlefield.


The military applications of Artificial Intelligence

A focus on the 8th Beijing Xiangshan Forum (24-26 October 2018)

by Claudio Bertolotti

The Beijing Xiangshan Forum, held yearly in China, is a venue where, upon invitation by the host government, international partners and representatives discuss global security issues.

In 2018, the Italian delegation appointed by Defense Minister Elisabetta Trenta was led by Fabrizio Romano (Minister Plenipotentiary), Maurizio Ertreo (Director of the Military Centre for Strategic Studies – CeMiSS) and Claudio Bertolotti (Head of Research at CeMiSS). The meeting took place from 24th to 26th October and included an interesting session on the subject of Artificial Intelligence with its impact on the conduct of war.

While a separate article on ‘Artificial Intelligence and the new phase of the Revolution in Military Affairs (RMA)’, a topic which concerns the evolution of warfare, will soon follow, I herewith present a summary of the interventions which dealt with military applications of AI.

NATO Deputy Secretary General Rose E. Gottemoeller, along with several other participants, emphasised the increasingly tight interconnection between intelligence and AI. She also pointed out how countering contemporary asymmetric threats will progressively require a sound use of AI, which can help, for instance, determine the size and position of troops and armaments belonging either to allies or to enemies; evaluate the feasibility of military actions; and alter the conduct of operations depending on the evolving battlefield context. Gottemoeller then mentioned how a fundamental pillar of the Atlantic Alliance, Article 5, calls for mutual assistance also in the case of a cyber attack against a member State, as ratified at the 2010 Lisbon Summit and the 2014 Wales Summit. Last but not least, she stressed the importance of AI in civilian contexts, i.e. in the identification of victims of terror or military attacks on the one hand, and of natural disasters on the other.

 “Skynet”, which was launched in 2005, is today made up of no less than 170 million security cameras. By 2020, another 600 million are expected to be in place.   

Lu Jun, a scholar from the Chinese Academy of Engineering, made reference to the central role played by AI within the frame of information systems, specifically for facial recognition purposes, with a view to preventing and thwarting terrorist threats. He also recalled the paramount function of AI in supporting the development of unmanned aerial, surface and underwater vehicle technology.

Though the speaker did not mention this aspect, it is relevant to note how both these applications are of direct concern to the Chinese security industry, whose expansion is based on “Skynet”, a surveillance and facial recognition system which was launched in 2005 in Beijing and soon extended to cover the whole nation. The system is today made up of no fewer than 170 million CCTV cameras; another 600 million are expected to be in place by 2020. Basically, this amounts to one camera for every two persons.

US researcher Gregory Allen, who is affiliated to the Center for a New American Security, emphasized the role of AI in supporting intelligence processes, from data gathering to analysis, and reiterated in his turn that never before has the military been so tightly supported by AI. Specifically, he underlined how the increasing deployment of aircraft technology can be rewarding indeed for investors, as it affords them decisive battlefield superiority.

Moderator Xu Jie, computing lecturer at the University of Leeds (UK), underlined how terrorists will also increasingly employ AI thanks to the circulation of technological know-how.

The role of AI in supporting intelligence processes -from data gathering to analysis- is fundamental 

Atsushi Sunami, President of the Ocean Policy Research Institute of the Sasakawa Peace Foundation (Japan), also agrees on the essential role played by AI in the broader context of intelligence. In Beijing, he focused his own intervention on the main applications of AI in the military and security fields. One other aspect he touched upon refers to so-called ‘social life intelligence’, which gathers information on individuals’ preferences, interests, personal choices, political tendencies, opinions and so on, data upon which governments can determine and enact policies (with respect to societies at war, or their own people).

Sunami also hinted at the potential of AI when specifically applied to delimited areas, such as airports or other targets, or to wider areas such as urban zones, which can be further enhanced by means of integrated systems at the national or transnational level.

Last but not least, the speaker further discussed how military power can greatly benefit from the integration of weapons systems with AI, and from the latter’s support in the successful management of emergencies and natural disasters.

Further development of the private sector remains paramount.
Sunami made specific reference to the role played by start-ups which have been active in the business of game development software, and which helped create a whole new branch of research. We can therefore aptly understand how the diffuse application of AI allows us to acknowledge the potential of high-tech in contexts where dual-use (civil-military) is synonymous with effectiveness and long-term financial sustainability.

(translation: Chiara Sulmoni)


Latest from the ‘5+5 Defence Initiative’

ILLEGAL IMMIGRATION AND CRIMINAL NETWORKS TOOK CENTRE STAGE IN 2018

Tunis, 5th October

The ‘5+5 Defence Initiative’ wrapped up its latest 2018 research meeting in Tunis on 5th October.

The international study group, dedicated to identifying shared security concerns, focused its work on the threat posed by illegal immigration, organised crime and terrorist groups in the Mediterranean. A year-long, in-depth analysis resulted in an internal research document suggesting approaches and solutions to try and contain criminal networks. Libya and the consequences of its domestic instability received specific attention.
The ‘5+5 Defence Initiative’ brings together appointed researchers from Algeria, France, Italy, Libya, Malta, Morocco, Mauritania, Portugal, Spain and Tunisia, who in 2018 were coordinated by Dr. Andrea Carteny from CEMAS – Università la Sapienza, Roma.
Italy was represented by CeMiSS’ Strategic analyst Dr. Claudio Bertolotti, who is also START InSight’s Executive Director.
Official research documents emerging from these regular joint meetings pave the way for discussions among Defence Ministers. The latest paper is due to be delivered next December.