Terrorism and Drones: the next challenge
by Ginevra Fontana, Osservatorio ReaCT
original article published on Osservatorio Strategico Ce.Mi.S.S. 4/2019
Remotely piloted aircraft (RPAs) have witnessed an exponential increase in sales in recent years. Small RPAs for photography and videography purposes can now be afforded by the average consumer, as their price is akin to that of digital cameras with similar specifications. Even at factory settings, an RPA comes equipped with an advanced image-capturing system and offers a higher payload capacity, which allows for better stability and manoeuvrability.
Even in unaltered form, such a drone can already be used to infringe privacy and/or no-fly-zone legislation. In the United States, the problem of identifying the people responsible for breaking flight law has already manifested itself; in Europe, the cases of drones hovering over Gatwick and Heathrow airports have highlighted their inherent potential to cause confusion and inconvenience.
Geofencing systems, with which RPAs are equipped, have proven fallible. Removing the GPS components appears to be a simple process, after which the drone can be manoeuvred solely on the basis of what the integrated image-capturing system transmits. It is also possible to disable the geofencing software through programmes that can be found online; in some cases, the geofencing even appears to be virtually non-existent.
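The principle is simple enough to sketch. Below is a minimal, purely illustrative point-in-circle geofence check; the zone coordinates, radii and function names are hypothetical assumptions, not those of any real drone firmware. The sketch also makes clear why removing the GPS component defeats the mechanism: without a position fix there is nothing to check.

```python
# Illustrative geofence check: a no-fly zone is modelled as a circle around
# a point, and the drone refuses to enter only while it knows where it is.
# All values and names are hypothetical; real geofencing databases and
# firmware are far more complex.
import math

NO_FLY_ZONES = [           # (lat, lon, radius_km) - assumed example values
    (51.1537, -0.1821, 5.0),   # e.g. an airport perimeter
]

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def may_fly(position):
    """Return True if the drone is allowed to fly at `position` (lat, lon).
    With no GPS fix (position is None) there is nothing to check, which is
    exactly why removing the GPS component defeats geofencing altogether."""
    if position is None:
        return True
    lat, lon = position
    return all(distance_km(lat, lon, zlat, zlon) > zr
               for zlat, zlon, zr in NO_FLY_ZONES)
```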
In the Syrian and Iraqi theatres of operations, the first reports regarding the use of ‘drones’ or ‘armed drones’ by ISIS date back to 2014. They were used to spy on the movements of US and Kurdish lines during the 2014-2017 battles, to drop explosives, and as ‘kamikaze drones’. Multiple factors have allowed ISIS to include RPAs in its arsenal, notably the ease of purchasing second-hand products online. Their dimensions and flight altitude rarely trigger radars or protective shields; for the same reasons, they are also difficult to spot or engage by personnel on the ground. Tampering with them is uncomplicated, and they can be weaponised in various ways. Last but not least, they can provide images of their own activities: videos can be used for propaganda purposes, as was already the case in late 2017.
RPAs for videography and photography purposes are the type exhibiting the greatest potential to become a national security issue. The role of Defence is fundamental in identifying possible short-, medium- and long-term solutions to guarantee the protection of the civilian population. A concerted approach by the Armed Forces and Law Enforcement would be desirable.
Identifying potentially sensitive targets (harder to determine than critical infrastructure) is one of the first problems that arises. ‘Covering’ the whole national territory with anti-drone systems is an objective currently out of reach in terms of timing, cost and level of technology.
The need arises to develop an integrated, fully automated search, detection and identification system, for two main reasons. First, the technologies presently available on the market do not present a satisfactory cost-benefit ratio, considering the investment needed to acquire them; secondly, a fully automated system has the capacity to resist saturation by removing the man-in-the-loop element, anticipating future attacks conducted by swarms. Particular attention should be paid to rapidity of reaction and intervention, which is interconnected with the question of engagement.
The long-term objective should be the development of systems acting upon the drone’s control algorithms so as to “steal” it and land it in a safe zone. The danger, in fact, lies in an RPA armed not only with explosives but with CBRN charges as well. Protocols including the creation of a quarantine zone are needed to safeguard both the civilian population and specialised personnel.
Since the development of such a system is not achievable in the short term, existing possibilities need to be analysed on a cost-benefit basis. In conducting said analysis, account must be taken of the type of command used for the drone (whether remotely controlled or following a pre-set route) and the type of armament (whether the release of the charge is activated through the remote control, automatically when the drone is above certain pre-set coordinates, or by a timer).
There are four possible outcomes. Having lost connection to the remote control, the drone either enters a mid-air stalemate (it is essentially frozen), automatically returns to the last known remote-control position, or lands. If a fail-safe system is not in place, it crashes to the ground: in this case, if armed with explosive charges, it could detonate; if armed with CBRN ones, it could contaminate the area.
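The four outcomes reduce to a simple decision rule. The sketch below is a hypothetical illustration; the function and mode names are assumptions made for clarity, not any manufacturer’s actual failsafe logic.

```python
# Hypothetical sketch of the four loss-of-link outcomes described above.
# Function and mode names are illustrative assumptions, not any vendor's API.

def on_link_lost(has_failsafe, failsafe_mode="land"):
    """Return the drone's behaviour once the remote-control link is lost."""
    if not has_failsafe:
        # No fail-safe: the drone crashes; an explosive payload could
        # detonate, a CBRN payload could contaminate the area.
        return "crash"
    if failsafe_mode == "hover":
        return "mid-air stalemate"   # essentially frozen in place
    if failsafe_mode == "return":
        return "return to last known remote-control position"
    return "land"                    # default: controlled landing
```

From a counter-drone perspective, the first three branches are the ones a jamming-based response gambles on: the fourth is precisely why jamming an armed drone can be dangerous.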
A first option for anti-drone technology might involve the use of jammers, translating their established use as counter-IED systems in conflict areas. The impact on civilian technologies and infrastructure, if used in urban environments, remains to be evaluated on a case-by-case basis. Considering the relatively short distance and duration of drone flights with malevolent potential, the absence of a jamming system in loco, whether portable or fixed, could mean a missed engagement. Fixed systems in urban environments, however, present problems regarding background noise.
A second hypothesis would be the use of conventional ballistic weapons, possibly loaded with net projectiles, with the intent of shooting the drone down. This should only be considered a last-resort option, because of the aforementioned risks concerning the type of armament. Undoubtedly, a danger to the civilian population persists if the menace materialises in crowded areas.
A third option would be using birds of prey. The reactivity of these animals and their low cost make them a competitive short-term solution. A falconry nucleus is estimated to cost at most around fifty thousand euros; with a budget of three million euros, the installation of some sixty falconry nuclei would therefore be feasible. The cost of maintaining a single nucleus appears not to exceed a few tens of thousands of euros per year.
A fourth option would involve weapons emitting radio frequencies. The specifications of an American-manufactured system appear quite interesting, yet it falls within the category requiring prior authorisation by the Federal Communications Commission before it can be sold or rented to non-federal users.
Lastly, directed-energy systems are increasingly attracting interest: an example is the Counter Unmanned Aerial System (C-UAS) provided to the Italian Air Force’s Fucilieri, 16° Stormo, during Russian President Vladimir Putin’s visit to Rome in July 2019. It was described as a detection system equipped with devices to electronically interdict flight.
In the short term, the most feasible solution would be setting up a pilot experiment using a falconry nucleus to monitor exceptional situations involving a high concentration of people and, if need be, intervene: for example, Sunday Mass at the Vatican or the future Milano-Cortina 2026 Winter Olympics.
Information exchange amongst the Armed Forces, intelligence and Law Enforcement should be intensified. To predict possible future trends, attention should focus on alterations, found online, that hobbyists, enthusiasts and/or ill-intentioned actors describe as “feasible”. One should avoid the reasoning whereby a possible modification, because it does not currently work, does not represent a future menace: once an idea concerning the malevolent use of an RPA is put out, it should be considered feasible, whether in the short term or in a more distant future.
 Colloquially referred to as “drones” and also known as UAVs (Unmanned Aerial Vehicles), UASs (Unmanned Aerial Systems), or – in the Italian version – APRs (Aeromobili a Pilotaggio Remoto).
 European Commission (2014), “Remotely Piloted Aviation Systems (RPAs) – Frequently Asked Questions”, p. 2 Link: https://bit.ly/2J2gmX9
European Aviation Safety Agency (2016). “Explanatory Note”, Prototype Commission Regulation on Unmanned Aircraft Regulation, p. 13. Link: https://bit.ly/2IZFpKq
 [Table: RPA Classification, with columns for operative radius (km), flight height (m) and flight duration (h); only the value 150 – 300 is recoverable.]
 An entry-level Canon reflex, such as the EOS 1300D, is sold on the official Canon website at €470.99. In the same price range fall cameras from other brands, such as the Nikon D3400. DJi, the Chinese company leading drone-making, prices its drones from the Phantom and Mavic series, the world’s best-selling, between 500 and 1,200 dollars. In 2017, DJi held more than 36% of the North American market.
Chandler, C. (2017). “For China’s high-flying drone maker, the sky’s the limit”, Fortune. Link: https://bit.ly/2vt9BWr
Glaser, A. (2017). “DJi is running away with the drone market”, Recode. Link: https://bit.ly/2nNIhkd
 An example is Casey Neistat’s case in Manhattan.
P.A. Aitken (2017) “Copy of FAA message sent. Casey Neistat investigation lacks conclusive evidence”, Taitkenflight. Link: https://bit.ly/2W2f5SY;
Andy (2017) “EXCLUSIVE: Details of Casey Neistat’s FAA investigations”, Andy’s Travel Blog. Link: https://bit.ly/2TfKoli.
 BBC (2018), “Gatwick airport: Drones ground flights”, BBC. Link: https://bbc.in/2EvX5uW
BBC (2019), “Heathrow airport drone investigated by police and military”, BBC. Link: https://bbc.in/2Hs4768
BBC (2019), “Heathrow airport: Drone sighting halts departures”, BBC. Link: https://bbc.in/2RokRAL
 “Geo-fencing is the concept of restricting drone access by designating specific areas where the drone’s software and/or hardware is designed not to enter, even if the pilot, without intent, instructs the drone to go” European Aviation Safety Agency (2015), “Concept of Operations for Drones…”, ibidem.
 Ryan Whitman (2017) “Russian Company Is Selling Mods to Bypass DJI Drone Safety Features”, Extreme Tech. Link: https://bit.ly/2YCHFj6
 Interviews conducted with drone enthusiasts have highlighted that, when a DJi drone flies in proximity to a no-fly zone, the operator is alerted through a pop-up. If the operator accepts the alert, the drone continues to function; it has yet to be verified whether and how the geofencing system would then behave. The text of the pop-up alert appearing when using a DJi drone is reproduced below:
“No-Fly Zones. There are 1 Authorization Zone(s) nearby. Authorization zone type: Military Facility(Military Zones). Your aircraft may experience RTH interruption, hovering, or Intelligent Flight Mode cancellation. Please fly with caution. Do you wish to apply for Self-Unlocking to access these zones? No / Yes”
 “[ISIS] is a long-term political project with mobile borders […] The fruit of the ideas of Abu Musab al-Zarqawi, proclaimed a ‘Caliphate’ on 29 June 2014 by Abu Bakr al Baghdadi, it has redrawn the geography of the Middle East, erasing the borders of Iraq and Syria produced by the Sykes-Picot agreements of 1916. It projects itself against the post-colonial states that arise within the map of ‘Bilad al Sham’, the legendary Arab nation of the Levant corresponding to the present-day territories of Iraq, Syria, Jordan, Lebanon, Israel and the Palestinian National Authority” (translated from the Italian), cit. M. Molinari (2015), “Il Califfato del terrore. Perché lo Stato Islamico minaccia l’Occidente”, Rizzoli, pp. 10-11.
 Peter Bergen & Emily Schneider (2014) “Now ISIS has drones?”, CNN. Link: https://cnn.it/2SMwMWm
Ben Watson (2017) “The Drones of ISIS”, Defense One. Link: https://bit.ly/2YmIus0
Mike Peshmerganor (2018), Blood Makes the Grass Grow: A Norwegian Volunteer’s Fight Against the Islamic State, Independently Published.
 L. E. Davis et al. (2014) “Armed and Dangerous? UAVs and U.S. Security”, RAND Corporation. Link: https://bit.ly/2LMqWUu
 The video referred to here was circulated on the internet through the ISIS-affiliated Amaq agency and spread by ABC News (https://ab.co/2Ybr6en). It showed a drone dropping munitions over a Syrian arms depot. Although the author is sceptical regarding the authenticity of the images themselves, the potential for propagandistic use of these technologies remains undeniable. Link to the video: https://bit.ly/2Yxz9BH
 Only two actors appear to be – at the time of writing – equipped with jamming systems in Italy: central Police services, as they are the ones involved in cases of specific necessity; and offices where classified information is discussed, which undergo periodic checks.
 Legislative Decree n. 61, 11 April 2011, in actualization of Directive 2008/114/CE concerning individuation and designation of European critical infrastructures and evaluation of the necessity to implement their protection.
Legislative Decree, in Italian: https://bit.ly/2NRjMQj
European Directive: https://bit.ly/2Y6pUZ8
 “Human-in-the-loop (HITL). A model that requires human interaction.” Cit. USA Department of Defense (1998), “DoD Modeling and Simulation (M&S) Glossary”, DOD 5000.59-M, p. 124 (emphasis in the original).
 “UAV swarms, inspired mainly by the swarms of insects, are groups of small independent unmanned vehicles that coordinate their operations through autonomous communications to accomplish goals as an intelligent group, with or without human supervision. It may be a heterogeneous mix of machines with dissimilar tasks but contributing synergistically to the overall mission objectives”, cit. Puneet Bhalla (2015), “Emerging Trends in Unmanned Aerial Systems”, Scholar Warrior, Autumn 2015, p. 89.
 Chemical, Biological, Radiological and Nuclear.
 Definitions of “fail-safe” —
(American English): adj. “[D]esignating, of, or involving a procedure designed to prevent malfunctioning or unintentional operation […]”.
(British English): adj. “Something that is fail-safe is designed or made in such a way that nothing dangerous can happen if a part of it goes wrong”.
Collins Dictionary, link: https://bit.ly/2Y98T1i
 In summer 2019, during a drone race in Turin, a hacker attack on the organisers’ Wi-Fi made the operators lose control of their drones. All remotely-controlled RPAs were operating on the same Wi-Fi network, provided by the organisers; attacking this infrastructure was therefore a cyberattack with no direct effect on the drones themselves, but rather, broadly speaking, on their wireless communication. The reported “going crazy” of the RPAs is explained by the fact that these were homemade racing drones, presumably with no fail-safe system, already travelling at high speed when they were disconnected from their remote controllers.
Alessandro Contaldo (2019), “Attacco hacker alla drone race: i quadricotteri fuori costretti ad atterraggi di emergenza”, La Repubblica. Link: https://bit.ly/2NPVGv
 Here below follow a few definitions —
“An improvised explosive device (IED) is a type of unconventional explosive weapon that can take any form and be activated in a variety of ways. They target soldiers and civilians alike. In today’s conflicts, IEDs play an increasingly important role and will continue to be part of the operating environment for future NATO military operations. NATO must remain prepared to counter IEDs in any land or maritime operation involving asymmetrical threats, in which force protection will remain a paramount priority.” in NATO (2018), Improvised explosive devices, www.bit.ly/2Ykd4qb.
“Electronic Warfare: The use of electromagnetic (EM) or directed energy to exploit the electromagnetic spectrum. It may include interception or identification of EM emissions (e.g. SIGINT), employment of EM energy, prevention of hostile use of the EM spectrum by an adversary, and actions to ensure efficient employment of that spectrum by the user-State. An example of electronic warfare is radio frequency jamming” in Michael N. Schmitt, editor (2016), Tallinn Manual 2.0 on the international law applicable to cyber operations, Cambridge University Press, p. 565 (emphasis in the original).
 The use of (civilian) jammers is legal in Italy, as long as the limits set by law concerning emissions and exposure are respected and they do not cause an interruption of public service (art. 340, Italian Penal Code). The Armed Forces and Law Enforcement can use them in exceptional cases, i.e. when they operate in deroga (lit. notwithstanding the current regulation), e.g. for reasons of public safety, protection of personalities, public order and the like.
 As could be, e.g., the Wilson handgun jammer.
 COMFOTER SPT (2018), “Sperimentazione antidrone del COMACA”, Esercito. Link: https://bit.ly/2HeeZnR
Stato Maggiore Esercito (2018), “Sperimentazione antidrone del COMACA”, Difesa Online. Link: https://bit.ly/32Xf9b5
Maurizio Tortorella (2019), “Abbattete quel drone”, Panorama. Link: https://bit.ly/2GwHUBF.
According to unofficial reports, these exercises were conducted using a 12-gauge Beretta shotgun.
 This amount was chosen on purpose. Apparently, the Israeli ‘Drone Dome’ system used at Gatwick airport, against the drone which caused the stop of air traffic, cost the United Kingdom 2.6 million pounds (at the time of writing, equivalent to nearly 2.9 million euros).
Joe Pinkstone (2018), “The £2.6m Israeli ‘Drone Dome’ system that the Army used to defeat the Gatwick UAV after the technology was developed to fight ISIS in Syria”, Daily Mail Online. Link: https://dailym.ai/2T4PKXb
 As experts estimated during interviews.
 Reference is hereby made to the DronekillerTM, a product of IXI Technology. Company website: https://bit.ly/30ZSOaU
IXI Technology, document on Dronekiller specifics: https://bit.ly/2Ykc5ax
 “[…] a radar detection system equipped with devices and day and night optics for the electronic interdiction of flight” (translated from the Italian). Cit. Ministero della Difesa / Stato Maggiore della Difesa (2019), “Le Forze Armate concorrono alla cornice di sicurezza per la visita del Presidente Putin”, Difesa. Link: https://bit.ly/2YzxkF4
All web links indicated in the present document have been last accessed on September 27, 2019.
Swarming and Machine Teaming – Defence Future Technologies DEFTECH
A workshop in Thun (Switzerland) to assess state-of-the-art technology and research
Chiara Sulmoni reports.
On Wednesday, 21st November 2018 armasuisse S+T (Science and Technology) organised a day-long international workshop for military personnel, researchers, specialists and company representatives, addressing the subject of ‘swarming and machine teaming’.
The event is part of a series which armasuisse Science and Technology organises on a regular basis under the DEFTECH (Defence Future Technologies) trademark. These high-profile meetings allow military and civilian experts to share insights, anticipate technology trends and make informed decisions in the field of security.
Swarming indicates the deployment of low-cost, autonomous elements acting in coordination with one another to carry out a specific task; these are generally small drones or robots.
Swarms are common in nature, where many species (birds and fish, for instance) move or operate in vast groups. Research into ‘artificial’ swarming therefore often starts with the observation and study of animal behaviour.
Swarming is dual-use, meaning that it can take shape in the civilian environment (for instance, with commercial drones flying in formation) or the military one, where it is principally a battlefield tactic and is associated with the issue of lethal autonomous weapons (LAWS), whose ethical aspects are discussed at UN level. Given the rapid development of the technology, and the lack of an efficient defence system should a swarming attack take place, armasuisse wished to gain a better understanding of the related challenges and risks. The workshop was therefore aimed at getting to know the state of the art within this domain, and experts from different fields were called in to provide their perspectives. What follows is a brief report of some key points touched upon during the meeting, which was organised under the supervision of Quentin Ladetto, Research Programme Manager at armasuisse S+T, and introduced by Director Thomas Rothacher.
Switzerland is a global leader in drone technology. Markus Hoepflinger from the Swiss Drones and Robotics Centre (affiliated to armasuisse S+T) was keen to underline right from the start that it is not only domestic and foreign media that dub the country the “Silicon Valley of robotics” or the “drone centre of the world”: the Federal Department of Foreign Affairs itself is an eager proponent of Swiss expertise (for more information, visit www.homeofdrones.org). Swiss research involves academic and technical institutes in all regions, as well as industry. Today’s environment is mainly mobile robotics, with the strongest capability being autonomous flight. A series of potential future military applications are nevertheless being looked into, with a view to enhancing search and rescue operations, for instance, or for engineering work. Markus Hoepflinger also explained that swarming could dominate future wars, with experiments underway in Russia, China, the US, Israel and, to a lesser extent, the EU (e.g. the Royal Air Force and Airbus). But drone warfare is not yet happening, despite what has been (questionably) described as the first swarm attack in Syria against a Russian base in January 2018.
Despite rapid progress in all fields, Vincent Boulanin of the Stockholm International Peace Research Institute (SIPRI) emphasized how misconceptions and myths around autonomous systems and artificial intelligence represent a problem, insofar as they tend to make policy discussion unproductive and blind us to the true possibilities and limitations of these technologies. Programming machines for general tasks is difficult, as they cannot generalise from previous situations: while they do process images and understand words (consider the mobile phone application ‘Siri’, for instance), common sense is not one of their assets. Autonomous navigation, moreover, is very context-dependent, with air or underwater environments presenting fewer obstacles compared to land. ‘Teaming’ is an important aspect of swarming, as machines must communicate with each other and with their operator; these systems can share information and perform collaborative tasks (such as flying together, completing surveillance assignments, or inspecting buildings in simple environments). But machine-human communication is not symmetrical, and finding the right ratio can also be complex. As pertains to the military field proper, Boulanin pointed out that targeting remains the most critical application of autonomy, as systems do not have the ability to distinguish between a civilian and a military target. In the end, autonomy is much easier to achieve for commercial applications.
Martin Hagström from the Swedish Defence Research Agency underlined how having ‘many of something’ does not in itself constitute a problem; the objective is to be able to deploy cheap, efficient sub-systems with a reduced number of ground operators. He also recalled that the antagonist’s perspective is considerably different from the civil one. Swarms rely on satellite navigation (GPS) and are therefore vulnerable to attacks by adversaries with a high level of technological mastery, who could disrupt communication in a contested environment. ‘Robust’ systems are quite expensive, and Hagström is therefore persuaded that it might take some time before swarming can be adopted by the military. Other issues to take into account when thinking of flying objects are flight safety rules and policies (air space is not free) and, last but not least, the complexity of testing. Stability and predictability are paramount in military applications, and because a system acts within its own designed space, the aim of autonomy is to make that design space very large, so that it may include many potential events. But outside of (software-based) simulation, testing a system remains hard.
Georg Dietz works for the German group IABG mbH and focuses on military airborne platforms. The expert explained that air operations today are increasingly complex for a number of reasons: the sheer number of players in the world, faster conflict dynamics, the speed of technological advances and information exchange, the rapid growth of sensor ranges, and so on. Capabilities such as platforms or systems can be insufficient while costs are high, with each new fighter aircraft, for instance, being at least twice as expensive as its predecessor. Future combat air systems will be designed as a system of systems (SoS) consisting of a variety of different components, both manned and unmanned, enabling swarming operations. Design and control open up a series of questions, though, as to the number and type of platforms needed, the degree of autonomy and technology gaps; communication in highly contested areas; the human-machine interface; and so on. Nevertheless, swarming represents a nearing future in air operations.
Jean-Marc Rickli from the Geneva Centre for Security Policy (GCSP) expounded the concept that swarming is the fifth evolution of military strategy and that, together with autonomy, it represents a key characteristic of the battlefield of the future. Other strategies (or ways to use force) are ‘denial’, whose main target is the military; ‘punishment’, which hits civilians and infrastructure to exert indirect pressure (terrorism features as punishment); ‘risk’, consisting in threatening an escalation, e.g. the US-USSR Cold War; and ‘decapitation’, which relies on technology such as drones to eliminate the enemy leadership. But a large number of small units with sensory capabilities, easy to manoeuvre and able to act in coordination (such is the description of a functioning swarm), can concentrate firepower, speed and forces in a way previously unseen. Swarming tactics are a means to wage asymmetric wars, and cyber manifestations of them have already been encountered. 3D printing of gun components and drones will have important implications, explained the expert. In 2017 in Mosul, several Iraqi soldiers were killed by drones operated by ISIS in what was the first instance of the West losing tactical aerial supremacy. Should swarming become a mainstream strategy, we should expect a more conflictual international environment, concluded Rickli.
Marco Detratti from the European Defence Agency (EDA) underlined how, according to estimates, the market for autonomous systems’ products and technology in non-military sectors will be in the order of €100bn by 2025, with defence playing only a minor part. But swarms have disruptive potential in many fields, and while defence is not yet impacted, it nevertheless expects to be in the future. In defence (from a non-offensive perspective), swarms can change and improve capabilities. Specifically, they can offer ubiquity, resilience and invisibility, and are therefore taken into consideration for all tasks and in all domains: land, air, maritime and cyber. From swarms, the military expects cost reduction, a decrease in manpower and risk, and technical advantages. Since 2010, the EDA has been trying to identify scenarios where swarm and multi-robot systems could ‘deliver’, and has started a series of projects accordingly. Despite technical evidence of feasibility and noteworthy research, problems and challenges persist: Detratti went on to explain that there are no real autonomous systems in operation; systems are not resilient enough (power consumption); they are not ‘smart’ enough; and more progress is needed in testing the unpredictable (to be sure, for instance, that things continue to work when communication is interrupted, and that information is not manipulated). There are also non-technical issues to take into account, such as the need for a big shift in military culture, doctrine and training; public perception; and ethics.
Autonomous (lethal) weapons have been raising ethical issues for years. George Woodhams gave an insight into the discussions and initiatives taking place at UN level and within UNIDIR (the UN Institute for Disarmament Research), which has been dealing with UAVs (unmanned aerial vehicles) since 2015. A specific concern regards the use of Reaper and Predator drones. The Institute has been encouraging the international community to consider what new challenges may emerge from the proliferation of this technology, and it also looks into the strategic implications of unmanned systems. An issue for the UN to consider in the long term is whether, due to their low risk and cost of deployment, these systems might lead to problematic military practices. Woodhams went on to illustrate lines of debate within the framework of the Convention on Certain Conventional Weapons, a UN negotiating body designed to address weapons systems with implications for international humanitarian law. A Group of Governmental Experts to address Lethal Autonomous Weapons Systems (LAWS) was established in 2014, with military advisors regularly invited in. It focuses on what is called ‘meaningful human control’ and its ethical foundations, such as retaining human agency in decisions over the use of lethal force, preserving human dignity and ensuring human accountability. Talks can be difficult, as the 84 States involved in the discussions have different military capabilities and levels of insight, but everybody seems to agree on the need to identify best practices and practical measures for improving compliance with international law. Though swarming has not been mentioned specifically over the last four years, concluded Woodhams, it is the one area of autonomy that catches the imagination the most.
So much for the implications of the concept of swarming; now for the practical side of understanding the many ways in which it can take shape. There is a flurry of exciting and ground-breaking research going on in laboratories, aimed at addressing limitations and constraints with a view to developing a higher degree of autonomy and coordination.
We have already mentioned how research takes its ‘inspiration’, so to say, from nature. In introducing his line of work, Nicolas Bredeche from Pierre and Marie Curie University explained that methods used to study natural systems (like animal behaviour) can also be used to study artificial systems, and that solutions for artificial systems are often a simplified version of what can be observed in nature. Bredeche oversees research on ‘adaptive mechanisms for collective decision-making in populations of simple individuals’ (such as insects or small animals). Simply put, he tries to understand the principles of collective behaviour, see how single members adapt to group strategies, and reproduce these in the lab in a way that is useful for artificial intelligence. With tigerfish and collective hunting as models, his studies reveal the importance of symbiotic behaviour and lead to the conclusion that a version of natural selection, with the ‘fittest’ individual winning over the rest of the population, can be transferred into robotics as well.
Dario Floreano from the Swiss Federal Institute of Technology in Lausanne described how animals in a swarm use different types of sensors, such as vision, a magnetic compass for orientation, and noise; they can also make use of local information, unlike drones, which rely on information from ‘vulnerable’ GPS. The question is: can we have swarms that, while resorting to available technology like GPS, also follow their own rules instead of being controlled by a computer on the ground? Floreano recalled how the computer-graphics rules for animating swarms with a certain degree of autonomy were already laid down in the 1980s by Craig Reynolds. Briefly put: when a drone is too close to the others, it moves away (repulsion); when a drone is flying in a different direction with respect to the rest of the flock, it tends to align with the others; and when a drone is too distant, it is attracted towards them. But other variables, such as the ability to communicate, power capabilities (batteries) and agility (quadcopters vs. fixed-wing drones), can greatly affect swarming and continue to be actively researched. Most importantly, one strand of Floreano’s research (commissioned by armasuisse and related to rescue drones’ ability to operate without GPS) has confirmed that sensor-based flight is possible and deserves attention.
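Reynolds’ three rules (repulsion, alignment, attraction) can be sketched in a few lines. The toy implementation below is a two-dimensional illustration only; the thresholds and weights are arbitrary assumed values, not parameters of any system discussed at the workshop.

```python
# Toy sketch of Craig Reynolds' three flocking rules in two dimensions:
# separation (repulsion), alignment, cohesion (attraction).
# All thresholds and weights are arbitrary illustrative assumptions.
import math

SEPARATION_RADIUS = 1.0   # closer than this -> steer away
COHESION_RADIUS = 5.0     # farther than this -> steer closer

def steer(pos, vel, neighbours):
    """One velocity adjustment for a drone, given its (x, y) position and
    velocity and a list of (position, velocity) pairs for its neighbours."""
    sx = sy = cx = cy = ax = ay = 0.0
    for (nx, ny), (nvx, nvy) in neighbours:
        dx, dy = nx - pos[0], ny - pos[1]
        d = math.hypot(dx, dy)
        if d < SEPARATION_RADIUS:        # repulsion: move away
            sx -= dx; sy -= dy
        elif d > COHESION_RADIUS:        # attraction: move closer
            cx += dx; cy += dy
        ax += nvx - vel[0]               # alignment: match the
        ay += nvy - vel[1]               # flock's average heading
    n = len(neighbours)
    return (sx + 0.1 * ax / n + 0.01 * cx,
            sy + 0.1 * ay / n + 0.01 * cy)
```

Even this minimal version exhibits the three behaviours Floreano described: a lone distant neighbour pulls the drone towards it, a too-close neighbour pushes it away, and a neighbour flying in another direction bends its heading.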
Cooperation and teaming (human-robot-dog) in rescue operations in disaster areas is also a line of research at the Dalle Molle Institute for Artificial Intelligence (Lugano). Within this context, maintaining connectivity, both within the swarm and between drones and people, is crucial. Researcher Alessandro Giusti explained how another important strand of work focuses on interaction between humans and robots; specifically, on exploring ways to exert control over a drone. The lab came up with the idea of pointing at it, an easy, quite natural gesture for people; the technological options for implementing this solution are wearable interfaces like bracelets, laser pointers, or a smartwatch, which make it possible to direct the robot to perform its task by moving one’s arm. Vision-based control is also being actively tested.
From human-robot interaction to situational awareness. This is the project Titus Cieslewski (University of Zurich) is involved in. The motivating question: how can drones know where they are in a hypothetical situation where a team of agents operates in an unknown environment, they cannot see each other directly (unlike in classic swarms!) and the further they move in exploration, the harder it becomes to communicate? GPS, explained Cieslewski, does not work indoors, can be reflected in cities, and is subject to jamming and spoofing in a military context (jamming and spoofing are part of electronic warfare and consist, respectively, in disrupting the enemy’s wireless communication and in sending out false positioning data). Computer vision can offer a way out, the researcher maintained: through the images captured by their cameras, drones can build ‘sparse visual maps’ resulting from processes like place recognition, pose estimation and optimisation. Cieslewski is currently trying to reduce the amount of data exchanged in the process, which would make it possible to enlarge the team of robots.
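As an illustration of how the data exchanged for place recognition can be reduced, consider compressing image descriptors into short binary codes via random-hyperplane hashing and matching places by Hamming distance. This is a generic sketch of the idea, not Cieslewski’s actual scheme; the descriptor dimensionality and bit count are arbitrary assumptions.

```python
import random

# Instead of transmitting a full 128-float image descriptor, each robot
# sends a 32-bit code: for each of 32 shared random hyperplanes, one bit
# records which side of the hyperplane the descriptor falls on.
DIM = 128   # descriptor dimensionality (assumption)
BITS = 32   # bits per transmitted code (assumption)

_rng = random.Random(42)
# One fixed random hyperplane per bit, shared by all robots.
_PLANES = [[_rng.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def hash_descriptor(desc):
    """Compress a DIM-dimensional descriptor to a BITS-bit integer."""
    code = 0
    for i, plane in enumerate(_PLANES):
        dot = sum(d * p for d, p in zip(desc, plane))
        if dot > 0:
            code |= 1 << i
    return code

def hamming(a, b):
    """Number of differing bits between two codes; small = likely match."""
    return bin(a ^ b).count("1")
```

A 32-bit code in place of 128 floats is roughly a sixteen-fold reduction in bandwidth per descriptor, which is the kind of saving that allows a larger team of robots to share the same communication channel.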
Artificial Intelligence and the evolution of warfare
Report on 8th Beijing Xiangshan Forum
by Claudio Bertolotti
The 8th Beijing Xiangshan Forum unfolded in China from 24th to 26th October.
The event is organised on a yearly basis by the host government’s Ministry of Defence, which invites international partners and representatives to discuss global security issues.
In 2018, the Italian delegation appointed by Defence Minister Elisabetta Trenta was led by Fabrizio Romano (Minister Plenipotentiary), Maurizio Ertreo (Director of the Military Centre for Strategic Studies – CeMiSS) and Claudio Bertolotti (Head of Research at CeMiSS).
The present article reports on the 4th session which took place on 25th October, and which focused on Artificial Intelligence and its impact on the conduct of war.
A previous entry summarized some of the speakers’ views regarding the military applications of AI.
Here, we will examine the role of AI in the next phase of the Revolution in Military Affairs (RMA), in other words the evolution of warfare, which bears direct consequences on the very concept of war and on the decision-making process. «A true revolution», according to ANM Muniruzzaman, President of the Bangladeshi Institute of Peace and Security Studies, «a revolution to the deadly detriment of those who do not adjust to AI’s offensive and defensive capacities».
Maj. Gen. Praveen Chandra Kharbanda, a researcher with the Indian Centre for Land Warfare Studies, introduced his speech by emphasizing AI’s potential to impose radical change on the RMA.
For instance, it can aptly support the decision-making process by providing a prompt analysis of all primary and secondary factors that could affect strategic and operational planning. Furthermore, the combination of electronic warfare and cyber capacity grants an extraordinary offensive and defensive military leverage, as it allows a thorough monitoring of enemy targets without exposing one’s own pilots and recognition assets to risks and threats.
The same thing applies to critical infrastructures, whose security and safety can still be guaranteed with limited resources, be it in terms of soldiers or equipment. Within this context, the deployment of (partially or totally) remote controlled or AI controlled robots, without entirely replacing troops on the battlefield, nevertheless becomes instrumental in supporting them; and represents a technological and cultural development which, in asymmetric conflicts above all, can still safeguard the human component’s primacy.
On the virtual level, an ever more realistic wargaming activity takes place, which greatly benefits from AI in terms of both training and planning. And as yet another dimension of the contemporary battlefield, social media represent a great opportunity for surveillance and analysis, in spite of the looming threat of mass control. The speaker concluded his intervention by underlining how, with specific reference to wargaming, the private sector plays a fundamental role.
Supremacy in the intelligence sector is what separates winners from losers on the global battlefield. And this is where AI makes a difference.
In his intervention, Zeng Yi, Vice-Director General of China North Industries Group Corporation Limited (NORINCO GROUP), explained that traditional ‘mechanised’ combat systems are undergoing great and rapid developments thanks to AI, while cyberwarfare also grows in efficiency. As a consequence, command and control systems will increasingly be influenced by AI technology and capabilities, thus also requiring regular updating in the field of military affairs. Automated systems will also increasingly play a leading role, particularly in training and direct combat. It is now clear, according to Zeng Yi, that «what separates winners from losers on the global battlefield is supremacy in the intelligence sector, where the support of Artificial Intelligence is becoming paramount».
«Artificial Intelligence is about to play its part in combat. But is it up to the task?» Such is the vexata quaestio that Zafar Nawaz Jaspal, Professor at the School of Politics and International Relations of Quaid-I-Azam University (Pakistan), indirectly put to his audience. His analysis went on to focus on the evolution of intelligence and the forthcoming tactical role of AI (which essentially translates as ‘battlefield-bound’); as for the strategic and operational roles, we are not there just yet, despite progress being made. The speaker then reminded his audience that, should a direct ground confrontation between two actors with equal military capabilities take place, AI would cease to represent a crucial factor. In his conclusions, Zafar Nawaz Jaspal called for further, urgent and permanent development of AI through investments, research and testing.
Artificial Intelligence can affect social behaviour by influencing and altering social structures and functions.
Leonid Konik, CEO of the Russian company COMNEWS Group, outlined how AI has made two key contributions to the military and intelligence fields: in the first place, it represents a launching platform for future, autonomous weapons; secondly, it is fundamental to problem-solving and decision-making processes.
Focusing his speech on the social implications of AI, the speaker illustrated how Artificial Intelligence could potentially be used to influence and alter social structures and functions, and to induce a change in individuals’ attitudes and opinions: an issue which clearly paves the ground for a critical analysis on ethical issues linked to certain applications of AI within RMA.
According to Konik, AI’s diffuse application does indeed induce changes in the social behaviour of populations which are subjected to remote-controlled surveillance. And it does not make a difference whether such control is exercised by an external actor (like an enemy or an influencer) or by one’s own government: citizens simply adapt their behaviour to the new situation. In the same way, AI can bring about shifts in the enemy’s attitude, specifically in operational and tactical terms; the Taliban in Afghanistan, for instance, reshaped their techniques and tactics as a result of the deployment of drones.
Can we figure out the impact of robots in asymmetric wars, in Iraq or Afghanistan for instance? How would that affect the mind of the enemy and of the local populations?
The degree of development and deployment of Artificial Intelligence is contingent upon an individual actor’s ethical issues and constraints, concluded the Russian speaker. But it is those who overlook ethics and push the boundaries of AI who will take the lead on the battlefield.
The military applications of Artificial Intelligence
A focus on the 8th Beijing Xiangshan Forum (24-26 October 2018)
by Claudio Bertolotti
The Beijing Xiangshan Forum which unfolds yearly in China is a venue where, upon invitation by the host government, international partners and representatives discuss global security issues.
In 2018, the Italian delegation appointed by Defence Minister Elisabetta Trenta was led by Fabrizio Romano (Minister Plenipotentiary), Maurizio Ertreo (Director of the Military Centre for Strategic Studies – CeMiSS) and Claudio Bertolotti (Head of Research at CeMiSS). The meeting took place from 24th to 26th October and included an interesting session on the subject of Artificial Intelligence and its impact on the conduct of war.
While a separate article on ‘Artificial Intelligence and the new phase of Revolution in Military Affairs (RMA)’ –a topic which examines the evolution of warfare- will soon follow, I herewith present a summary of interventions which dealt with military applications of AI.
NATO’s Deputy Secretary General Rose E. Gottemoeller, along with several other participants, emphasised the increasingly tight interconnection between intelligence and AI. She also pointed out how countering contemporary asymmetric threats will progressively require a sound use of AI, which can help, for instance, determine the size and position of troops and armaments belonging either to allies or enemies; evaluate the feasibility of military actions; and alter the conduct of operations depending on the evolving battlefield context. Gottemoeller then mentioned how a fundamental pillar of the Atlantic Alliance, Article 5, calls for mutual assistance also in the case of a cyber attack against a member State, as ratified at the 2010 Lisbon Summit and the 2014 Wales Summit. Last but not least, the Deputy Secretary General stressed the importance of AI in civilian contexts, i.e. in the identification of victims of terror or military attacks on the one hand, and of natural disasters on the other.
“Skynet”, which was launched in 2005, is today made up of no less than 170 million security cameras. By 2020, another 600 million are expected to be in place.
Lu Jun, a scholar from the Chinese Academy of Engineering, made reference to the central role played by AI within the frame of information systems, specifically for facial recognition purposes, and with a view to preventing and thwarting terrorist threats. He also recalled the paramount function of AI in supporting the development of unmanned aerial, surface or underwater vehicle technology.
Though the speaker did not mention this aspect, it is relevant to note how both these applications are of direct concern to the Chinese security industry, whose expansion is based on “Skynet”, a surveillance and facial recognition system which was launched in 2005 in Beijing and soon extended to cover the whole nation. The system is today made up of no less than 170 million CCTV cameras; another 600 million are expected to be in place by 2020. Once complete, this would amount to roughly one camera for every two people.
US researcher Gregory Allen, affiliated with the Centre for a New American Security, emphasized the role of AI in supporting intelligence processes, from data gathering to analysis, and reiterated in his turn that never before has the military been so tightly supported by AI. Specifically, he underlined how the increasing deployment of aircraft technology can indeed be rewarding for investors, as it affords them a decisive battlefield superiority.
Moderator Xu Jie, computing lecturer at Leeds University (GB) underlined how terrorists will also increasingly employ AI thanks to the circulation of technological know-how.
The role of AI in supporting intelligence processes -from data gathering to analysis- is fundamental
Atsushi Sunami, President of the Japanese Ocean Policy Research Institute – Sasakawa Peace Foundation, also agreed on the essential role played by AI in the broader context of intelligence. In Beijing, he focused his own intervention on the main applications of AI in the military and security fields. Another aspect he touched upon is so-called ‘social life intelligence’, which gathers information on individuals’ preferences, interests, personal choices, political tendencies, opinions and so on, on the basis of which governments can determine and enact policies (with respect to societies at war, or their own people).
Sunami also hinted at the potential of AI when specifically applied to delimitated areas, such as airports or other targets, or wider areas such as urban zones, and which can be further enhanced by means of integrated systems at the national or transnational level.
Last but not least, the speaker further discussed how military power can greatly benefit from the integration of weapons systems with AI, and from the latter’s support in the successful management of emergencies and natural disasters.
Further development of the private sector remains paramount.
Sunami made specific reference to the role played by those start-ups which have been active in the business of game development software, and which helped create a whole new branch of research. We can therefore aptly understand how the diffuse application of AI allows us to acknowledge the potential of high-tech in contexts where dual-use (civil-military) is synonymous with effectiveness and long-term financial sustainability.
(translation: Chiara Sulmoni)
Latest from the ‘5+5 Defence Initiative’
ILLEGAL IMMIGRATION AND CRIMINAL NETWORKS TOOK CENTRE STAGE IN 2018
Tunis, 5th October
The ‘5+5 Defence Initiative’ wrapped up their latest 2018 research meeting in Tunis on 5th October.
The international study group, tasked with identifying shared security concerns, focused its work on the threat posed by illegal immigration, organised crime and terrorist groups in the Mediterranean. A year-long, in-depth analysis resulted in an internal research document suggesting approaches and solutions to try and contain criminal networks. Libya and the consequences of its domestic instability received specific attention.
The ‘5+5 Defence Initiative’ regroups appointed researchers from Algeria, France, Italy, Libya, Malta, Morocco, Mauritania, Portugal, Spain and Tunisia which in 2018 were coordinated by Dr. Andrea Carteny from CEMAS -Università la Sapienza – Roma.
Italy was represented by CeMiSS’ Strategic analyst Dr. Claudio Bertolotti, who is also START InSight’s Executive Director.
Official research documents emerging from these regular, joint meetings pave the way for discussions among Defence Ministers. The latest paper is due to be delivered in December.