
  • Sustainable Security

    This article is part of the Remote Control Warfare series, a collaboration with Remote Control, a project of the Network for Social Change hosted by Oxford Research Group.

    This article by Esther Kersley, Katherine Tajer and Alberto Muti originally appeared on openDemocracy on 7 November 2014.

    Cyber space is a confusing place. As current discussions highlight the possibility of “major” cyber attacks causing a significant loss of life or large scale destruction, it is becoming harder to determine whether these claims are hype or are in fact justified fears. A new report by VERTIC, commissioned by the Remote Control project, offers some clarity on the subject by assessing the major issues in cyber security today to help better inform the debate and assess what threats and challenges cyber issues really do pose to international peace and security.

    How much of a threat are cyber attacks?

    Cyber attacks have been identified as one of the greatest threats facing developed nations. Indeed, the US is spending $26 billion over the next five years on cyber operations and building a 6,000-strong cyber force by 2016, while the UK has earmarked £650 million over four years to combat cyber threats. This level of investment suggests that states view cyber security as a question of national security. But how much of a threat do cyber attacks actually pose to national security, and how much damage have they caused?

    There is a need for caution when assessing the risk posed to national security by cyber threats. Indeed, although states are heavily investing in cyber security, to date, the majority of cyber incidents that have made the news have not directly impacted a state’s sovereignty, or threatened a state’s survival. For that to happen, an attack would have to significantly affect a government’s ability to control its territory, inflict damage to critical infrastructure or, potentially, cause mass casualties.

    Nevertheless, some notable instances of cyber attacks have had a significant impact on international relations over the past decades. These include ‘Stuxnet’, the cyber attack targeting Iranian uranium centrifuges (allegedly launched by a combined US-Israeli operation), the ‘Nashi’ attacks on Estonian government and private sector websites and web-based services, and the many instances of cyber-espionage that form the so-called ‘Cool War’ currently taking place between China and the US. Furthermore, cyber attacks have also been used as instruments of war in conjunction with conventional military operations, for example during the Russo-Georgian conflict in 2008 and, most significantly, during the Israeli air raid against a nuclear reactor facility in Syria in 2007.

    However, to date no attack has led to large-scale destruction or loss of life, suggesting that such an outcome remains unlikely. This is because of the great amounts of technological expertise, material resources and target intelligence required to carry out such an attack. These resources are currently only in the hands of states, which might hesitate to use cyber attacks in this way when other means are available. This could of course change, especially if other political actors acquired the necessary means.

    What should we be concerned about?

    This is not to say we have nothing to be concerned about. Although a large-scale cyber attack that inflicts mass casualties is unlikely to occur in the near future, cyber activities can still affect civilian lives in other ways. The hyperbolic language used to describe the potential consequences of cyber attacks, combined with a lack of reliable, concrete information on the real risks posed by cyber threats, has contributed to the ‘securitisation’ of the debate around cyber security issues. It is feared that this process will lead to possible dangers being overestimated, and to vulnerabilities being cast as national security threats of immediate concern. States’ reactions to these perceived risks may have negative implications for both citizens and international peace and security.

    Already we are seeing a potential consequence of securitisation as governments turn to surveillance as a preventative measure against cyber attacks. In addition, the difficulty of attributing cyber attacks, as well as the widespread fear that other countries will constantly engage in cyber espionage, has led some to claim that the ‘cyber realm’ favours the attacker. This, in turn, may lead states to engage in a ‘cyber arms race’, as well as foster a ‘Cool War’ dynamic of continuous attrition and escalation between states. This erosion of trust between states, as well as the diminishing of civil liberties, are two serious concerns with regards to the militarization of cyber space.

    Cyber attacks also pose serious transparency and accountability issues due to the above-mentioned technical complexities of cyber attack attribution, as well as the ambiguous relationship between state and non-state actors (in the ‘Nashi’ attack in Estonia, for example, the relationship between the youth group responsible for the attack and the Russian government remains unclear). The lack of legal clarity in this area is also worrying, meaning attackers will often not face consequences for their actions.

    The only existing international legislation in the field – the Budapest Convention – solely addresses cybercrime and no further issues (such as the military use of cyberspace). The Convention also does not have enough support to enforce its objectives, has no monitoring regime and has not been signed by Russia or China. Furthermore, an attempt to set out ‘rules’ on the legal implications of cyber war – the Tallinn Manual – found that the complexities of cyber conflict mean there are many instances that do not easily adhere to current legislative standards. The speed of technological evolution further hampers the drafting of national law and international legislation.

    Growth of remote control warfare

    The rise in cyber activities cannot be examined in isolation. Its growth is part of a broader trend of warfare increasingly being conducted indirectly, or at a distance. This global trend towards ‘remote control’ warfare has seen an increasing use of drones, special forces, private military and security companies as well as cyber activities and intelligence and surveillance methods by governments in the last decade.

    Indeed, the global export market for drones is predicted to grow nearly three-fold over the next decade, and a broader range of states are now using drones, including France, Britain, Germany, Italy, Russia, Algeria and Iran. The US has more than doubled the size of its Special Operations Command since 2001, and private military and security companies are playing an increasingly important role in both Afghanistan and Iraq, with over 5,000 contractors employed in Iraq this year.

    The idea of countering threats at a distance, without the use of large military forces, is a relatively attractive proposition as the general public is increasingly hostile to ‘boots on the ground’. However, the concerns highlighted in this latest report with regard to cyber activities are echoed in all ‘remote’ warfare methods, as their covert nature creates serious transparency and accountability vacuums. Moreover, wider negative implications have been identified where these methods are in use, from the detrimental impact of drone strikes in Pakistan to the instability caused by special forces and private military companies in Sub-Saharan Africa. The militarisation of cyber space is part of this growing trend and, as with these other new methods of warfare, increased transparency and accurate information are essential in order to assess the real impact they are likely to have.

     

    Esther Kersley is the Research and Communications Officer for the Remote Control project of the Network for Social Change. The project, hosted by Oxford Research Group and affiliated with its Sustainable Security programme, examines changes in military engagement, in particular the use of drones, special forces, private military and security companies, cyber warfare and surveillance.

    Katherine Tajer is a Research Assistant for the Verification Research, Training and Information Centre (VERTIC).

    Alberto Muti is a Research Assistant for the Verification Research, Training and Information Centre (VERTIC).

     

    Featured image: The command line environment in MS-DOS. Source: Flickr. Available under Creative Commons v2.0.

  • Sustainable Security

    by Elizabeth Minor, Researcher at Article 36

    This article is part of the Remote Control Warfare series, a collaboration with Remote Control, a project of the Network for Social Change hosted by Oxford Research Group.

    Later this month, governments will meet in Geneva to discuss lethal autonomous weapons systems. Previous talks – and growing pressure from civil society – have not yet galvanised governments into action. Meanwhile the development of these so-called “killer robots” is already being considered in military roadmaps. Their prohibition is therefore an increasingly urgent task.

    From 13-17 April, governments will meet at the United Nations in Geneva to discuss autonomous weapons – also referred to as killer robots. The week-long meeting will be the second round of multilateral expert discussions on “lethal autonomous weapons systems” to take place within the framework of the United Nations’ Convention on Certain Conventional Weapons (CCW).

    Urgent and coordinated international action is needed to prevent the development and use of fully autonomous weapons systems. Such systems would fundamentally challenge the relationship between human beings and the application of violent force, whether in armed conflict or in domestic law enforcement. Once activated and their mission defined, these systems would be able to select targets and carry out attacks on people or objects, without meaningful human control. As states with high-tech militaries such as China, Israel, Russia, South Korea, the UK, and the US continue to invest in aspects of increased autonomy in weapons systems technologies, consideration of this issue is increasingly urgent. Campaigners are calling on states to tackle this issue by developing a treaty that pre-emptively bans these weapons systems before they are put into operation, by which time it may be too late.

    The issue

    The UK’s Taranis stealth UAV. The Taranis exemplifies the move toward increased autonomy as it aims to strike distant targets “even on other continents”, although humans are currently expected to remain in the loop. Source: Flickr | QinetiQ

    Weapons systems that do not permit the exercise of meaningful human control over individual attacks should be prohibited, due to the insurmountable ethical, humanitarian and legal concerns they raise. The governance of the use of force and the protection of individuals in conflict require control over the use of weapons and accountability and responsibility for their consequences. This principle, rather than any particular piece of technology or format of weapons delivery, is at the heart of the issue of autonomous weapons systems. Some have argued that fully autonomous weapons systems might reduce the risk of conflict or be able to better protect civilians. However, the focus must remain on these systems’ overall implications for the conduct of violence, rather than on a small range of hypothetical possibilities.

    Tasks can be given to hardware and software systems. Responsibility for violence cannot. The process of rendering the world ‘machine-sensible’ reduces people to objects. This is an affront to human dignity. Computerised target-object matching such as shape detection, thermal imaging and radiation detection may enable the identification of objects such as military vehicles, though in complex and civilian-populated environments, not necessarily with accuracy. However, assessment of information about these objects and the surrounding environment, including the presence of protected persons such as civilians or wounded combatants, is also essential to uphold the principles that govern the launching of individual attacks under International Humanitarian Law. These are not quantitative rules, but considerations that require deliberative moral reasoning and contextual decision-making. As such, they could not be translated into software code. Based on the principle of humanity, they implicitly require human judgement and control over the process of decision-making in individual attacks.

    Other concerns about the development of fully autonomous weapons systems include the dangers of proliferation among state and non-state actors, hacking, and the use of these systems in law enforcement or other situations outside of warfare.

    The launch of the Campaign to Stop Killer Robots at its first NGO conference in April 2013

    A preemptive ban as a solution

    Whilst the Campaign to Stop Killer Robots is calling on states to move with urgency towards negotiations on a treaty to outlaw fully autonomous weapons systems, previous talks in Geneva have not yet galvanised governments into action.

    Some states have suggested that existing law is sufficient to tackle this issue. Existing international law, which was developed prior to any consideration of autonomous weapons systems, implicitly assumes that the application of force is governed by humans. This body of international law is now inadequate as a reliable barrier to the development and use of fully autonomous weapons systems. A pre-emptive ban through an international instrument would not only halt any progress on these systems amongst states parties, but would help to stigmatise development by others.

    Some states have argued that this issue can be dealt with by conducting individual reviews of their weapons technologies to ensure they continue to uphold current international law. States are already obligated to conduct such reviews, however, and whilst they are important, they will not be sufficient to prevent the development of these systems internationally. A clear legal standard and norm needs to be set, and this is best done through new international treaty law.

    A ban based around prohibiting systems that operate without meaningful human control over individual attacks should be the starting point in international discussions among states, and so the elaboration and agreement of the elements of this principle are required as a next step.

    International response so far

    To date, autonomous weapons have been raised at the Human Rights Council in 2013 and considered by governments in dedicated discussions held at expert meetings of the CCW in 2014. The UN Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, called in 2013 for national moratoria to be imposed by all states on the “testing, production, assembly, transfer, acquisition, deployment and use” of these systems, until an internationally agreed framework on their future has been established. The CCW could be a possible venue for developing this, having previously produced a pre-emptive ban on blinding laser weapons. One should note, though, that previous attempts within the CCW to deliver adequate responses to certain weapons systems have failed, often hampered by the consensus rule and by a tendency to defer to military considerations rather than focus on humanitarian or ethical imperatives.

    Promisingly, the need to ensure meaningful human control has already been a prominent feature of the debate at the CCW, with several states recognising the importance of this approach. In upcoming discussions, governments should elaborate their policies for maintaining meaningful human control over existing weapons systems in individual attacks. Such an exchange would advance consideration of how human control can be ensured over future systems. This would in turn help clarify what practices and potential systems must be prohibited and the standards that states must demonstrate that they are meeting in their conduct. Elements to consider could include the need for adequate information to be available to commanders using any weapons system, positive action from a human being in launching individual attacks, and ensuring accountability.

    Few states have elaborated any policy on human control over weapons systems. Current US policy on autonomous weapons systems stresses that there should be “appropriate levels of human judgement over the use of force”, but does not define what these should be. The policy leaves the door open for the development of fully autonomous weapons systems, whilst recognising the harm they could cause to civilians. The UK government has stated that it has no intention to develop fully autonomous weapons and that “human control” over any weapons system must be ensured. However, it has not given sufficient elaboration of what exactly this means and how it will be ensured.

    States may see different types of operating, supervising or overseeing systems as constituting acceptable control. Agreement between states on the concept of meaningful human control is therefore an important element of international progress on the issue of fully autonomous weapons systems.

    Work by states on an international framework should be supported by input from civil society and draw on the views of a range of experts. Ultimately, negotiation processes will determine the definitions of key concepts. If discussions do not advance towards a binding framework within the CCW, a freestanding treaty process may be required, as was the case previously in the processes to outlaw both anti-personnel landmines and cluster munitions.

    The upcoming meeting of experts at the CCW in April is unlikely to result in concrete action, given the nature and format of the meeting. It could, however, pave the way for a decision in November that states continue to discuss this issue in 2016 and put it on the agenda for the CCW’s 2016 Review Conference. At that point it could be flagged as a subject on which States Parties should develop a new binding protocol. No clear group to lead this process has yet emerged. So far, Cuba, Ecuador, Egypt, the Holy See, and Pakistan have endorsed a pre-emptive ban on autonomous weapons systems. France secured consensus for the CCW mandate in 2013 that established its work on lethal autonomous weapons systems, and Germany will chair the upcoming meeting, with the aim of seeking consensus on further consideration of the subject. However, the development of fully autonomous weapons systems is already being considered in military roadmaps. This makes their prohibition an urgent task.

    Elizabeth Minor (@elizabethminor3) is a Researcher at Article 36, and was previously Senior Research Officer at Every Casualty, and a Researcher for Iraq Body Count (IBC). 

    Featured image: The UK’s Taranis stealth UAV. The Taranis exemplifies the move toward increased autonomy as it aims to strike distant targets “even on other continents”, although humans are currently expected to remain in the loop. Source: Flickr | QinetiQ

  • Sustainable Security

    With nearly 870 million people chronically undernourished, and progress towards the Hunger Millennium Development Goal ebbing since 2008, feeding the world will continue to be a major global challenge. The limitations of arable land availability, water accessibility, and humanity’s increasing population trajectory further compound the problem. Addressing the challenges to global food security while ensuring the sustainability of the planet will require changes to the way we interact with agriculture and a clear understanding of the driving factors behind it.

    Food and Energy Price Volatility

    The industrialisation of agriculture over the last five decades has contributed to massive gains in productivity, but it has also made food increasingly susceptible to energy supply and price fluctuations. Energy in the form of oil and gas is needed to run industrial farm equipment and to ship food around the world. Fertilizers, the driving factor behind most yield increases, are intimately tied to energy and therefore to energy price volatility. Nitrogen fertilizers are particularly significant and are created through a process that combines natural gas and inert nitrogen from the atmosphere in a high-energy reaction to create ammonia. Fertilizer production is estimated to account for more than 50 per cent of total energy use in commercial agriculture (Woods et al 2010). While shale gas has had a significant impact on the US natural gas market, globally, energy prices are expected to rise in the long term and become increasingly volatile. Fertilizer costs will follow a similar trend, leading to variability in cost and availability. This can be especially difficult for small farmers in developing countries, whose resilience to price fluctuations is low.
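    For readers curious about the chemistry behind the energy link described above, the process in question is the Haber–Bosch synthesis: hydrogen is first stripped from natural gas by steam reforming, then combined with atmospheric nitrogen under high temperature and pressure to form ammonia. In outline (net reactions):

    ```latex
    % Hydrogen production by steam reforming of natural gas (net reaction)
    \[ \mathrm{CH_4 + 2\,H_2O \;\rightarrow\; CO_2 + 4\,H_2} \]
    % Ammonia synthesis (Haber--Bosch), run at high temperature and pressure
    \[ \mathrm{N_2 + 3\,H_2 \;\rightleftharpoons\; 2\,NH_3} \]
    ```

    Natural gas thus enters twice – as feedstock for the hydrogen and as fuel for the heat and pressure required – which is why nitrogen fertilizer prices track gas prices so closely.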

    Locking Ourselves In to Volatility

    Natural means of increasing agricultural yields are possible through recycling manures and planting crops that add nutrients to the soil. However, barring a radical change in agricultural practices, globally we are locked into chemical fertilizer use, especially nitrogen fertilizers, in the short and medium term. Approximately 45 per cent of the world’s food supply is grown using chemical fertilizers, and that number is growing. Meat consumption, which requires large amounts of grain for animal feed, is on the rise. Consumption of animal protein in Europe and the United States together is double the world average (FAO 2006) and is expected to grow 10 per cent between 2005 and 2030, while demand for animal proteins in developing countries is projected to increase 60 per cent in the same period (Reay 2011). Biofuel legislation in Europe and the United States puts further pressure on land and drives up global food prices.

    Global land deals have increased dramatically in the last ten years, with an area of land eight times the size of the UK sold off globally in that time (Geary 2012). In addition to causing landlessness and poverty for local communities, the land is often used to grow large areas of single-species crops such as soy or eucalyptus, which use industrial agricultural methods requiring a high amount of chemical fertilizer, thus increasing dependence on global energy markets and locking new land into fertilizer dependence. Furthermore, nutrients and pesticides can make their way into local water supplies, degrading the environment upon which local communities depend. For example, water contamination from agricultural runoff can force communities to buy bottled or trucked water at higher prices, reducing their resilience to price fluctuations even further.

    Fertilizer as a Means of Reducing Poverty

    But fertilizers are not evil. Increasing yields (whether through better access to fertilizers or by implementing natural yield improvement practices) can greatly reduce poverty and inequality. There are many regions of the world in which more nutrients are urgently needed to ensure the land is not degraded. When fertilizer is introduced to degraded soils, it can have enormous trickle-down effects for poverty reduction, health, and education. In the early stages of development, when a country is primarily agrarian, the most consistently effective methods to reduce poverty and improve equality involve the agriculture sector, particularly methods that raise small farm productivity (Berry 2010, Deininger and Byerlee 2011). For example, a recent review of coffee grower data from Mexico and Peru, published in the journal World Development, found that increasing yields is the most important factor for growers (Barham and Weber 2011).

    Nitrogen: The Missing Link

    So where does that leave us? The very thing that reduces poverty and hunger through increasing yields can cause insecurity through energy price volatility. Add increasing pressure from consumption choices, land degradation, population pressure and climate change and we have a situation of increasing food insecurity globally.

    There is no silver-bullet answer to this conundrum. However, the solution will likely be a combination of improving the efficiency of chemical fertilizer use and increasing the productivity and adoption of natural methods. Cross-cutting all of these solutions is the main driver of yields: nitrogen. Phosphorous and potash are also important elements of fertilizer, but nitrogen is the nutrient needed in the largest quantities. Just as a basic knowledge of how CO2 impacts the climate is important for developing solutions to climate change, so a basic knowledge of nitrogen is important for developing solutions to food security.

    Nitrogen is critical for all plants and animals to grow. Some plants build it naturally into the soil through a symbiotic process between bacteria and their roots called ‘biological nitrogen fixation’ (beans and clover, for example), but the majority comes from chemical fertilizers and as a by-product of burning fossil fuels.

    For those who remember the nitrogen cycle from science class, 78.1% of the atmosphere is inert nitrogen (N2). In the 20th century, we developed a way to convert this inert, atmospheric nitrogen into a form accessible to plants and animals (known as “reactive nitrogen”). This has enabled food production to roughly keep pace with the explosion of population growth over the last fifty years. Whether through fertilizers or biological fixation, nitrogen will play a key role in meeting the food needs of the future.

    When there is not enough nitrogen in the soil, loss of soil productivity and degradation occur. Because it is small farmers that often lack access to nitrogen, their yields decline year over year, reducing their annual income and thus exacerbating inequality within the global food system. This pushes them further into poverty, and in many cases can force them to purchase food when they cannot grow enough. Degraded land forces them to go in search of new, more fertile land, breaking apart families and communities.

    However, the solution is not as easy as simply adding more nitrogen in areas where there is not enough. Too much nitrogen can cause serious problems for human health and the environment. While nitrogen is required by plants in order to grow, there is a limit to how much any plant can use. Beyond this “critical load”, nutrients that cannot be absorbed by plants will leach into the water and air. Once in the environment, nitrogen can change forms over an extremely long lifetime (120 years on average) and detrimentally affect many different systems before finally being denitrified back into the atmosphere. Nitrogen exacerbates climate change, depletes the ozone layer and drives biodiversity loss. It causes low-oxygen zones in water systems that weaken or kill fish and marine habitats (known as eutrophication or hypoxia). Reactive nitrogen can also be very detrimental to human health through air and water contamination. It is a major contributor to smog, which is estimated to take six months off the life expectancy of over half the population in Europe (Sutton et al 2011). It is even worse in areas like China, where the density of air particulates has registered at twice the level considered “dangerous” in metropolitan centres like Beijing. Ingesting high levels of water-borne nitrates has been associated with cancer, diabetes and adverse reproductive outcomes (Ward et al. 2005).

    The graph below shows nitrogen fertilizer application globally. In the red areas of the graph, many of the main water bodies suffer the detrimental effects of too much nitrogen, and the people that live in those areas suffer as a result of nitrogen pollution. Many of the green areas could benefit from more nitrogen to increase soil productivity.

    [Graph: world fertilizer application]

    The key is balance. On the one hand, improving the efficiency of fertilizer use will maintain crop yields while protecting the ecosystems humans and animals depend upon. On the other hand, developing biological nitrogen fixation methods or pro-poor fertilizer programmes to increase yields for small farmers will improve their situation economically and strengthen their resilience to price shocks and weather events. In both cases, proper nitrogen management will be a crucial part of solving our global hunger crisis while ensuring sustainability for future generations.

    Lisa Dittmar is the CEO and founder of NitrogenWise, a website that brings together research and straightforward communication to explain the complexities of nitrogen in a meaningful and relevant way.


    Citations

    Barham, B. L., & Weber, J. G. (2011). The Economic Sustainability of Certified Coffee: Recent Evidence from Mexico and Peru. World Development, 1269-1279.

    Berry, A. (2010). What type of global governance would best lower world poverty and inequality? In J. Clapp, & R. Wilkinson, Global Governance, Poverty and Inequality (pp. 46-68). London: Routledge.

    Deininger, K., & Byerlee, D. (2011). Rising global interest in farmland. Washington DC: World Bank. Retrieved November 30, 2012, from http://siteresources.worldbank.org/INTARD/Resources/ESW_Sept7_final_final.pdf

    FAO. (2006). Livestock Report 2006. Rome: Food and Agriculture Organization of the United Nations.

    Geary, K. (2012). Our Land, Our Lives: Time out on the global land rush. Oxford: Oxfam. Retrieved November 2, 2012, from http://www.oxfam.org/sites/www.oxfam.org/files/bn-land-lives-freeze-041012-en_1.pdf

    Reay, D. S. (2011). Societal choice and communicating the European nitrogen challenge. In M. Sutton, The European Nitrogen Assessment (pp. 585-602). Cambridge: Cambridge University Press.

    Sutton, M. (2011). Too much of a good thing. Nature, 472, 159-161.

    Ward, M. (2005). Workgroup report: Drinking-water nitrate and health-recent findings and research needs. Environmental Health Perspectives, 113, 1607-1614.

    Woods, J., Williams, A., Hughes, J. K., Black, M., & Murphy, R. (2010). Energy and the food system. Philosophical Transactions of the Royal Society B, 2991-3006.

    Front page image source: Organic Fertiliser for sugar cane – Shell

  • Sustainable Security

    This concluding part of a two-part article series continues the discussion on the UK’s naval nuclear power programme and its potential impact on Britain’s energy policy. Read part 1 here.

    In Part 1, we described the intensity of UK commitments to new civil nuclear power and why this is so hard to fully explain. The proposed 16GWe of new nuclear capacity is a difficult policy to justify on the grounds of economics, energy security or conventional approaches to understanding innovation and technological transitions. There are serious problems with the UK nuclear power programme, including significant delays, rising costs, and uncertainty surrounding essential foreign investment. The UK government’s own figures show renewables, including onshore wind and solar, to be cheaper than nuclear. Yet as the prospects of resolving underperforming nuclear plans grow ever more distant and unlikely, increasingly favourable renewable projects are ever more threatened by cut-backs, which has led to serious problems in that sector. Taken at face value, these patterns are very difficult to explain.

    What drives these counter-intuitive trends? Many factors will be at play, but, as discussed in Part 1, there is a particular major driver that remains almost entirely unexamined in analysis of UK energy policy. This concerns the pressure to sustain UK nuclear submarine infrastructures by maintaining more general national reservoirs of specialist nuclear expertise, education, training, skills, production, design and regulatory capacities.

    Could these pressures to maintain capabilities, perceived to be necessary for the country’s naval nuclear propulsion programme, be influencing the intensity of UK commitments to new civil nuclear power? We now examine a crucial period in UK civil nuclear policy during which concerns around defence-related nuclear skills came to the fore shortly after a key policy moment when, for the first time since 1955, UK policy was considering an energy trajectory that did not include new nuclear.

    2003–2006: the unexplained nuclear ‘U-turn’

    Image credit: Thomas McDonald/Flickr.

    For a brief period between 2003 and 2006, nuclear energy seemed to fall out of high-level favour in the UK. The nuclear firm British Energy was bailed out and brought back into state control in 2002, and nuclear privatisation was widely recognised to have failed. The UK civil nuclear industry was dogged by scandals and cost overruns. Meanwhile, New Labour’s earlier efforts to democratise decision-making helped free one initially minor policy initiative from the shackles of bureaucratic inertia and industrial interests. For the first time, nuclear energy strategy escaped the domain of the dedicated ministry.

    Approaching energy policy by the indirect route of “resources”, the new Performance and Innovation Unit (PIU) – reporting directly to the Cabinet Office – was charged with undertaking an extensive reappraisal. This marked a significant departure from the traditional practice where energy policy assessments were closely guarded by the relevant ministry. The PIU review was not staffed entirely by civil servants: half of the review team comprised leading independent energy analysts recruited from outside government. Freed from the incumbent pressures which constrained earlier UK energy reviews, the 2002 PIU study found that unresolved nuclear waste and economic problems meant that the UK should move towards a more decentralised electricity grid based around renewables and energy efficiency. The February 2003 White Paper Our energy future: Creating a low carbon economy upheld these recommendations. While it did not entirely rule out future investment in nuclear energy, it did find nuclear power to be economically and environmentally “unattractive” for Britain.

    What came next was one of the most abrupt policy turnarounds in UK history. For reasons never officially declared, Prime Minister Tony Blair launched another energy review in November 2005. This second review was not conducted in a transparent and independent way like the PIU process. Instead, it was undertaken by a few partially identified individuals inside the Cabinet Office under the leadership of Blair’s close personal associate, John Birt. According to nuclear advocate Simon Taylor, this involved a select group that most other civil servants in the Cabinet Office did not know even existed, working “in secret” to “re-examine” the case for nuclear energy. Managed by the former Atomic Energy Authority, the consultative part of this exercise was much shallower and shorter than before. Amid other widespread criticism, Greenpeace successfully took the Government to the High Court, where this second review was declared “unlawful” and “deeply flawed”. Yet Blair’s reaction was that this court ruling would “not affect policy at all”. With a further round of consultation, again alienating NGOs, the January 2008 White Paper Meeting the Energy Challenge duly announced a British ‘nuclear renaissance’.

    Among those questioning these events was the Parliamentary Environmental Audit Committee, which in March 2006 asked (without receiving an official answer) why a second energy review was deemed necessary so soon after such a comprehensive predecessor. Four months later, the House of Commons Trade and Industry Select Committee branded the second review a “rubber stamping” exercise designed to give legitimacy to a pre-ordained decision rather than an ‘open’ consultation.

    It still remains unexplained what (or even who) could have driven this rethink. It is in this light that nuclear expert Steve Thomas has highlighted the ambiguities around exactly what ‘the UK nuclear lobby’ consists of.  With the UK civil nuclear engineering industry so weak and historically unsuccessful (as discussed in part 1), it is unclear where in this languishing domestic sector sufficient political-economic capital might have accumulated to force such an unprecedented and poorly justified national policy turnaround.

    Investment and skills concerns around the UK’s Naval Nuclear Propulsion Programme

    This is where the imperatives around national submarine capabilities come into play. It is in exactly this same critical juncture between 2003 and 2006 that an unprecedented intensification can be observed in concerns around the UK’s nuclear submarine capability. Significant problems emerged with the construction of the British ‘Astute’ class of submarines. Policies related to nuclear submarines were unveiled in rapid succession – with the December 2003 Defence Review White Paper followed by the December 2006 White Paper on the Future of the UK’s nuclear deterrent, leading up to the ‘initial gate’ House of Commons vote to proceed with a replacement to the nuclear-powered Vanguard-class ballistic missile submarines in March 2007. Inconveniently, it was just prior to this marked intensification of activity on the military side that civil nuclear power was officially acknowledged to be “unattractive”.

    One notable development emerging at the beginning of this period was an intense lobbying campaign started in March 2004. The well-funded Keep Our Future Afloat Campaign (KOFAC) emanated from the Barrow shipyards, BAE Systems’ construction site for all UK submarines. Trade unions, local councils, county councils and KOFAC relentlessly targeted politicians, party conferences and governmental consultations. Closely connected with KOFAC and lobbying in support of the submarine industry at this time was then MP for Barrow-in-Furness and close ally of Tony Blair, John Hutton, also one of the most significant supporters of civil nuclear power. KOFAC’s lobbying campaign was recognised by parliamentarians as being “one of the most effective” ever seen.  Focusing resolutely on how to protect UK nuclear submarine manufacturing interests, KOFAC highlighted the importance of supporting integrated civil and defence-related nuclear capabilities. For its part, BAE Systems was also evidently busy in other ways behind the scenes – positioning itself (rather extraordinarily) in a memorandum of understanding of 2006 with the ailing US civil reactor vendor Westinghouse to extend its own military submarine focus to a role in civil nuclear supply chains.

    Although internal government reactions to this pressure were invisible, the public response was strikingly accommodating. In 2005, the MoD funded the RAND Corporation to conduct an in-depth two-volume report: “The United Kingdom’s Nuclear Submarine Industrial Base”. The report endorsed crucial links between key skills and capabilities relevant both to submarine and civil nuclear industries. A series of Select Committee consultations and reports ensued, with influential stakeholders in the nuclear submarine supply chain raising many concerns. The lead submarine nuclear propulsion contractor, Rolls Royce, claimed that the depletion of nuclear skills in the civil sector would reduce the support network available to the military programmes. The Royal Academy of Engineering noted that “the skills required in the design, build, operation and disposal of Naval Nuclear Propulsion Plant … are in short supply and increasingly expensive… Overall, the decline of the civil nuclear programme has forced the military nuclear programme, and in particular the nuclear submarine programme, to develop and fund its own expertise and personnel in order to remain operational”.

    Recognising that “links between the civil and naval sector need to be encouraged”, a key witness to a 2008 Parliamentary Innovation and Skills Select Committee inquiry noted: “The UK is not now in the position of having financial or personnel resources to develop both programmes in isolation”. In a rare acknowledgement of this relationship from the civil energy side, a detailed low-key Government consultancy report later amplified the same message: “the naval and civil reactor industries are often viewed as separate and to some extent unrelated from a government policy perspective. However, the timeline of the UK nuclear industry has clear interactions between the two, particularly from a supply chain development point of view.” It was apparently in this crucial period 2003-2006 that this longstanding but under-appreciated industrial dependency between military and civil nuclear sectors finally commanded intense – albeit undeclared – attention at the highest political levels.

    It is remarkable that these patterns were so obvious on the military side of UK policy making, yet virtually invisible on the energy side. This selective discretion is hardly surprising: there are strong incentives to keep these kinds of links as invisible as possible. As the National Audit Office has ominously noted of the costs of Trident: “[o]ne assumption of the future deterrent programme is that the United Kingdom submarine industry will be sustainable and that the costs of supporting it will not fall directly on the future deterrent programme.” Acknowledging this – and reflecting implied industrial practice in the military sector – a seconded BAE Systems Submarine Solutions employee writing in a 2007 report for the Royal United Services Institute discussed the desirability and difficulty of absorbing or ‘masking’ costs of submarine construction in ostensibly civilian supply chains. Connections between civil and military nuclear infrastructures are also sensitive internationally, with serious tensions surrounding global nuclear proliferation regimes. This is why one Parliamentary witness emphasised that civil-military nuclear links must be carefully managed to avoid the perception that they are “one and the same”.

    It was arguably for such reasons that the UK Government response to the nuclear policy crisis of 2003-2006 was so fast and energetic – with the reasons well acknowledged on the defence side, but virtually invisible on the energy side. Corresponding with the unprecedented U-turn on civil nuclear power was an equally unprecedented intensification in efforts to preserve nuclear skills for the military sector. In 2006, a key suppliers group was set up by BAE Systems involving firms in both military and civil nuclear supply chains. The following year the Department of Trade and Industry expanded the National Nuclear Laboratory (NNL) and established a new National Nuclear Skills Academy.

    Since then, the UK Government has gone on to reserve key parts of the Hinkley Point C (HPC) contracts for Rolls Royce. BAE Systems has consolidated its interest in civil nuclear construction as well as defence. A huge programme of publicly-funded research into small modular civil power reactors has been announced, building on Rolls Royce’s experience with submarines. And most recently – against a backdrop of massive overcapacity among global nuclear power vendors in what is evidently one of the most economically perilous of sectors – Rolls Royce has announced an especially remarkable initiative. Notwithstanding strong pressures for international integration in this overcrowded sector – and a national history in this field of sustained industrial failure – Rolls Royce is now seeking to lead an entirely new industrial consortium branded as distinctively British and dedicated to an untested submarine-derived civil power reactor design. Despite the acknowledged incentives for concealment, these clear linkages between submarine and civil nuclear reactor construction interests provide a key missing link to decipher the otherwise unexplained abrupt reversal in UK nuclear power policy in 2006.

    Submerged drivers of UK energy policy?

    So, what is the role of UK military nuclear commitments in driving a national low-carbon energy strategy that is manifestly more costly and less effective than it otherwise could be? The complexity and secrecy in this field inevitably make it difficult to be definitive. Nevertheless, the wealth of official documentation on the military side and the remarkable conjunction of events around and beyond the period 2003-2006 do seem to present a plausible case. The UK Government’s commitments to military nuclear capabilities do seem to be a significant (albeit undeclared) factor in civil energy strategies, and in industrial policy more generally.

    There are broader questions here over what the military influences on wider British Government policy say about the current state of the UK’s democratic system. It is not necessary to invoke simplistic “conspiracies”. Just as iron filings line up in magnetic fields, so these kinds of institutional pressures can – without any single controlling actor – instil exactly these kinds of patterns. If massive UK civil infrastructure investments really are being shaped to the degree implied by these kinds of perceived military imperatives, then the most important issue is why they are almost completely absent from any kind of discussion or scrutiny – let alone accountability – either in energy policy literatures, or in wider political and media debates. If these institutional forces are as powerful and concealed as they seem, then very serious questions are posed for the health of British democracy in general.

    Phil Johnstone is Research Fellow at the Science Policy Research Unit (SPRU),  the University of Sussex. His current research is focussed on disruptive innovation in the energy systems of Denmark, the UK and Germany. Previously Phil worked on the Discontinuity in Technological Systems (DiscGo) project and is a member of the Sussex Energy Group (SEG). 

    Andy Stirling is a professor in SPRU and co-directs the STEPS Centre at Sussex University. An interdisciplinary researcher with a background in natural and social science, he has served on many EU and UK advisory bodies on issues around science policy and emerging technologies.

  • Sustainable Security

    This post is based on Paul Rogers’ Monthly Global Security Briefings and was originally posted by Oxford Research Group on 29 April, 2014.

    Free Syrian Army rebels fighting Assad militias on the outskirts of the northwestern city of Maraat al-Numan, Idlib - Syria Source: Freedom House (Flickr)


    The Syrian War is now in its fourth year and the indications are that the regime will survive and consolidate its position in 2014. This is radically different from early last year, when many analysts thought it was under serious pressure, and it should be recalled that in mid-2011, a few months into the war, the prevailing view was that the regime would not last to the end of that year. The costs have been huge, with around 140,000 killed, twice that number injured and more than a third of the population displaced, millions of them refugees in other countries. This article seeks to put this appalling conflict in a longer-term regional context as an aid to assessing possible policy options for bringing the war to an end.

    The Regional Context in 2011

    At the start of 2011 the region was struck by remarkable political upheavals as people in a number of countries reacted against autocratic rule and demanded political change. It commenced with the rapid and unexpected fall of the Ben Ali regime in Tunisia on 14 January and was followed on 11 February by the quite startling collapse of the Mubarak regime in Egypt. Across the region there were public uprisings of varying intensities in Oman, Bahrain, Yemen, Libya and Syria and political uncertainty in several countries including Kuwait, Jordan and Morocco.

    In broad terms, those political authorities that did not immediately collapse reacted in different ways that may be summarised as concession or repression or a mixture of both. In Oman, demonstrations were repressed with force but concessions were also offered and the innate wealth of the authorities was available to “buy off” resentment. In Bahrain the royal house opted for repression, aided by army and police support from Saudi Arabia and the UAE.  Saudi Arabia treated Shi’a opponents harshly but distributed many billions of dollars of resources across most of the population.

    In Morocco, King Mohammed sped up the pace of reform with some effect, and across the border in Algeria some economic concessions, including increased food subsidies, were made. In Libya, Gaddafi used repression but western, and a few Gulf Arab, states intervened on the part of the rebels; a six-month war ended with regime collapse and Gaddafi’s lynching. This has been followed by huge insecurity, including the rise of Islamist and local tribal militias.

    The Syrian regime faced extensive nonviolent demonstrations, most commonly after Friday prayers, and an escalation in dissent at a time when two regimes in the region had already fallen and in the same week that Saudi and Emirati forces intervened in Bahrain and the UN approved foreign intervention in Libya. The fate of Mubarak was particularly striking for the Assad regime given Syria’s long-term historical relationship with Egypt, and it is probable that this meant the regime believed its only course of action was vigorous repression. It became progressively more vigorous and determined in its pursuit of control.

    Underlying Causes

    Although most of the individual anti-government actions across the Arab World were responses to persistent and long-term autocracy, these were in the context of a number of other factors:

    • Outside of a small cluster of oil-rich states, the wealth-poverty divide has become huge, often with the majority of populations marginalised.
    • Even in countries of modest wealth, much of the economic power has been concentrated in the hands of small groups of elites, often less than a tenth of the population. The world economic downturn from 2007 onwards exacerbated these socio-economic divisions.
    • The demographic transition is still in progress across much of the Middle East, meaning that a large proportion of the population is under the age of 30.
    • Although educational standards are highly variable and there is still a marked gender gap, in most countries most people now go through high school and there is an increasing proportion of graduates among people under 30. There is frequently a serious lack of job opportunities, not least for well-educated young people. At the time of the changes in Tunisia it was reported to have 140,000 unemployed or seriously underemployed graduates out of a population of 11 million.
    • The surge in world grain prices in the late 2000s, not least following China’s harvest difficulties, added to the economic problems for many, not least in Egypt. Syria had a specific problem of drought stretching over many years, leading to an influx of the rural poor into urban areas.

    As a whole, these factors mean that there are trends across the region that point to the risk of longer-term social upheavals. These will persist and must be factored into any policy formulation that might relate primarily to Syria. Instability is highly likely to be a feature of the region in the coming years.

    Syria’s Perspective

    In the light of the regional upheavals, the Assad regime used high levels of violent repression from the start, which led to a transition from nonviolent to violent protest. From the start the regime presented itself as the guardian of stability against opponents that were essentially terrorists. This may have been a travesty of reality at that time, but in the context of the extraordinary upheavals and uncertainties across the region – as well as a keen understanding of the shared sectarian and geopolitical rivalries that tore Lebanon apart within recent memory – the need for a strong regime was more widely accepted within Syria than most diplomats and external analysts appreciated.

    The regime’s stance was aided by internal and external factors. Internally it had the strong support of the Alawi minority but most other Shi’a, Christians and Druze were also willing to accept the regime as guardian of the security of the state. In combination this represented close to a quarter of the population but there was also support from many in the Sunni business community who feared that regional upheavals would spread to Syria. By and large these elements persist, although the great majority of Syrians just want an end to the war.

    Externally, the regime has had support from three quarters. One is the Hezbollah movement in Lebanon that has long been heavily dependent on Syria for weapons and other support. Hezbollah militias have become a crucial part of the paramilitary support base of the regime. Second has been the continuing support of Iran, including weapons, training and supplies, and an important sub-set of this has been the increase in paramilitaries from Iraqi Shi’a communities, backed by Iran. Finally there has been the long-term relationship with Russia, with the Putin government seeing Syria as the key centre for remaining Russian influence in the Middle East. In the past year Russia has been particularly useful in its support for repairing and upgrading military equipment, especially aircraft and related weapons systems.

    The Islamist Dimension

    In the past year, radical Islamist paramilitary groups such as ISIS, the Islamic Front and al-Nusrah have come to the fore within the rebellion, offering the strongest opposition to the regime. There has thus been an element of self-fulfilling prophecy for the regime. In 2014, internal conflicts among the Islamists have weakened them. They may still offer the strongest resistance, but their relative decline is one reason why the regime is likely to survive long-term. Western states, whatever their public stance, would now prefer to see the regime survive than lose control to al-Qaida-linked Islamists. This is clearly the case for Putin: fear of an Islamist spill-over into the Caucasus has receded following the safe conclusion of the Winter Olympics and the internal Islamist conflicts within Syria.

    Policy Implications

    In a very pessimistic environment, there are two more positive elements. One is that relations between Iran and Saudi Arabia are showing signs of improvement, including reports of unofficial Saudi/Iranian discussions on Syria. The second is that a number of local ceasefires have been developed, not least in some parts of Damascus.  There may be scope for these to develop further, especially in parts of the country where Islamist groups are not prominent.

    The international community must seek to increase pressure on the UN to enhance multilateral processes, and specifically seek to engage Tehran and Riyadh. In addition, given that this war has many months and possibly years to run, states must commit to improving aid to refugees and to any initiatives that increase the possibility of gaining and embedding local ceasefires – not least by immediate aid for those districts where ceasefires take hold. Approaches to the region must now take a much longer-term view, based on the likely survival of the regime and the fact that the underlying elements behind changes in the region will persist.

    Paul Rogers is Global Security Consultant to Oxford Research Group and Professor of Peace Studies at the University of Bradford.

  • Sustainable Security

     

    Demonstration condemning the ongoing use of weapons by rebel militias inside Tripoli.

    “As the price of oil goes down, the pace of freedom goes up… As the price of oil goes up, the pace of freedom goes down…” So says New York Times columnist Thomas Friedman, who argues that the first law of ‘petropolitics’ is that the price of oil and the pace of freedom are inversely correlated in countries “totally dependent on oil” for economic growth. Friedman’s attempt to link economic oil dependency and political freedom is an interesting one, which could go some way towards explaining why many of the world’s top oil-exporting countries are governed by heavy-handed authoritarian regimes. However, the correlation between recent oil price spikes and anti-authoritarian action – particularly in the Arab Spring – challenges Friedman’s assessment.

    Rather than being driven by drops in oil revenues for authoritarian regimes, popular unrest and armed resistance in countries such as Libya may in fact be correlated with the price of oil remaining high. Inward pressure caused by oil price spikes on petroleum-fuelled supply chains for basic commodities can exacerbate already harsh living conditions, galvanising rebel factions to form a unified anti-authoritarian front against a regime that can no longer ensure price stability for essential goods. This seems true of the 2011 uprising in Egypt (the world’s largest wheat importer), as bread prices rose drastically following the doubling of global wheat prices between June 2010 and February 2011. The impact of high oil prices on the production, shipping and distribution of staple commodities such as corn and wheat – both of which saw severe price escalations of near 40% in 2008 – can lead to social unrest and, in the case of Egypt, the toppling of an authoritarian regime.

    High oil prices mean freedom on the rise?

    Since December 2010, when mass protests began gathering steam in Tunisia, oil prices have remained consistently high, hovering at $82 per barrel. Is it a coincidence that in September 2011, when rebels overtook the coastal town of Bani Walid, one of Colonel Gaddafi’s last strongholds, oil was just above $82 per barrel and the FAO food price index had reached a ten-year high? While oil revenues may be a temporary source of political stability for some authoritarian regimes, the pressure of increasing price volatility on supply chains, due to scarcity in supply, can convert to instability downstream as oil prices have a compounding impact on food prices. Indeed, in December 2010, just a week before the self-immolation of Tunisian food vendor Mohamed Bouazizi, the New England Complex Systems Institute, a Cambridge-based organisation comprising faculty from Harvard, MIT and Brandeis, warned the US government that global food prices were about to cross a socially dangerous threshold. If anti-authoritarian action is any indication of freedom ‘on the rise’, then high oil prices in oil-dependent states are at least one major factor.

    Of those countries mentioned in the International Energy Agency’s 2011 list of top oil exporters, ten out of fifteen are classed by Freedom House as ‘Not Free’. Freedom House, ‘an independent watchdog organisation dedicated to the expansion of freedom around the world’, bases its rankings on two broad categories: political rights and civil liberties. The former it defines by a country’s electoral process, degree of political pluralism and level of participation/functioning of government; the latter by degree of freedom of expression and belief, associational and organisational rights, rule of law, and personal autonomy and individual rights. The irony, according to Friedman, is that Western dependence on oil imports from countries which are ‘Not Free’ has channelled revenues to authoritarian regimes that oppose freedom. This paradox undermines Western credibility as champions of democracy. In a post-9/11 world, where militant extremists reportedly seek safe harbour in oil-exporting states like Saudi Arabia, the consequences of Western oil dependency undermine the West’s long-term security goals. But, when it comes to Friedman’s equation for ‘petropolitics’, the reverse may actually be true. Recent events such as the Arab Spring demonstrate that as the price of oil rises, impacting staple commodity prices, so too does the need for change – change that is blocked by Western dependence on remaining regimes.

    Bottom-of-the-barrel security

    Western countries reliant on fossil fuel imports from nations ruled by authoritarian regimes are suffering from a crisis of legitimacy – a crisis which could render us more insecure in the long term. In Algeria, where the Arab Spring has not resulted in full-on revolution, violent extremists recently made their presence felt at the ‘In Amenas’ gas plant, brutally murdering 37 expatriate workers. The plant, which is jointly operated by BP, Norway’s Statoil and Algerian state oil and gas company Sonatrach, is a major supply source for Western markets. Algeria holds roughly 12.2 billion barrels of crude oil reserves, and 85% of its oil exports are destined for European and North American markets. Under the leadership of Abdelaziz Bouteflika, whose five-year executive terms are renewable indefinitely, Algeria certainly does not rate highly on Freedom House’s ‘Freedom Ratings’. Military and intelligence services strictly monitor and interfere with open elections. But the Arab Spring may not ever reach Algeria precisely because of the stability brought to the country by a Western-funded heavy-handed regime, which goes to great lengths to protect the general population from militant Islamist extremists and pro-democracy activists alike. Saudi Arabia and the UAE are governed by similarly oppressive regimes; regimes which subvert democracy in favour of ‘stability’. Both supply oil and gas to the West. Both benefit from revenues gained through Western dependence in spite of their heavy-handedness.

    Interests versus values

    The Arab Spring has been full of unfortunate surprises linking former and current administrations to corrupt leaders. Photos of a smiling Tony Blair, getting up close and personal with the much-maligned Colonel Gaddafi, were a hit in the mainstream press as well as online following the collapse of the Gaddafi regime. Not long before that, the Bush family’s close ties to the Saudi royal family did little to lend credence to their Middle East pro-democracy campaigns in the early 90s and 2000s.

    Germany is in a similarly awkward position as the largest energy consumer in Europe, with oil making up 38% of Europe’s overall consumption in 2011. Germany is Russian state-controlled energy giant Gazprom’s biggest European customer, with 34% of total sales volume of Russian ‘blue fuel’ destined for German markets last year. There was therefore more than a hint of hypocrisy in Angela Merkel’s recent remarks, during a visit by Vladimir Putin to a trade fair in Hanover, that Russia ‘needs more NGOs’. The statement was made in regard to a Russian law passed last year requiring all NGOs that receive overseas funding to register as ‘foreign agents’. Topless Ukrainian activists from the pro women’s rights group ‘Femen’ made their presence felt at the trade fair, drawing attention to Russia’s crackdown on civil society groups and independent media organisations. Russia’s authoritarianism is a key element of the Putin government, but the issue arguably receives little mainstream coverage in the West compared to the Middle East.

    Germany’s and, by extension, Europe’s de facto dependence on Gazprom to meet their energy needs provides yet another example of why Western countries need to develop a more sustainable energy security strategy. It is difficult to legitimately champion broad concerns about upholding civil protections when some of your largest business partners engage in the shadowy practice of denying basic freedoms to their own citizens.

    Renewable energy… and freedom?

    In light of the above, we can welcome new approaches to energy security aimed at reducing dependence on fossil fuel imports from authoritarian states. The Obama Administration’s ‘All of the Above’ energy strategy, as well as the pragmatism which the European Union, led by Germany, has shown in pushing forward a low carbon agenda, are both steps in the right direction. Obama has pledged to double American energy efficiency by 2030, setting aside $2 billion over 10 years to support research into ‘a range of cost-effective technologies’, including electric vehicles, domestically-sourced biofuels, fuel cells, and domestically-produced natural gas. The plan also includes scope for reducing oil imports while boosting renewable electricity generation from wind, solar and geothermal sources. Although Obama’s plan is far from low carbon, it shows promise. By comparison, the UK Government, which at one time pledged to be the ‘greenest government ever’, has attempted to push forward its nationwide low carbon transition through the establishment of a Green Investment Bank. However, fairly recent public squabbles between Ed Davey, Secretary of State for Energy and Climate Change, and Chancellor George Osborne, the UK’s finance minister, have called that agenda into question.

    Friedman’s claim of an inverse correlation between high oil prices and authoritarianism is flawed. But his point about ‘petropolitics’ is still crucial to security, not only because he tries to link oil price fluctuations to authoritarian politics, but also because he highlights how Western dependence on foreign oil provides significant revenue streams on which remaining authoritarian governments can rely. It is also important to point out that as the global price of oil becomes more volatile (see: ‘peaky behaviour’), the economic stability of authoritarian regimes that have consolidated their power bases around fossil fuels will almost certainly erode. Moreover, as oil prices continue to destabilise staple commodity prices, authoritarian regimes will almost certainly come under increasing pressure from their own populations to step down. Western countries that have formed dubious partnerships with these regimes in order to meet their energy security needs risk further embarrassment when those regimes are toppled by anti-authoritarian movements. Western leaders might then stand by and wait to pick a winner – a dubious strategy at best – in order to ensure that supply shipments are not further destabilised. But is this sustainable?

    Renewable energy is not the most obvious factor for bolstering the strength of nations. But it is fast becoming clear that Western dependency on fossil fuel imports from countries governed by heavy-handed regimes cannot go on. The International Energy Agency has recently announced that power generation from renewable sources worldwide will exceed that from gas and be twice that from nuclear by 2016. That’s a positive sign. As for oil, we will have to wait and see. But if the restoration of Western legitimacy as champions of the “free world” is a top priority for Western leaders, then more support for domestic renewable energy growth is essential.

    Phillip Bruner is Founder of the Green Investment Forum and a guest lecturer in global political economy at the University of Edinburgh

    Image source: United Nations Photo

  • Sustainable Security

    Despite being strictly prohibited in international humanitarian law, child soldiering remains a serious global problem. How effective has the international community’s response to this phenomenon been?

    Child soldiering constitutes one of the most egregious violations of children’s rights: many children are currently actively involved in violent conflict as members of armed organizations, both state and non-state. They can be found on every continent, but sub-Saharan Africa is the epicenter of the phenomenon. These recruited children perform a range of tasks: they participate in combat, lay mines and explosives, and scout, spy, and act as decoys, couriers or guards. Others are used for logistics and support functions such as cooking and cleaning.

    The 1977 Additional Protocols to the Geneva Conventions were the first international treaties to tackle the problem of child soldiering. They prohibit the recruitment and participation in hostilities of children under the age of 15. The 1989 Convention on the Rights of the Child, which has achieved almost universal ratification, retained this age limit of 15. An optional protocol to the Convention, adopted in May 2000, raised the age to 18. It insisted that armed groups should not use children under 18 in any circumstances and called on states to criminalize such practices. However, although the use of children by armed groups is prohibited and defined as a war crime, child soldiering remains a pressing global issue.

    A “time bomb”?


    Child soldiers in Ethiopia. Image by Vittorio Bianchi via Flickr.

    The most commonly cited figure for the number of children involved in conflicts is 300,000. This estimate is, however, not necessarily accurate, as information on child soldier usage is difficult to obtain. Children are often employed in remote conflict zones away from public view and the media, no record is kept of their numbers and ages, and those who employ them often deny their existence or claim that these were isolated cases. Moreover, children often ‘vanish’ after the conflict ends; they are rarely as visible among the demobilised troops as they were among the combatants at the height of hostilities.

    The number of children active in armed groups is admittedly small when compared to the millions of children who do not participate directly as soldiers but are profoundly affected by war. Nonetheless, this group is a tangible, visible, and dramatic example of the deprivation of the human rights of children. It has been empirically shown that using children as active participants in armed conflict has severe consequences not only for the child and their family, but also for society in general. For instance, at a recent Paris conference on child soldiering, the keynote speaker, the former French foreign minister Philippe Douste-Blazy, warned that the use of child soldiers is “a time bomb that threatens stability and growth in Africa and beyond.” They are “lost children,” he argued, “lost for peace and lost for the development of their countries”. A New York Times editorial likewise stated: “They are walking ghosts, damaged, uneducated pariahs.” Ultimately, if one subscribes to these statements, child soldiering may be thought to contribute to the well-known ‘conflict trap’: it might increase the likelihood that conflict recurs.

    There are at least two avenues that link former child soldiers to conflict recurrence. First, it is argued that former child soldiers often have few skills beyond killing and fieldstripping weapons once the conflict has ceased. This is primarily because they receive little to no education while in the bush. This lack of education impedes their labour market success: they earn less and are less likely to be engaged in skilled work than those who were not recruited by armed groups. This may significantly raise their willingness to rejoin armed groups, which might assure them of at least the basic necessities, such as food and perhaps even a bit of money.

    Second, although child soldiers are far from the only ones affected by their experiences of war, they suffer the most and have the least capacity to recover. Typically, former child soldiers have witnessed, experienced and/or perpetrated shocking and disturbing acts of violence during their time with the armed group. This can create great difficulties both for the children themselves and for their reintegration into society. It can lead to physical symptoms, such as headaches, stomach pains and sleep disorders, and mental symptoms, like depression, anxiety, and extreme levels of pessimism.

    One of the most worrying symptoms connected to children’s war participation is a supposed increase in the child’s level of aggression. Because they often lack the capacities and experiences to disengage from the violent and aggressive behavioural norms established during their time in the armed groups, difficulties arise when peace is restored. For instance, they often display ongoing aggressiveness within their families and communities, and frequently use physical violence to resolve conflicts, reflecting an absence of adequate social skills.

    These skills are not easily acquired by former child soldiers, who often return to broken families that might otherwise have helped regulate the use of violence. Hence, some scholars have argued that the phenomenon of child soldiers feeds upon itself: each round of fighting creates a new cohort, traumatized by the war and bereft of economic skills, who then become a potential pool and catalyst for the next spate of violence. Or as Wessells describes it: “A society that mobilizes and trains its young for war weaves violence into the fabric of life, increasing the likelihood that violence and war will be its future. Children who have been robbed of education and taught to kill often contribute to further militarization, lawlessness, and violence”.

    International response

    The response of the international community to counter child recruitment falls usually in two categories: (1) punishing perpetrators by ‘naming and shaming’ practices and by prosecution; and (2) mitigating some of the damage done to children once they leave the armed group by implementing child-centred Disarmament Demobilization and Reintegration (DDR) programs.

    Concerning ‘naming and shaming’ policies, the United Nations often publishes reports naming particular governments and non-state actors that use children. Some have argued that this has an effect, especially on governments, although there is little empirical evidence to back this up. Most child recruitment is, however, carried out by non-state actors, and it seems that media exposure, public pressure, and pressure from international organizations and governments have little to no effect, with the possible exception of rebel groups that strive for secession.

    Besides ‘naming and shaming’ campaigns, the international community has also started to shift its focus to the criminalization of child soldier recruitment. Thomas Lubanga Dyilo, a warlord from the Democratic Republic of the Congo who led the Union of Congolese Patriots, was the first rebel leader convicted by the International Criminal Court for the use of children in military operations. More recently, Charles Taylor, the former president of Liberia, was found guilty of conscripting and enlisting children. It is, however, unclear whether this criminalization has a deterrent effect on child soldier recruitment.

    Once children are out of the armed groups, the international community attempts to mitigate the damage done to them, and the potential consequences for society, by implementing DDR programs. Initially children were often excluded from these programs, as it was argued that they did not pose a post-conflict threat. Moreover, since children cannot be legally recruited, child-centered DDR program elements were not viewed as a routine component of peacemaking. Fortunately, this has changed in recent times, and most DDR programs now include components dedicated to rehabilitating former child soldiers. Usually these programs consist of three components.

    First, former child soldiers are gathered at pick-up points, moved to disarmament sites, and, whenever necessary, disarmed. During the demobilization part of the program, eligibility for the DDR program is determined through a screening process in which they receive identity and discharge documents. Reintegration is the third component of the DDR program, which starts at care centers – transit facilities that help prepare former child soldiers for going home and give non-governmental organizations time to prepare families and communities to receive the children. During their time at the center, emphasis is placed on educational activities, recreational activities, psychological support and counselling, and various life skills trainings. Once the parents or extended family members are traced, the children are taken home to their families and join an appropriate educational program.

    The effectiveness of these programs in reducing recidivism and establishing post-conflict stability is, however, not well established. Some scholars conclude that these programs are generally ineffective at disarming ex-combatants, reducing the likelihood of recidivism, and addressing their economic and security concerns. This lack of supporting evidence might be due to conceptual and operational problems with defining the outcome of these programs (and how to measure it), and a lack of information on existing DDR programs (money, personnel, mission statements, etc.). But it might also be due to the content of these programs and how they approach child soldiering. Many child-centered DDR programs, for instance, are put in place under enormous time pressure, are often disconnected from the perceptions of local communities, and are based on a one-size-fits-all principle. Consequently, some scholars have called for more flexibility within these programs to enhance their effectiveness. Only then can efforts to promote social reconstruction bear fruit.

    Roos Haer (PhD, University of Konstanz, Germany) is a postdoctoral researcher at the University of Konstanz at the chair of International Relations and Conflict Management. Her current research interests include the role of children in conflict, child soldier recruitment by state and non-state actors, Disarmament Demobilisation and Reintegration programs, and survey methodology in less developed (conflict) countries. Her research is often based on quantitative field research conducted in Africa. She has published in, among others, the European Journal of International Relations, Conflict Management and Peace Science, and Third World Quarterly, and has published a book with Routledge.

  • Sustainable Security

    Author’s Note: This contribution is a shorter version of the article “Assessment of Transboundary River Basins for Potential Hydro-political Tensions” by De Stefano et al. 2017.

    The impacts of new dams and diversions are felt across borders, and the development of new water infrastructure can increase political tensions in transboundary river basins. International water treaties and river basin organizations serve as a framework to potentially deescalate hydro-political tensions across borders.

    The availability of freshwater in the right quantity and quality at the right times for dependent systems is required for human security, environmental security, and economic growth. As populations and economies have grown, water has become scarcer and more variable in certain locations, leading to concerns over how water may lead to conflict. Though violent conflicts over water occur more often at the local level, disputes over water are also possible at the international level, particularly as impacts of water use spill across international borders.

    Dams and other water infrastructure help manage water variability—providing water in times of drought and dampening the effects of floods. These benefits come with ecological impacts, as large-scale water infrastructure affects the hydrologic function of the basin in which it is built. This includes altering the timing and/or magnitude of flows, altering aquatic migratory patterns, and preventing sediments from moving downstream. Thus, the construction of large-scale water infrastructure such as dams and water diversions can become a significant source of tension between countries sharing a river basin.

    The significance of new dams and water diversions is increasing across the world as many countries have begun construction on large infrastructure projects in internationally shared river basins. This is evident in places such as the Nile Basin, where the Ethiopian government’s construction of the Grand Ethiopian Renaissance Dam has been occurring without an agreement with downstream Egypt, and the news of its construction has been met with violent protests and strong rhetoric from Egyptian politicians. Water diversions are not the only factor potentially creating tension between countries over shared waters. Other factors including high population growth, urbanization, increasing water pollution, over-abstraction of groundwater, climate change and water-related disasters can contribute to tensions.

    Building institutional capacity (treaties and river basin organizations) is a crucial factor in decreasing the likelihood of conflict over shared waters – particularly if the agreements contain mechanisms that reduce uncertainty and increase flexibility in water management. Past research suggests that a basin will be more resilient to conflict if it has international mechanisms able to manage the effects of rapid or extreme physical or institutional change. However, the mere presence of institutions does not necessarily indicate that a basin is resilient, nor does it indicate that water-related conflict will be absent.

    Countries can exploit treaties, since they are not easily enforceable. Treaties can also be structured in a way that exploits (or worsens) already-existing inequities between countries. Treaties can not only solidify power imbalances, but can also lock out public participation or even become a source of conflict themselves. This can lead to a lack of participation by some countries.

    Previous global-scale studies of potential future conflict in river basins have identified basins at risk through predictive and forecasting methods, treaty analysis, and climate change projections. Our recent study aims to contribute to those analyses by examining multiple issues – stressors on political relationships due to the development of dams and water diversions, how treaties and river basin organizations can mitigate these stresses, and external socio-environmental factors that could exacerbate these tensions in the near future. We integrate these multi-faceted data to map the risk of potential hydro-political tensions in transboundary basins across the globe.

    Findings

    We found several basins to be vulnerable to tensions over water, particularly in Southeast Asia, South Asia, Central America, the northern part of South America, the southern Balkans, and different parts of Africa (Table 1). Construction of new dams and diversions is ongoing or planned in at least 57 basins worldwide. The new dams are highly concentrated in a few geographic areas, including regions in Nepal, Brazil, and India. Most international river basins were found to have a moderate risk of tensions over water (see Figure 1). Twenty-two basins were classified as having a very high risk of tensions, and 14 basins as having a high risk. Many of the higher-risk basins are concentrated in Sub-Saharan Africa and in Central and Southeast Asia. These basins are experiencing a combination of factors rendering them vulnerable to conflict, including high rates of dam development; limited, weak, or nonexistent treaty coverage; high water variability; and low gross national income per capita.

    Concluding remarks

    The indicator-based analysis (Figure 1) uses a combination of environmental, political, and economic metrics, including high or increased climate-driven water variability, presence of armed conflicts, and low gross national income per capita, to identify vulnerability and resilience to tensions brought forth by water resources development in international watersheds at a global scale. The development of new dams and water diversions is very unevenly distributed.

    Certain basins will be much more impacted than others. Most of the new water infrastructure is in upstream portions of river basins, with many dams being built in emerging or developing economies that require increased hydropower and water regulation to sustain their economic development. Many of these areas still lack well-developed instruments and institutions that would contribute towards transboundary cooperation.

    The ability to understand when (and where) these variables combine to potentially create conflict is critical to managing and transforming future conflict in transboundary basins. Understanding where conflict might occur can contribute towards guiding policy interventions, focusing capacity-building efforts where needed, and actualizing worldwide initiatives of integrated water resources management. This includes achieving the United Nations’ Sustainable Development Goal Target 6.5 (“By 2030, implement integrated water resources management at all levels, including through transboundary cooperation as appropriate.”).

    Jacob D. Petersen-Perlman is a Research Analyst at the University of Arizona Water Resources Research Center. His research areas of interest include transboundary water conflict and cooperation, water security, and water governance.

    Lucia De Stefano is Deputy Director of the Water Observatory of the Botín Foundation and Associate Professor at Complutense University of Madrid (Spain). Her main fields of interest are multilevel water planning, drought management, groundwater governance, transboundary waters, and the assessment of good governance attributes from different disciplinary perspectives.

    Eric Sproles is a hydrologist at the Centro de Estudios Avanzados en Zonas Áridas in La Serena, Chile and a Courtesy faculty member at Oregon State University. His research areas of interest include climate change impacts on hydrology, particularly on mountain snowpack and streamflow, and remote sensing of terrestrial water storage.

    Aaron T. Wolf is a professor of geography in the College of Earth, Ocean, and Atmospheric Sciences at Oregon State University and directs the Program in Water Conflict Management and Transformation, through which he has offered workshops, facilitations, and mediation in basins throughout the world. His research focuses on issues relating transboundary water resources to political conflict and cooperation.

  • Sustainable Security

    Acclaimed military historian Dr. Mark Moyar discusses the history and current use of US special operations forces, America’s most elite soldiers.

    This interview was originally conducted for the Remote Control project.

    Q. Your book Oppose Any Foe was recently published. The book examines the history of U.S. special operations forces. What are the origins of America’s special operations forces and why were they created?

    Most of America’s special operations forces trace their roots to World War II. The Army Rangers were created in 1942 as a means of collaborating with the British Commandos, at a time when the Commandos were a central element of Winston Churchill’s raiding strategy. The Rangers were disbanded after World War II and again after the Korean War, but they were reincarnated in the 1970s and have been a part of the US Army ever since. President Franklin Roosevelt created the US Marine Corps Raiders in 1942 because his son, who was enamored with commando-type forces, convinced him to form Marine special operations forces despite objections from the head of the Marine Corps. Marine special operations forces were dissolved in 1944, not to be reconstituted until 2006, and eventually the new organization took on the Raider name.

    The US Navy fielded Frogmen in WWII as a means of clearing channels for amphibious landings, and retained some of the units after the war. In 1961, some of the Frogmen were converted into members of Sea, Air, Land Teams (SEALs). The Office of Strategic Services, the primary US intelligence agency during World War II, created special operations forces such as the Jedburghs and Operational Groups, which in the 1950s became the model for the US Army Special Forces.

    Q. In the early years, how strategically effective were US special operations forces?

    During both World War II and the Korean War, the United States formed special operations forces for the purpose of raids on enemy “soft spots.” In both cases, the Americans soon discovered that opportunities for such missions were few and far between. Given the need for regular infantry in these wars of grinding attrition, the special operations units were routinely employed in conventional infantry missions. For the purposes of stealth and speed, these units carried less heavy equipment than other line units, which proved to be a major handicap in conventional combat.

    The heavy losses sustained in battle led to the dissolution of most special operations units prior to the ends of both World War II and the Korean War. The special units of the Office of Strategic Services were somewhat more effective in their role of supporting resistance movements behind enemy lines, but for the most part they had little impact on the tide of battle, and they too were disbanded after the war. The US Navy Frogmen were a notable exception to the general trend, as their performance in clearing obstacles prior to amphibious landings was deemed so successful that they were retained after war’s end.

    Q. In your book, you describe how the future of special operations forces at the end of the 1950s looked bleak, but that the Vietnam War seemed to mark a turning point. What roles were US special operations forces used for during the Vietnam campaign, and how did this experience affect their organisational structure and future use?

    President John F. Kennedy was more interested in special operations forces than any other US President, before or since. He enlarged the Army Special Forces and created new units in order to counter insurgencies in Vietnam and other third-world countries. The largest Special Forces program, the Civilian Irregular Defense Groups (CIDGs), performed both guerrilla and counterguerrilla missions, as they shifted from defending their villages to attacking infiltrating North Vietnamese Army units.

    In addition, the Special Forces attempted to insert intelligence collectors and saboteurs into North Vietnam, but most of the people they sent were compromised or killed. Special operations units also carried out reconnaissance missions in Laos and Cambodia, advised paramilitary forces, and conducted raids. After the war, conventional forces and special operations forces blamed each other for failures in Vietnam, based largely on inaccurate perceptions of the war, and those accusations would remain a source of friction for decades to come. Because conventional officers had greater clout, the special operations forces suffered the greater loss in resources after the war.

    Q. In the post-Vietnam era, there was a rise in hostage taking by Islamic terrorists which created the need for soldiers who could take out terrorists quickly and effectively without harm coming to hostages. How did this demand change U.S. special operations forces?

    In the post-Vietnam era, as in other post-conflict eras, special operations forces sought new missions to keep them occupied and demonstrate their worth. An upsurge in hostage taking by Islamic terrorists in the early 1970s led to the reconstitution of the US Army Rangers in 1974 and the formation of Delta Force in 1977 and SEAL Team Six in 1980. The Delta Force mission to rescue US hostages in Tehran in April 1980 failed spectacularly, but it led to a series of reforms with far-reaching implications for special operations forces.

    In the aftermath of the abortive raid, the US government formed the Joint Special Operations Command to alleviate the command problems that arose during the operation, as well as the 160th Special Operations Aviation Battalion to prevent recurrence of aviation mishaps. The Iran calamity also gave impetus to the reforms of 1986, which included creation of Special Operations Command, appointment of an Assistant Secretary of Defense for Special Operations, and authorization of a separate funding line for special operations forces. The inception of Delta Force and SEAL Team Six gave special operations forces permanent raiding capabilities, which would be used for different ends in the early twenty-first century.

    Q. Moving into the twenty-first century, the post-9/11 era has seen a significant increase in the use and numbers of US special operations forces. During the Afghanistan campaign, U.S. special operations forces played an important role in the overthrow of the Taliban. How much did the Afghanistan experience and its perceived successes influence the strategic thinking behind the U.S. military campaigns which would follow?    

    The Northern Alliance militias defeated the much larger Taliban armed forces in 2001 thanks to US Special Forces advisers, whose chief task was guiding precision munitions onto Taliban targets. It was the first time that American SOF had played a role that could be characterized as strategically decisive, which encouraged the view that SOF were a strategic instrument. That view in turn fueled decisions to enlarge SOF and employ them in isolation from conventional forces. Efforts to rely primarily or solely on SOF, however, did not yield the anticipated successes.

    The use of SOF to support local actors failed twice in Afghanistan shortly after the fall of the Taliban: at Tora Bora at the end of 2001 and in Operation Anaconda in early 2002. SOF would also come up short when the Obama administration charged them with the task of building an army of Syrian rebels. Both George W. Bush and Barack Obama attempted to achieve strategic success through SOF surgical strike operations against the leaders of insurgent and terrorist organizations, but the elimination of large numbers of leaders failed to destroy these organizations.

    Q. What were some of the reasons for these failures you mention?

    SOF did not achieve their objectives at Tora Bora because their Afghan partners were not as competent or reliable as the Northern Alliance had been. The Afghan militiamen at Tora Bora failed to pursue Bin Laden aggressively, ensuring that he would escape. In Operation Anaconda, the Afghan partners panicked at the first setback and abandoned the battlefield. In the case of Syria, American special operators were unable to recruit substantial numbers of rebels because the White House put unrealistic constraints on recruitment and because most of the moderate rebels had been wiped out by the time the United States was prepared to back them.

    The many tactical achievements of surgical strike operations did not produce strategic success because the enemy was able to replace lost personnel with competent individuals, in part as the result of popular dissatisfaction with the surgical strikes.

    Q. As you previously mentioned, US special operations forces have expanded much since 9/11. Do you think the US is over-reliant on special operations forces and, if so, why has the US become so dependent on them?

    After 9/11, the Bush administration built up special operations forces for “manhunting” operations against extremist leaders, in the hope that extremist organizations could be destroyed through decapitation. Those organizations proved capable of withstanding the precision strikes, which led the United States to the use of special operations forces against lower levels of insurgent groups. Whereas the Bush administration sought to employ the special operators in concert with conventional forces in Iraq and Afghanistan, the Obama administration began seeking ways to use them as low-cost substitutes for large conventional forces.

    The Obama administration also decided to send more special operations forces into failed and failing states such as Somalia, Yemen, and Iraq to support friendly governments or insurgents. There is now general recognition in the US SOF community that the operators have more work than they can handle with their existing manpower base, and hence some of their work must be shifted to other military forces or civilian agencies.

    Since 9/11, the demands for SOF have exceeded the supply, which explains why the stresses on the forces have become unsustainable. Rectifying the problem will require reducing the deployment pace of special operations forces, which means that some tasks will either have to be performed by other forces, or not performed at all. US conventional forces have the capacity to perform some of those tasks, so the best solution is to shift duties to the conventional forces.

    Q. How much transparency and accountability has there been regarding the use of special operations forces in the US? 

    From their inception, US special operations forces have functioned under conditions of greater secrecy than other military forces. The primary reason has been the need to conceal their activities from the enemy: the more that was known about them, the better the enemy could combat them. Secrecy, though, has also shielded special operations forces from the scrutiny of the American public, media, and Congress.

    Lack of transparency has at times made it more difficult to hold special operations forces accountable. Congress, which for decades held special operations forces in high esteem, turned against Special Operations Command in the latter part of the Obama administration as a result of the command’s unwillingness to share information with Congress. Ultimately, Congress used its authority over funding to compel greater transparency.

    Q. One of the many interesting things about your book is that it highlights how important certain presidents were in deciding the types of roles that special operations forces were used for. Thus far, has the use of special operations forces under Trump differed from their use under Obama? 

    It is too early to tell how the use of special operations forces will differ under the Trump administration. The Defense Department is still fleshing out strategy, and has yet to fill key positions. Given the heavy involvement of special operations forces in a multitude of pressing tasks, a certain amount of continuity is inevitable.

    About the interviewee

    Mark Moyar is director of the Project on Military and Diplomatic History at CSIS. The author of six books and dozens of articles, he has worked in and out of government on national security affairs, international development, foreign aid, and capacity building. Dr. Moyar’s newest book is Oppose Any Foe: The Rise of America’s Special Operations Forces (Basic Books, 2017), the first comprehensive history of U.S. special operations forces. He is currently writing the sequel to his book Triumph Forsaken: The Vietnam War, 1954-1965. Moyar has served as a professor at the US Marine Corps University and a senior fellow at the Joint Special Operations University and has advised the senior leadership of several US military commands. He holds a BA summa cum laude from Harvard and a PhD from Cambridge.

  • Sustainable Security

    A version of this article was originally published on openSecurity’s monthly Sustainable Security column on 18 November 2014. Every month, a rotating network of experts from Oxford Research Group’s Sustainable Security programme explore pertinent issues of global and regional insecurity.

    This article is part of the Remote Control Warfare series, a collaboration with Remote Control, a project of the Network for Social Change hosted by Oxford Research Group.

    While the world’s attention has been focused on the US-led military interventions in Iraq and Syria, a quieter build-up of military assets has been ongoing along the newer, western front of the War on Terror, as the security crises in Libya and northeast Nigeria escalate and the conflict in northern Mali proves to be far from over. In the face of revolutionary change in Burkina Faso, the efforts of outsiders to enforce an authoritarian and exclusionary status quo across the Sahel-Sahara look increasingly fragile and misdirected.

    The New Frontier

    In early August, coinciding with the restructuring of French military operations in the Sahel and the US-Africa Leaders Summit, Oxford Research Group and the Remote Control Project published a comprehensive assessment of counter-terrorist operations targeting jihadist groups in the Sahel-Sahara region of north-west Africa. That report found extensive and growing evidence of combat, intelligence, surveillance and reconnaissance (ISR), training and equipment, abduction and rendition programmes on this new frontier. While France and the US were easily the most active foreign actors, the UK, Canada, the Netherlands and several other NATO states were also found to be increasingly involved in special forces and ISR operations.

    The launch coincided with the onset of air attacks on Islamic State targets, initially by the US in northern Iraq and latterly by a broad coalition of Western and Arab states in Iraq and Syria. Since then, amid worsening security crises in Libya, Nigeria, and northern Mali and Niger, US and UK ISR activity has increased, French deployments in Mali have been reinforced, a new configuration of Arab states has provided impetus for foreign intervention in Libya’s civil war, and a “black spring” backlash is emerging against the west’s authoritarian allies in the region.

    Libya on the frontline

    Libya is at the core of the security crisis in the Sahel-Sahara. Since the NATO-led military intervention which overthrew the Qaddafi regime in 2011, Libya has become a security and political vacuum and a major exporter of weapons and insecurity in the region. This has included the return home to the Sahel of hundreds of combatants formerly given refuge or employment by the Libyan state.

    Libya’s civil war reignited in May with the launch of “Operation Dignity” by secular forces from eastern Cyrenaica, seeking to wrest control of Benghazi and Tripoli, the two main cities, from Islamist militia. This has been largely a failure. Most diplomatic missions evacuated Libya in late July and Tripoli and its burnt-out international airport fell to militia from Misrata (Libya’s third city) and allied Islamist groups on 23 August. Benghazi has fallen increasingly into the hands of Salafist groups and the nearby city of Derna is run as an Islamic emirate by Ansar al-Shari’a. Much of the rest of Libya is dominated by local tribal leaders or armed factions, beyond any state control.

    Anti-Gaddafi rebel looks to the sky in the oil town of Ras Lanouf, eastern Libya, Sunday, March 6, 2011. Source: Cropped version of BRQ Network image (via Flickr)

    Indeed, there are now two rival, elected Libyan governments. The one recognised internationally meets in a Tobruk hotel. It controls little beyond this Egyptian border outpost and its electoral mandate was recently invalidated by the Tripoli-based Supreme Court. The revived General National Council in Tripoli governs the capital and north-west and is dominated by an affiliate of the Muslim Brotherhood and other Islamist factions.

    Libya has thus become a new frontline in the proxy war between the international proponents and opponents of the brotherhood. Qatar and the United Arab Emirates (UAE) were the two main Arab sponsors of the anti-Qaddafi rebellion and contributed to the air attacks on Qaddafi’s forces. They now find themselves backing different sides in Libya. On 17 and 23 August, days after the Tobruk parliament called for foreign military intervention, Emirati aircraft based in and refuelled from Egypt launched unclaimed attacks on pro-Islamist militia around Tripoli airport.

    Despite official denials, it appears that air attacks on Salafist groups in Benghazi in mid-October were launched by Egyptian aircraft. Egypt and the UAE accuse Qatar, the primary sponsor of the brotherhood in Egypt, and Sudan, long ruled by a military affiliate of the brotherhood, of funnelling arms to the various Libyan Islamist militias.

    While the US has condemned all post-2011 foreign intervention in Libya, it is likely that it was aware of the movement of UAE aircraft to Egypt, given that fighters presumably left from Al-Dhafra air base in Abu Dhabi, which is shared by US and French squadrons. Emirati refuelling aircraft are based at Al-Minhad in Dubai, where the UK Royal Air Force (RAF) has an expeditionary wing. These aircraft presumably were cleared by Saudi Arabia (another great opponent of the brotherhood) to overfly its territory. The aircraft and weapons used were supplied by the US and/or France.

    France stands apart among Western allies in its advocacy of, and preparedness for, renewed military intervention in Libya. Since the fall of Tripoli, its defence minister, Jean-Yves Le Drian, has several times advocated a UN mandate for intervention against Islamist groups in Libya and hinted that France may need to act unilaterally sooner or later. Whereas Egypt is most concerned about Salafist groups in Derna and Benghazi, France is focused on al-Qaida affiliates in south-west Libya. Already this year it has opened bases near the Niger-Libya and Chad-Libya borders and revived ISR operations from its air base at Faya-Largeau in northern Chad.

    Northern Mali and Niger

    France cares about southern Libya primarily because of its security commitments to Mali, Burkina Faso, Chad and Niger, the latter hosting multi-billion-euro French investments in uranium production. Since France reorganised its forces in the Sahel from the Mali-focused Opération Serval to the pan-Sahel deployments of Opération Barkhane in mid-2014, security in northern Mali has worsened significantly. This relates partly to the decline in French troop numbers there but also to the reorganisation of regional jihadist groups and the deterioration in relations between the Malian state and local armed separatists. Twenty UN peacekeepers from the Multidimensional Integrated Stabilisation Mission in Mali (MINUSMA) have been killed in at least five jihadist attacks in the north of the country since September. In response, France has had to reinforce its deployments in Kidal district, pulling in troops and equipment from its base in Côte d’Ivoire.

    On 9 October, French forces under Barkhane mounted their first publicly acknowledged offensive action outside Mali, attacking a convoy supposedly transporting militants and weapons from Libya through Niger towards Mali. Militants apparently moving from north-eastern Mali attacked Nigerien security forces in Ouallam three weeks later, freeing dozens of Islamist prisoners and attacking a refugee camp. Citing increased militant activity, the huge Algerian military is also reported to have moved thousands of troops to its borders with Niger and Mali since last month.

    The US has also sought to extend its own ISR deployment in Niger, announcing in early September that it would be moving its two MQ-9 Reaper unmanned aerial vehicles from Niamey airport, where they have been deployed since early 2013, to Agadez, the main town in the desert north. As with French redeployments in 2014, the objective appears to be to bring more of southern Libya into range of ISR assets.

    Humanitarian opportunity

    RAF Panavia Tornado GR4 fighter over Iraq during a combat mission in support of Operation “Iraqi Freedom”, on 16 August 2004. Source: SSgt. Lee O. Tucker – Official U.S. Air Force Photo no. DF-SD-07-05791 (via Wikipedia)

    Perhaps least analysed of recent military deployments to west Africa have been those associated ostensibly with humanitarian, rather than security, crises. In late August, following Boko Haram’s seizure of territory and declaration of its own caliphate in northern Nigeria, the RAF deployed a number of Tornado GR4 aircraft (three, by some reports) from the UK to the French air base in N’Djamena, Chad. This base is also used by US drones.

    Unusually, the Ministry of Defence issued almost no comment on this deployment and refused to disclose how many aircraft were involved, where they operated from or exactly when and why they were deployed. Officially, they were on an ISR mission in support of attempts to locate the more than 200 girls abducted by Boko Haram from a boarding school in Chibok in north-eastern Nigeria in April. All aircraft had officially returned to the UK by 15 October. While the Tornado GR4 is often deployed as a reconnaissance aircraft, it is dual-use and its primary role, as in Iraq for example, is as a medium-range strike aircraft.

    Also very little reported was the US Marine Corps’ establishment during September of three new “co-operative security locations” in Senegal, Ghana and Gabon, along the west African coast. These are to be bases permanently prepared and supplied, but not necessarily manned, to support US interventions under the Obama administration’s “New Normal” doctrine, which facilitates defence or evacuation of US interests and citizens under (terrorist) attack in any country. While marines and their V-22 Osprey aircraft may continue to be based in Spain, Italy and Djibouti, these new west African bases are specifically launch pads for future US military interventions. US military contractors have been stockpiling aviation fuel at these and many other African airports for several years.

    Interestingly, the Senegal facility has been specifically referred to as an “interim staging base”—the usual terminology for a Special Purpose Marine Air-Ground Task Force base—in the context of the US military’s humanitarian mission to control the Ebola epidemic in Liberia. As with previous Obama-era deployments against the Lord’s Resistance Army in and around Uganda and in support of the Chibok abductees, the escalation of a US military presence appears to have been achieved under the cover of humanitarian imperatives and initiatives.

    Towards a Black Spring?

    All this matters because if there is one thing that we should have learned since 2001 it is that Western military interventions to oppose terrorism on foreign soil do not work: they tend to destroy the “host” country while amplifying the threat to the “far enemy”. And proxy wars between the Arab states so lavishly armed by the US, France, the UK and Russia tend to end in something worse than tears. Neither the “war on terror” nor the “Arab spring” (counter-)revolution has yet run its course.

    “The Army: National Shame” caption held by protester in Mali against the 2012 coup. Source: Wikipedia

    The political crisis in Burkina Faso, in which the authoritarian president of 27 years, Blaise Compaoré, was overthrown in a popular uprising turned military coup on 31 October, provides ample warning of the toxic relationships Western states are forging in the Sahel-Sahara in the name of counter-terrorism. As in Mali in 2012, the coup leader in Burkina was an ambitious, US-trained officer. French and US special-operations forces will probably retain their semi-secret bases but their political masters have again been embarrassed by their own role as props to the hated old regime.

    Protesters in Burkina Faso—a remarkably civil, peaceful, articulate and internationalist society that belies the Sahel’s reputation for isolation—have talked up the precedent of their revolution for a “black spring” that would sweep away the Western-armed and educated tyrants whose misrule blights the south of the Sahara. They have chosen a very different path to the eschatological nihilism of Boko Haram but their hunger for change is similarly derived from generations of stultifying and systematic marginalisation under a corrupt, militarised and foreign-sponsored elite.

    Like Tunisia before it, Burkina Faso may be the clear-sighted vanguard that has the self-belief and self-discipline to manage a successful transition from autocracy. It is hard to hold such hope for the supposedly firmer pillars of western Sahel strategy, Chad and Mauritania, which have known almost nothing but rule by armed clans. Nor for Algeria—where the “printemps noir” epithet was coined during a forgotten 2001 Berber uprising—the last of whose mid-century revolutionary leaders yo-yos, paralysed and dying, between Algiers and French clinics.

    Sahara bores are wont to remind outsiders that the great desert is a crossroads, not a cul-de-sac, composed far more of enduring rock than shifting sands. The opposite can perhaps be said of the region’s militaries. Viewed within fragile states, military institutions may look rock-strong but they are built on sand and bound to fickle alliances. As in Burkina, it is society that is the bedrock with the power and permanence to anchor a sustainable strategy for peace and stability.

    Trying to contain a revolution in the Sahel-Sahara is not a long-term option but channelling it may be. Change is coming, one way or another.

    Richard Reeve is the Director of Oxford Research Group’s Sustainable Security programme. He has researched African peace and security issues since 2000, including work with ECOWAS, the AU and the Arab League.