With three terror strikes in London and Manchester in the last three months that have claimed 34 lives and wounded over 200, Britain’s security services seem to be under siege. After the latest outrage near London Bridge on Sunday that killed seven, Prime Minister Theresa May pointed to the ‘new threat of terrorism breeding terrorism’. In an address from Downing Street, she said: “Perpetrators are inspired to attack not only on the basis of carefully constructed plots ... and not even as lone attackers radicalised online, but by copying one another and often using the crudest of means of attack.” She was referring to Islamists using weapons as rudimentary as a knife or a hired vehicle to cause mayhem, as was done earlier near Westminster, and as far afield as Berlin and Nice. It now transpires that the suicide bomber who killed 22 people at a Manchester concert last month detonated a shrapnel-laden homemade bomb and was acting largely alone. The Boston Marathon bombing of April 2013 was carried out by two brothers who learnt to fabricate a pressure cooker bomb packed with nails and ball bearings from an online magazine of the notorious Yemeni al-Qaeda franchise. For security agencies in the US, Britain, France and several other countries, keeping tabs on suspect individuals or small groups is becoming a nightmare, because they rarely show any prior links with known jihadi networks. Rather than a vast terror organisation with tentacles spread across the world, the damage is being done by attackers who self-radicalise, absorb easily available terror content on the Web, and carry out copycat attacks.
This explains why Prime Minister May singled out internet companies when she said: “We cannot allow this ideology the safe space it needs to breed – yet that is precisely what the internet, and the big companies that provide internet-based services provide.” Promising tougher anti-terror laws, the British PM also called for agreements between allied democratic governments ‘to regulate cyberspace’, so as to prevent the ‘spread of extremism and terrorism planning’.
While Prime Minister May has raised a matter of serious concern, namely a terrorist mindset and ideology that spreads through the internet to seek out willing converts in the anonymity of their homes, questions are being posed about her prescription for preventing this sinister phenomenon. Will Britain emulate the Chinese government’s aggressive internet policing and blocking of rogue websites? Does her proposal to regulate the internet and its encryption mean that security agencies will be armed with the powers and capability to read everyone’s messages? Of course, there are other components of the British PM’s address that her detractors have locked on to, like her call for Britons to be prepared to have ‘embarrassing conversations’ about extremism, or for universities to take on a ‘new duty’ to prevent extremist radicalisation. Does this presage a surveillance apparatus with its eyes and ears everywhere, of the kind President Trump has been talking about for US Homeland Security with his concept of ‘extreme vetting’ of suspect people? As Prime Minister May warns of ‘extremism online’, civil liberty groups contend that policing the Web will simply drive terror mongers into its far darker, inaccessible corners. Meanwhile, mega internet companies like Google and Facebook are sure to face a rising tide of demands to be more proactive and effective in removing terror content from their platforms. Facebook has already responded by pointing out that online extremism can only be tackled via strong partnerships with policymakers, civil society and others in the tech industry. It claims that whenever any such objectionable content is detected by its ‘combination of technology and human review’, law enforcement authorities are alerted at once. But realistically speaking, effective monitoring of the Web to detect terror content is easier said than done, which is why China takes recourse to blocking websites wholesale.
Photos, videos and writings with obviously violent content may be simpler to mark out. But what about subtler content? Besides, is there any effective filter to distinguish between the online campaigns run by terror outfits and those of genuinely oppressed groups? Clearly, the artificial intelligence (AI)-based filters used by internet companies will need to get far better at the job. These companies will also have to frame an adequate strategy for when national governments ask them to turn over unencrypted material, or even data about users searching out terror-related content on the Web. This is relevant for India too, with its own terror threat perception.