The UK prime minister is urging world leaders to put pressure on tech companies to do more in the fight against extremism. AI may be the answer, but it will come with a cost.
The G7 is meeting, and of the seven leaders only Mrs Merkel, Canada's Justin Trudeau and Shinzo Abe of Japan are old hands at these summits. Mrs May will be a newcomer, as will Mr Trump, Mr Macron and the Italian prime minister, Paolo Gentiloni.
The G7 may not be the force it used to be, but its voice is still heard across the world. And Mrs May is using the summit to call upon world leaders to urge social media and technology companies such as Facebook, Google and Apple to do more to help fight terrorism.
The big tech companies are not great employers: relative to their turnover, their headcounts are tiny. Perhaps they could hire more staff to read through every posting on Facebook and identify people who pose some kind of terrorist risk.
But that is an impossible task. There are just shy of a billion hours of video on YouTube; Facebook has 1.94 billion active users and an estimated 83 million fake profiles, and 510,000 comments are posted every minute.
To employ enough staff to monitor all activity on its site, Facebook would need to take on several million people. Even that would fail to do the job: you cannot tell much just by reading comments in isolation; you need to combine them with all the other information available.
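A back-of-envelope calculation shows the scale, using the comment figure quoted above plus an assumed review rate (the ten seconds per comment and eight-hour shifts are illustrative assumptions, not anything Facebook has published):

```python
# Back-of-envelope estimate of human moderation headcount.
comments_per_minute = 510_000          # figure cited in the article
comments_per_day = comments_per_minute * 60 * 24

seconds_per_review = 10                # ASSUMPTION: purely illustrative
shift_seconds = 8 * 3600               # one 8-hour shift per moderator
reviews_per_moderator_per_day = shift_seconds // seconds_per_review

moderators_needed = comments_per_day // reviews_per_moderator_per_day
print(f"{moderators_needed:,} moderators for comments alone")
```

That is roughly a quarter of a million full-time reviewers for comments alone, before a single photo, post or hour of live video is counted, which is why a total headcount running into the millions is plausible.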
No, the only way it can be done is via AI.
The data generated by Facebook comments, by web browsing history and by mobile phone records is not merely enormous; such a word fails to do it justice.
But AI technology is developing fast enough that, within a few years, analysis of such data will be possible. It will take 'state of the art' in more ways than one: state of the art in data collection, in neural networks, in machine learning, and in its subset, deep learning.
It will take the finest minds at Google’s AI division, DeepMind, which developed the computer that is trouncing the world Go champion. It will take many fine minds, and then computers that surpass those fine minds.
Even so, such technology is not far off.
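To give a flavour of what 'machine learning' means here, the idea can be caricatured in a few lines. This is a toy single-neuron classifier over an invented six-word vocabulary and invented training examples; real systems are deep networks trained on vastly richer signals, and nothing below resembles any company's actual technology:

```python
# Toy illustration only: a single-neuron (logistic) classifier over a
# bag-of-words representation. Vocabulary and examples are invented.
import math

VOCAB = ["match", "weather", "holiday", "attack", "weapon", "target"]

def featurise(text):
    words = text.lower().split()
    return [1.0 if w in words else 0.0 for w in VOCAB]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Invented training data: label 1 means "flag for human review".
TRAIN = [
    ("great match and lovely weather", 0),
    ("booked a holiday in the sun", 0),
    ("plan the attack on the target", 1),
    ("buy a weapon for the attack", 1),
]

weights = [0.0] * len(VOCAB)
bias = 0.0
lr = 0.5

for _ in range(200):  # stochastic gradient descent on log loss
    for text, label in TRAIN:
        x = featurise(text)
        p = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
        err = p - label
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err

def risk_score(text):
    """Probability-like score that the text should be flagged."""
    x = featurise(text)
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

print(round(risk_score("weather for the holiday"), 2))  # low score
print(round(risk_score("the attack target"), 2))        # high score
```

The point of the caricature is that the system learns which signals matter from examples rather than from hand-written rules; scaling that idea to billions of users, and combining text with browsing and location data, is where the 'state of the art' comes in.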
It is often said that Facebook knows more about us than we realise. Well, the latest terrorist atrocities tell us that, at the moment, it knows less than we realise.
In a perverse way, those who fear encroachment on their privacy can take grim comfort from the Manchester terrorist attack: we have more privacy than is generally supposed.
Incidentally, Facebook also claims to be working on technology to read our minds. The planned application is innocent enough: the idea is that we will be able to dictate directly to our computers from our minds, bypassing the need to type. But wind the clock forward ten or so years and suppose that mind-reading technology is commonplace – that is an awful lot of data that could be analysed.
Larry Page, the CEO of Google's parent company Alphabet, has said that one of the objectives of its search engine is to feed us search results before we know we want them.
That is mind reading of sorts.
And if Google can do that, won’t that mean it will also know if we are planning some kind of atrocity?
AI can help win the war against terrorism as it stands at the moment – although we have no way of knowing how the dark web may respond to such technology.
But the dangers are obvious – all too obvious.
If we overreact to the threat posed by a tiny minority of extremists, and create tools that mean none of us can keep anything secret, are we not in some way granting the wish of those extremists, who want to provoke an overreaction in the West?
Never lose sight of the fact that terrorists want to sow discord in the West: if they can create distrust between Christians and Muslims – the vast majority of whom find the idea of violence abhorrent – then they will have succeeded.
And if fear of terrorism means we enter an age in which we have no secrets – in which even our thoughts can be revealed – is that such a good thing?
Maybe it would be. Maybe the world would be a better place if there were 100 per cent transparency, all the time. But we should not stumble into such a world; there should at least be a debate first.
Instead, we risk creating it by default.