Forget North Korea and nuclear weapons: the thing you really have to fret about is North Korea building quantum computers. And forget fears over nuclear proliferation: the next war will be fought with AI, say Elon Musk and Vladimir Putin. Maybe the crowd is an even bigger threat than either.

Tune into radio stations in the US and you will hear one listener after another calling in with the view that the US needs to bomb North Korea. Nuclear weapons and AI are indeed threats to humanity, but sometimes, when you hear some of the views rattling around the blogosphere, or on the radio, you wonder if the biggest threat of the lot is the crowd. Do we really want to bomb a country with nuclear weapons – really? And can someone alert the US, and those who call for bombing North Korea into submission, to the reality that it is a perceived military threat from the US that keeps Kim Jong-un in power? How else do you imagine that Kimmy remains so popular in a country where government policy is responsible for mass poverty?

For that matter, does Trump really want to fight North Korea by trying to ostracise its trading partners – countries such as India, Russia, Germany and, of course, China? Declare trade war on China and the result will be global recession; push China into the cold and the odds of military conflict between the US and China increase manyfold.

Assuming we don’t see crass stupidity – which Trump, egged on by his electorate, may be capable of – North Korea poses a much bigger threat to the world for a quite different reason. It has a unique combination of circumstances: very few of its citizens are connected to the Internet and yet it boasts some of the best computer hackers in the world. It can happily hack into computer systems across the world – remember, it may well have been behind the WannaCry attack – without fear of reprisal, because it doesn’t have any computer systems worth hacking into.

And the single biggest feature of quantum computers, when they are developed, is that they will be able to break most of today’s public-key encryption – the mathematics that secures everything from online banking to state secrets – in hours rather than millennia. The only defence against quantum computers may well be quantum computers – so it is in quantum computing that the next arms race may occur.
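To see why code-breaking is the headline threat, here is a minimal sketch using a toy RSA key built from tiny primes. The numbers, and the trial-division “attack”, are purely illustrative stand-ins: against a real 2048-bit key, the factoring step is hopeless on classical hardware, and it is precisely that step a large quantum computer running Shor’s algorithm would make feasible.

```python
# Toy RSA with tiny primes – purely illustrative; real keys use ~2048-bit moduli.
p, q = 61, 53
n = p * q                  # public modulus (3233)
e = 17                     # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

msg = 42
cipher = pow(msg, e, n)    # encrypt with the public key

# An attacker who can factor n can rebuild the private key.
# Trial division only works here because n is tiny; Shor's algorithm on a
# large quantum computer would play the same role for real-sized keys.
def factor(n):
    f = 2
    while n % f:
        f += 1
    return f, n // f

p2, q2 = factor(n)
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
recovered = pow(cipher, d2, n)
assert recovered == msg    # the attacker reads the message
```

The security of the whole scheme rests on factoring `n` being hard; remove that assumption and the private key falls out in a few lines.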

Or maybe it will be in AI.

Vladimir Putin thinks so.

In Russia, yesterday, it was Knowledge Day – we could do with more days like that in the West – and speaking to school students, Mr Putin said: “AI is the future, not only for Russia, but humankind.” He added: “The one who becomes the leader in this sphere will be the ruler of the world.”

He continued, “when one party’s drones are destroyed by drones of another, it will have no other choice but to surrender.”

Some wag on Twitter pointed out that Russia and China are pouring state money into AI, while Mr Trump invests more money in coal.

Back in July, China presented a plan to create a domestic AI industry worth $150 billion. According to The New York Times: “The world’s second-largest economy will be investing heavily to ensure its companies, government and military leap to the front of the pack in a technology many think will one day form the basis of computing.” Apparently, the victory of the Google AI system developed by its British subsidiary DeepMind over the world’s best Go player had a profound impact on China’s leadership.

The trouble is, the Chinese investment comes at a time when the Trump regime is planning to cut back on the funding of science – whoops.

As for Elon Musk, he has clarified what he means when he warns that killer robots and AI pose the greatest threat to humanity.

In a series of tweets, he said: “Competition for AI superiority at national level most likely cause of WW3 imo.”

In another tweet he stated: “Govts don't need to follow normal laws. They will obtain AI developed by companies at gunpoint, if necessary.”

He also said that the biggest threat arises “if it [AI] decides that a pre-emptive strike is most probable path to victory.”

So, what is the answer?

Maybe it is in creating open standards. If the leading AI developers in the world were to jointly create an open standard, then that standard could lead the world in the advancement of AI. And if the experience of Linux is anything to go by, parties will only be able to participate in the open standard if other parties trust them – the suspicion that they will use the open standard to develop a proprietary system will lead to them being ostracised by the rest of the community. IBM learnt that the only way to become an expert on Linux was to totally embrace its non-proprietary nature. The same will apply to an open standard on AI.

As for quantum computers hacking into just about anything, it seems there are only three possible solutions. Solution one: we migrate to blockchain, which may provide a defence. Solution two: we opt for total transparency – there is no point in stealing information if it is not a secret – as opposed to doing a CIA and keeping secret the information that led to the WannaCry attack. Solution three: we rewind the clock and ditch the digital age – although the prospects of that happening are not good; after all, we have totally failed to reverse the move into the age of nuclear weapons.
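To illustrate what the blockchain idea actually buys you, here is a minimal hash-chain sketch in Python – a toy, not any real blockchain protocol. Each block commits to the previous block’s hash, so any rewriting of history is at least detectable. It is worth flagging, though, that the hash and signature schemes real blockchains rely on are themselves weakened by quantum attacks, so this is a partial defence at best.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first block

def make_block(data, prev_hash):
    # Each block commits to its own data and the previous block's hash.
    digest = hashlib.sha256(json.dumps([data, prev_hash]).encode()).hexdigest()
    return {"data": data, "prev": prev_hash, "hash": digest}

def verify(chain):
    # Recompute every hash; any edit to past data breaks the chain.
    prev = GENESIS
    for block in chain:
        expected = hashlib.sha256(
            json.dumps([block["data"], prev]).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain = []
prev = GENESIS
for record in ["alice pays bob", "bob pays carol"]:
    block = make_block(record, prev)
    chain.append(block)
    prev = block["hash"]

assert verify(chain)                      # untampered chain checks out
chain[0]["data"] = "alice pays mallory"   # rewrite history
assert not verify(chain)                  # ...and the tampering is detected
```

The point is that a blockchain gives tamper-evidence, not secrecy: it tells you the record has been altered, which fits the transparency argument above better than it fits a defence against espionage.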