As AI develops, it’s time to start making smart choices. Rohit Talwar, Steve Wells, Alexandra Whittington, April Koury, and Helena Calle draw on key messages from their recent book Beyond Genuine Stupidity – Ensuring AI Serves Humanity to highlight five critical issues and resulting choices facing us as we prepare for the full impacts of AI on the economy.
Technological Unemployment and The New Jobs Landscape
The AI technology vendors are struggling to hold a consistent line. On one hand, they are selling the return-on-investment case, predicated on headcount reductions. As this has become contentious, the new line is that AI will free people from routine tasks for more creative work and problem solving. Will employers follow that path? The evidence suggests most are opting for cost-base reduction.
The challenge for governments is to model a range of scenarios, including extreme ones. They can then start assessing the tax implications of different unemployment levels, explore policy options, and identify the “no regrets” actions worth taking because they are valid under all scenarios.
Reskilling the Workforce and Transforming Education
Generally, the provisions for retraining and lifelong learning are woeful. However, the facilities exist in schools and colleges, and there is no shortage of trainers. Exponential change requires an exponential increase in retraining – the price of inaction will be higher unemployment costs, rising mental health issues, and skill shortages.
For young children, the bulk of the jobs they’ll do probably don’t exist yet. We need to equip them with the skills to take up new opportunities: a greater emphasis on social and collaborative skills, conflict resolution, problem solving, scenario thinking, and accelerated learning.
Universal / Guaranteed Basic Incomes
There will inevitably be employment casualties from automation. How will people afford goods and services if they no longer have jobs? Many argue for provision of a guaranteed basic income (GBI). Countries including Canada, Finland, India and Namibia have been experimenting with different GBI models.
Governments will need to collaborate on different experiments and assess their impacts on funding costs, economic activity, the shadow economy, social wellbeing, crime, domestic violence, and mental health.
New Employers’ Responsibilities – Robot Taxes, Total Employment Responsibility, and Deferred Redundancy
As AI and other disruptive technologies are introduced, many issues will arise from the choices employers make. Will they retain the staff freed up by technology or release them? If unemployment costs rise, or GBI schemes are introduced, who will pay for them? One option is a “robot tax”, under which firms pay a higher rate of tax on the profits derived from increased automation.
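The robot-tax idea can be sketched numerically. The rates, the profit figures, and the method of attributing profit to automation below are all hypothetical assumptions for illustration; the source does not specify how such a tax would be calculated.

```python
# Illustrative sketch of a "robot tax": profits attributed to automation
# are taxed at a higher rate than ordinary profits. The rates and the
# attribution share are hypothetical assumptions, not from the source.

def tax_due(total_profit, automation_share, base_rate=0.19, robot_rate=0.30):
    """Split profit into ordinary and automation-derived portions,
    taxing the automation-derived portion at the higher robot rate."""
    automation_profit = total_profit * automation_share
    ordinary_profit = total_profit - automation_profit
    return ordinary_profit * base_rate + automation_profit * robot_rate

# A firm with 10m profit, 40% of it attributed to automation:
# 6m at 19% plus 4m at 30% gives a total bill of 2.34m.
print(tax_due(10_000_000, 0.4))
```

The hard policy question this sketch glosses over is the attribution share itself – deciding what fraction of a firm’s profit is “derived from increased automation” is far harder than applying the rates.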
Opponents of GBI schemes and robot taxes have yet to offer substantive alternative policies. Two options that have been suggested are: 1) a total employment responsibility – if your prior-year business turnover was one millionth of national GDP, you would be responsible for ensuring the employment of one millionth of the workforce; and 2) deferred redundancy – displaced workers stay on your payroll at full pay until they find another job.
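The proportional rule behind total employment responsibility is simple arithmetic, and can be made concrete with a worked example. The GDP and workforce figures below are illustrative assumptions, not from the source.

```python
# Illustrative sketch of the "total employment responsibility" rule:
# a firm whose turnover is fraction f of national GDP is responsible
# for ensuring employment of the same fraction f of the workforce.
# All figures are hypothetical, chosen only to show the arithmetic.

def employment_responsibility(firm_turnover, national_gdp, workforce_size):
    """Return the number of workers a firm must ensure employment for."""
    fraction = firm_turnover / national_gdp
    return fraction * workforce_size

# Example: a firm turning over 2 billion in a 2-trillion GDP economy
# (one thousandth of GDP), with a national workforce of 30 million.
jobs = employment_responsibility(2e9, 2e12, 30e6)
print(round(jobs))  # one thousandth of 30 million = 30000 workers
```

The linearity is the point of the proposal: the obligation scales automatically with the firm’s economic footprint, with no thresholds to game.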
It is easy to oppose such ideas but large employers and governments need to think now about policy alternatives for a world possibly needing a smaller workforce.
Ethics, Governance, and Ownership of the Technology
Is AI too important to leave its evolution to the private sector? Voluntary ethical charters are starting to emerge to govern the development and application of AI and robotics.
The challenge is that AI is recognised as a critical future technology by leading industrial nations such as China, Korea, Taiwan, and the USA. It is an economic battleground – ethics may not be a prime consideration in the race for AI superpower status.
In response, there is a growing argument for state regulation and oversight of AI. This would probably require a regulatory AI to conduct such a governance role as, in the relatively near future, the capabilities and reasoning of most AIs are likely to outstrip humans’ ability to monitor them.
Given all these challenges, an argument is also being made for governments to nationalise the ownership of AI intellectual property and licence it to the firms that deploy it.
In reality, the pace at which AI is advancing has far outstripped our ability to identify the potential impacts, assess the possible implications, and try out potential solutions. A genuinely stupid strategy would be to hope the problem goes away or is resolved by omnipotent market forces. A more forward-thinking option is to undertake a serious assessment of radical possible outcomes, develop policy options for the worst-case scenarios, and implement now the actions we know will be beneficial however the game plays out.
ABOUT THE AUTHORS
Rohit Talwar, Steve Wells, Alexandra Whittington, April Koury, and Helena Calle are futurists with Fast Future – a professional foresight firm specializing in delivering keynote speeches, executive education, research, and consulting on the emerging future and the impacts of change for global clients. Fast Future publishes books from leading future thinkers around the world, exploring how developments such as AI, robotics, exponential technologies, and disruptive thinking could impact individuals, societies, businesses, and governments and create the trillion-dollar sectors of the future. Fast Future has a particular focus on ensuring these advances are harnessed to unleash individual potential and enable a very human future. See: www.fastfuture.com