Anything you can do, AI can do better. Or can it?
By Dr. Taryn Morris
Rapid technological development will displace millions of workers this decade. This could be the opening line of any one of hundreds of articles currently doing the rounds regarding the fear of job loss due to artificial intelligence (AI).
This is in fact the first sentence of a piece by Warren Brown, published in the Washington Post on 20 December 1981. The article, titled “Job growth in ‘80s linked to computer”, fingered computers as the culprit for predicted reductions in employment growth during the 1980s. The fear that machines will replace the need for men and women in the workplace, however, is far older than one might think.
Almost 100 years ago, in the 1920s, John Maynard Keynes, one of the most influential economists of the 20th century and the father of Keynesian economics, described the technological revolutions of the time as “a disease” that would result in widespread “technological unemployment”. Even earlier, in the early 1800s during the first industrial revolution, the Luddites protested the introduction of technology they believed threatened their jobs by smashing the new machinery being brought into the textile industry.
With growing fear that AI will jeopardise millions of present-day jobs as we know them, it seems we are riding a new wave of an old fear. These sentiments were clear in a recent article that cited a 2018 PwC report on the predicted impact of AI and related technologies on job loss. The article reported that more than 10 million workers in the UK were at “high risk of being replaced by robots within 15 years”. What the article did not report, however, was the report’s accompanying prediction: the jobs displaced by AI were likely to be balanced out by the additional jobs created, resulting in a broadly neutral net effect of AI on employment.
It is not only the fear of technological unemployment that is resurgent, but also the great debate between pessimists and optimists over the effects that rapid technological change may bring. In one camp, the pessimists believe that the pace at which digital technology is changing is unprecedented, and that it will leave millions unemployed in the short run while society and global policy play catch-up. The jobs predicted to be most under threat from automation are those that are routine and easily defined. These include repetitive manufacturing and mining tasks, as well as tasks in the banking, call-centre, service and even taxi industries. Extreme pessimists, who include Elon Musk and the late Stephen Hawking, go as far as to fear a rebellion of artificial super-intelligence against the human race, reminiscent of Michael Crichton’s 1973 nail-biter Westworld, sparking a different kind of AI-phobia altogether.
Optimists, on the other hand, believe that, as we have seen throughout history, technological unemployment is often short-lived and low-impact: employment, and societies as a whole, will adapt to and benefit from technology in the long run. Some predict that redirecting repetitive, time-consuming and often dull tasks to machines will allow humans to focus more time and energy on areas of work where machines do not perform well. These include sectors built on human interaction and customer experience, and tasks that require an understanding of context, judgment calls, creativity or humour. Skills such as communication, critical thinking, decision-making, creativity and entrepreneurship are likely to be the most useful in the future workplace.
George Boole, who in the mid-1800s devised the theory of logic that underpins the binary language of modern computing, once said: “If you spend time doing work that a machine could do faster than yourself, it should only be for exercise”. By letting robots do what robots do best, we might just be able to let humans do what humans do best … and that is to be more human.