The Impact of Artificial Intelligence on Innovation, Jobs, and Future Global Prosperity

High-Level Expert Group Meeting

14 October 2017

Boston, USA

Chaired by William F. Weld

Introduction

At its 33rd Annual Plenary Meeting in Baku, Azerbaijan, the InterAction Council hosted an Expert Group Meeting on the “World Economy and the Future of Work” chaired by His Excellency, Mr. Olusegun Obasanjo. The Council returned to the impact of innovation on future work and a global economy that works for all at its 34th Plenary Meeting in Dublin, Ireland. The role of technology was a strong component of these discussions and it was decided that a deeper analysis of the role of artificial intelligence (AI) should be sought. Therefore, on 14th October 2017 in Boston, USA, the Council convened a High-Level Expert Group Meeting chaired by former Governor of Massachusetts William F. Weld on, “The Impact of Artificial Intelligence on Innovation, Jobs, and Future Global Prosperity.”

The purpose of this meeting was to consider the impact that technology will have on the way we live, consume, and relate to one another. While technological innovation continues to evolve at an exponential pace, it is imperative that global leaders and citizens alike understand both the associated opportunities and challenges. Questions concerning innovation, jobs and future global prosperity must engage multiple stakeholders to develop policy frameworks that can be applied in multiple national contexts. Indeed, as technology is global in nature, so too must be the solutions.

The Role of Artificial Intelligence in Political Discourse

Artificial Intelligence, as defined by John McCarthy, who coined the term in 1956, is “the science and engineering of making intelligent machines.” More technical definitions of AI commonly describe it as “the study and design of intelligent agents, where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.”[1]

Public and political discourse related to AI is intrinsically intertwined with a multitude of different policy challenges and specialisations: labour market planning, education, international security, intellectual property, redistribution of wealth, representative democracy, etc.

The election campaign that brought President Trump to the White House was dominated by concerns about unemployment and underemployment among an American public that felt their jobs had been torn out from under them. Part of Trump’s appeal, therefore, was his commitment to bring back manufacturing jobs and traditional industries like coal. The sources of these changes are difficult for the general public to grasp: Is it the result of free trade? Or is it the result of automation? Either way, the promise of a return to jobs from an earlier era clashes harshly with the reality of the AI revolution, in which the starkest estimates predict that 50 per cent of the jobs that currently exist in the United States will be supplanted by AI within the next ten years. While AI itself may not dominate elections, its impact on the future nature of work is a hotly contested and emotional issue amongst publics around the world.

Our collective task is to manage the speed and complexity of the transition that AI will bring, a transition that may outpace even the agricultural and industrial revolutions that preceded it. As Kai-Fu Lee, the former head of Google China and a leading Chinese expert on technology, tells us, artificial intelligence is the “singular thing that will be larger than all of the human tech revolutions added together, including electricity, the industrial revolution, internet, mobile internet, because AI is so pervasive.” We require collective policies on a global scale to facilitate billions of people transitioning from one era to another, while avoiding a new arms race.

Forward-looking policy proposals are critical so that states can be prepared when the tsunami of technological change, especially driven by artificial intelligence and related technologies, breaks over the existing world economy. The decentralization of powers that new technologies make possible will critically challenge governments in the decades to come. As the private sector becomes the leader on innovation, developing technologies that disrupt markets globally, states will find it harder to adapt. Given the broad range of impacts on innovation, government legislators and regulators must quickly adapt to the fast-changing environment by embracing agile governance. Namely, they must collaborate closely with business and civil society to truly understand what it is that needs to be legislated and regulated.

One of the challenges before us is how to translate the complexity and interconnectedness of the challenges and opportunities of AI into a policy discussion that policymakers without a deep background in the specifics of AI can understand.

Understanding Artificial Intelligence

The visualization of AI as a humanoid robot capable of becoming sentient is not how AI should be understood, according to the experts present. Rather, drawing on a multitude of analogies, they likened AI to an “idiot savant”: it can do one incredibly intelligent activity extremely well, but it is not capable of doing anything else. AI is essentially a tool that infers what the right value should be, based on the datasets that it is given. Consequently, it will not replace humans in all pursuits, nor is AI an end in itself; rather, it is a tool. There is a wide gap between the number of companies currently working on AI, which is growing exponentially, and the number of companies that actually have AI in production, which remains far smaller.

Where AI has been deployed by companies, there have been examples of it generating substantial economic benefits in terms of both revenues and jobs. Starbucks, for example, is considered a leader in AI, which it credits with generating growth of 15 to 17 per cent in the past two years with only 8 per cent growth in stores. This growth has also meant that Starbucks was the largest creator of jobs in the fast food industry in 2016, and it expects to create 240,000 jobs by 2021, with over 60,000 of those in the US.

There is also a misunderstanding of what types of jobs are most susceptible to being replaced by AI. Most conceive of AI as replacing low-skill manufacturing jobs; however, the technology is best poised to replace professions built around specific, narrow, skill-intensive tasks. These are often jobs in the higher wage brackets, such as accountants, radiologists, bankers, and lawyers, who will see specific sub-tasks shift to AI. Illustratively, in 2000, Goldman Sachs employed 600 equity traders; today, it employs two, supported by 200 computer engineers. Here it is also important to distinguish robotic automation from AI, and to point out that the revolution in robotics is not developing at the same speed as AI.

Debate persists about whether AI can sustain the accelerating pace at which it has been expanding. Some theorize that the trajectory will peak and plateau. For example, cooling systems have not progressed at the same speed as computational power, which has the potential to limit future increases in computational power. There is also a gap between the AI capability currently developed and how much AI is actually being actively deployed. Because it is such a powerful tool, the critical issue for global policymakers is how best to maximize the potential benefits and minimize the problems.

Morality and Ethics

Evolving technologies have the potential to be either a “destructive creation” or a “creative destruction,” depending on how they are deployed. How will we distinguish “good AI” from “bad AI”? And how do we harness the benefits of AI while mitigating the damage the technology can cause in the hands of those who seek to do harm? Ethics and morality need to be debated in determining how this resource is deployed, and that debate should shape the public policies developed in response.

“Popular tyranny” currently dictates the way content is consumed on the Internet. If a product, article, or post is popular, it will be shared, and its spread can be exponential. The programming of AI must reflect not only what is deemed popular by the masses, but must also consider the ethical, moral, and social implications of what AI is learning.

The power of algorithm-driven media is based on past actions. For example, companies are able to use AI-powered algorithms to increase efficiency in employment decisions. Through supervised learning, the “right answers” from past actions are used to train future actions and outcomes. If a company asks Amazon to sift through a critical mass of job applications, the system will look at past hiring trends and propagate those trends. If those trends tended to privilege white males, for example, the algorithm would continue to propagate that bias. As such, AI, as a reflection of past human judgement, can lead to undesirable outputs unless we intervene to address them.
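The bias-propagation mechanism described above can be sketched in a few lines of code. This is a deliberately minimal illustration with entirely hypothetical data, not a depiction of any real company’s system: the “model” simply learns each group’s historical hire rate and recommends applicants accordingly, showing how a skewed past becomes a skewed future.

```python
# Hypothetical historical hiring decisions: (group, hired).
# Group "A" was favoured in the past; group "B" was not.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 30 + [("B", False)] * 70)

def train(records):
    """Supervised learning reduced to its simplest form:
    learn each group's historical hire rate from labelled examples."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [hired for g, hired in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def recommend(rates, group, threshold=0.5):
    """Recommend an applicant if their group's learned hire rate
    clears the threshold -- individual merit is never consulted."""
    return rates[group] >= threshold

model = train(history)
# Two equally qualified applicants receive opposite recommendations,
# purely because of the skew in the training data.
print(recommend(model, "A"))  # True
print(recommend(model, "B"))  # False
```

A real screening system would use many more features, but the dynamic is the same: whatever pattern the labels encode, including discriminatory ones, is what the model learns to reproduce.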

Technological revolution is challenging how wealth is distributed, concentrating significant wealth in the hands of a few. The social security systems that have been developed in much of the world were designed to meet the challenges of a significantly different labour market and will need to be adapted to the more flexible, changing, and diversified fashion in which work is performed in the future. Some have viewed Guaranteed Income Assistance (also known as Basic Income) as a possible solution. The Canadian province of Ontario and the Government of Finland are currently experimenting with Guaranteed Basic Income, which sees the government provide a payment to eligible families or individuals to ensure a minimum income level, regardless of employment status. Previous Canadian experiments in the 1970s found that with the exception of women providing care to their young children and students continuing their education, providing a basic income did not impact willingness to work, but did produce a number of benefits, such as reduced hospitalization rates, particularly related to mental health.

Global and Individual Security

Legislators and regulators need to consider who will hold accountability for AI systems. Should the creators of AI, as the owners of the Intellectual Property (IP) of that software, be held liable should the AI become destructive? To what extent can a human be responsible for an AI that behaves in a way the human did not intend or foresee? Furthermore, AI is becoming the foundation for basic computer programming, reducing the need for human coders; it is foreseeable that coding itself will be driven by AI in the future.

Issues surrounding data privacy and security relate to the way in which data is stored, namely through centralized systems, whether cloud computing or physical servers located throughout the world. Over the next five years, data storage will shift from servers to cloud computing on account of the sheer volume of data being produced and the finite capacity of today’s servers. As such, the risks associated with data privacy and security will continue to evolve, and the question of how to protect personal data while providing the best datasets for AI will be an area in need of further regulation.

Not only will AI have an impact on personal security, it will have a profound effect on questions of global security. A proliferation of automated weapons and an arms race are anticipated, as militaries seek to keep up with the advances being made by others. It is also anticipated that all future military technologies will be based on AI. How, too, does the concentration of these technologies in China and the United States affect geopolitics or global economic development when a small number of countries possess these technologies while others do not?

International criminal syndicates, which are entrepreneurial by nature, willing to take risks, and well supplied with capital, have been successful at using AI. Attracting individuals with the in-demand skills to combat such criminal activity has been a challenge for law enforcement agencies.

Education and Training

AI has the potential to help us learn incredible new things about the world around us, but AI will also pose significant policy challenges in the field of education and training.

In many of the industries touched by AI there is an employment gap. Jobs are being created, but there are not enough workers with the necessary skills (usually in STEM) to fill those positions. This is occurring at the same time that significant educational resources exist: Massive Open Online Courses (MOOCs), online learning, and the like. Individual workers require pathways to navigate these resources to effectively receive the benefit of retraining. The tools that we have need to be better amalgamated into a system that can match jobseekers with the right type of training to facilitate their re-entry into the workforce. For example, the average person entering the workforce in 2017 will have at least six careers; it is not viable to constantly acquire university degrees. Traditional four-year degree models are not appropriate for each career shift, and alternative models must be developed that are flexible and tailored to the individual.

To effectively identify where future jobs are going to be, we need to design better metrics. Existing labour market projections in the United States, for example, rely on a methodology developed in 1954, when it was expected that a worker would remain in one area for their entire career. The methodology needs to break down silos to create data that reflects the way work happens today. Consequently, education and training need to shift towards identifying which skills are future-resilient and which are not, directing investment towards those skill sets that are.

Recommendations

  1. Governments should address how to thoughtfully introduce ethics training in public and private defence industries, as well as emphasize the role of morality in the development of artificial intelligence and foster civic engagement. The InterAction Council’s Universal Declaration on Human Responsibilities should be updated to examine the ethical considerations posed by emerging technologies.
  2. A “Social Corps” could be developed to transfer these skills to developing nations to mitigate the inequitable effects of these emerging technologies.
  3. Governments should seek to limit the monopolistic power of high technology companies and should explore antitrust regulations to limit their powers, as well as encourage the portability of data[2] and oppose data centralization within a handful of high-technology oligopolies.
  4. Governments and educational institutions should encourage generalist education, emphasizing future-resilient skills, so that individuals are better prepared for the future work economy, as well as adopt systems to personalize education, generate literacy on AI issues, and encourage lifelong learning.
  5. Governments should restrict the “micro-targeting” of individuals by firms using AI to influence how individuals vote or make purchases, while at the same time maintaining the flexibility for research on populations.

[1] Science Daily. “Reference Terms: Artificial Intelligence,” https://www.sciencedaily.com/terms/artificial_intelligence.htm. (Accessed: 10 November 2017).

[2] Data portability: enabling consumers to carry their data from one platform to another in a machine-readable format.