There are no killer robots yet - but, says Tom Standage, regulators must respond to AI now.
Mention artificial intelligence (AI), and the term may bring to mind visions of rampaging killer robots, like those seen in the "Terminator" films, or worries about widespread job losses as machines displace humans. The reality, heading into 2019, is more prosaic: AI lets people dictate text messages instead of typing them, or call up music from a smart speaker on the kitchen counter. That does not mean that policymakers can ignore AI, however. As it is applied in a growing number of areas, there are legitimate concerns about possible unintended consequences. How should regulators respond?
The immediate concern is that the scramble to amass the data needed to train AI systems is infringing on people's privacy. Monitoring everything that people do online, from shopping to reading to posting on social media, lets internet giants build detailed personal profiles that can be used to target advertisements or recommend items of interest. The best response is not to regulate the use of AI directly, but instead to concentrate on the rules about how personal data can be gathered, processed and stored.
The General Data Protection Regulation, a set of rules on data protection and privacy introduced by the European Union in May 2018, was a step in the right direction, giving EU citizens, at least, more control over their data (and prompting some internet companies to extend similar rights to all users globally). The EU will further clarify and tighten the rules in 2019 with its ePrivacy Regulation. Critics will argue that such rules hamper innovation and strengthen the internet giants, which can afford the costs of regulatory compliance in a way that startups cannot. They have a point. But Europe's approach seems preferable to America's more hands-off stance. China, meanwhile, seems happy to allow its internet giants to gather as much personal data as they like, provided the government is granted access.
As AI systems start to be applied in areas like predictive policing, prison sentencing, job recruitment or credit scoring, a second area of concern is "algorithmic bias" - the worry that systems trained on historical data will learn and perpetuate the existing biases. Advocates of the use of AI in personnel departments (for example, to scan the resumes of job applicants) say using impartial machines could reduce bias. To ensure fairness, AI systems need to be better at explaining how they reach decisions (an area of much research); and they should help humans make better decisions, rather than making decisions for them.
A third area where AI is causing concern is self-driving cars. Many companies are now testing autonomous vehicles and running pilot "robotaxi" services on public roads. But such systems are not perfect, and in March 2018 a pedestrian was killed by an autonomous car in Tempe, Arizona - the first fatality of its kind. The right response is to require makers of autonomous vehicles to publish regular safety reports, put safety drivers in their cars to oversee them during testing and install "black box" data recorders so that investigators can work out what happened if something goes wrong.
In short, given how widely applicable AI is - like electricity or the internet, it can be applied in almost any field - the answer is not to create a specific set of laws for it, or a dedicated regulatory body akin to America's Food and Drug Administration. Rather, existing rules on privacy, discrimination, vehicle safety and so on must be adapted to take AI into account. What about those killer robots? They are still science fiction, but the question of whether future autonomous weapons systems should be banned, like chemical weapons, is moving up the geopolitical agenda. Formal discussion of the issue at a UN conference in August 2018 was blocked by America and Russia, but efforts to start negotiations on an international treaty will persist in 2019.
Get real
As for jobs, the rate and extent of AI-related job losses remain among the most debated, and uncertain, topics in the business world. In future, workers will surely need to learn new skills more often than they do now, whether to cope with changes in their existing jobs or to switch to new ones. As in the Industrial Revolution, automation will demand changes to education, to cope with shifts in the nature of work. Yet there is little sign that politicians are taking this seriously: instead many prefer to demonize immigrants or globalization. In 2019, this is an area in which policymakers need to start applying real thought to artificial intelligence.