How Big Tech is struggling with the ethics of AI

After Jack Poulson quit Google, he was ushered into a meeting with Jeff Dean, the head of the company’s artificial intelligence division.

Mr Poulson, a former Stanford professor who worked on machine intelligence for Google, had resigned in protest at “Project Dragonfly”, a plan to develop a censored search engine for China, saying the company had promised just two months earlier not to design or deploy technology that “contravenes [...] human rights”.

The meeting, according to Mr Poulson, was supposed to make him “feel better about Google’s ethical red lines”.

“But what I actually found was the opposite,” he said. “The response was, ‘[human rights organisations] are just outsiders responding to public information, [and] we just disagree that that’s a violation.’ There was no respect for them on this issue.”

He left the company the next day. In recent months, other Google employees have protested at its bid for a Pentagon cloud computing contract, and its involvement in a US government AI weapons programme.


The development and application of AI is causing huge divisions both inside and outside tech companies, and Google is not alone in struggling to find an ethical approach.

The companies that are leading research into AI in the US and China, including Google, Amazon, Microsoft, Baidu, SenseTime and Tencent, have taken very different approaches to AI and to whether to develop technology that can ultimately be used for military and surveillance purposes.

For instance, Google has said it will not sell facial recognition services to governments, while Amazon and Microsoft both do so. The companies have also been attacked for algorithmic bias in their programmes, where systems trained on skewed or flawed data inadvertently reproduce that bias in their outputs.
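
To illustrate the mechanism being criticised, the sketch below shows how a system trained naively on skewed historical decisions simply reproduces that skew. It is a minimal, hypothetical example: the lending scenario, group names and approval threshold are all invented for illustration, and real systems are far more complex.

```python
# A minimal, hypothetical sketch of how skewed historical data propagates
# bias: a model fitted to past decisions reproduces whatever skew those
# decisions contained. All data, group names and thresholds are invented.

from collections import defaultdict

# Invented historical loan decisions: (applicant_group, was_approved)
history = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# "Training": tally approval rates per group from the biased record.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def predict(group: str) -> bool:
    """Approve if the group's historical approval rate exceeds 50%."""
    approvals, total = counts[group]
    return approvals / total > 0.5

# The model mirrors the skew in its inputs: group_a is favoured simply
# because group_a was favoured in the past, not on any individual merit.
print(predict("group_a"))  # True  (75% historical approval)
print(predict("group_b"))  # False (25% historical approval)
```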

In response to criticism not only from campaigners and academics but also their own staff, companies have begun to self-regulate, setting up “AI ethics” initiatives whose roles range from academic research — as in the case of Google-owned DeepMind’s Ethics and Society division — to formulating guidelines and convening external oversight panels.

These efforts have produced a fragmented landscape of initiatives that both supporters and critics agree have yet to deliver demonstrable outcomes beyond igniting a debate about AI and its social implications.

In Google’s case, its external advisory council on AI lasted only a week: employees revolted at the appointment of Kay Coles James of the conservative Heritage Foundation think-tank, and the company shut the council down earlier this month.

Luciano Floridi, who was one of the advisers and is director of the Digital Ethics Lab at Oxford, said the board had been planning to help Google navigate its trickiest dilemmas. “Some projects are perfectly legal, but may not be what people expect from a company like Google or the values it has committed to,” he said.

The most common practice has been to publish company principles for ethical AI. Microsoft, Google, IBM and others have all published their own ethical guidelines, while research bodies such as the Future of Life Institute, which developed the “Asilomar principles”, have tried to get scientists from around the world to sign on.

But Mr Poulson said these ethics boards and principles lacked teeth. “When it comes to a major decision, literally only the CEO can say no,” he said.

AI Now, a non-profit set up to guide ethical development of the technology, also pointed to the lack of transparency and enforcement in its latest report.

“Ethical approaches in industry implicitly ask that the public simply take corporations at their word when they say they will guide their conduct in ethical ways,” the report said. “This does not allow insight into decision making, or the power to reverse or guide such a decision.”

In a clear example of this conflict, DeepMind shut down an independent review panel last November, after its health division was absorbed by Google. The panel had been set up just two years earlier to scrutinise the company’s sensitive relationship with Britain’s National Health Service.

Many companies have also joined wider bodies to work across academia, business and civil society. One such effort, the Partnership on AI, was founded in 2017 by Google, DeepMind, Facebook, Amazon, Microsoft and Apple. The partnership now numbers almost 90 groups, half of which are companies such as McKinsey, while the rest are non-profits, academics and other institutions.

“I know there are projects building repositories of AI ethics cases, or writing a white paper on the use of AI in judicial systems,” said Francesca Rossi, head of ethics at IBM and a founding member of the Partnership. “I think the networking is very important too, the fact that these partners can openly connect with each other and work together in a way that is not available in any other place.”

Often the same people — usually executives of well-known tech companies — reappear on several different ethics boards. For instance, Mustafa Suleyman, a co-founder of DeepMind, runs the company’s own ethics unit, while also sitting on parent Google’s Advanced Technology Review Council and co-chairing the Partnership on AI.

Similarly, Mr Floridi, the Oxford professor, has sat on ethics boards for Cisco, Facebook, Google, IBM, Microsoft, and Tencent.

“I have seen that more and more initiatives are coming out at the same time, and at this point, I think the AI ethics community recognises the need for co-ordination and convergence to be as efficient as possible,” Ms Rossi said.

Meanwhile, in China, companies are taking divergent approaches to ethical AI.

De Kai, a computer scientist at Hong Kong’s University of Science and Technology and a member of Google’s short-lived ethics council, said that Chinese companies are generally more concerned with solving real-world problems as a way to do good than with abstract ethical principles.

How they define “doing good”, however, has drawn intense criticism from some quarters. AI companies such as CloudWalk, Yitu and SenseTime have partnered with the Chinese government to roll out facial recognition and predictive policing, deployed particularly against minority groups such as the Uighur Muslims.

In March, Robin Li, Baidu’s chief executive, urged the “contribution of Chinese wisdom” to the ethics debate, and emphasised that providing a “good life to common people” was the ultimate goal. Tencent declares on its website that one of its goals is to “use technology to accelerate the development of the public good”.

Ultimately, experts say the field is still nascent, and a joint approach between the private and public sectors is required to build consensus.

“Unless you have an open debate and try lots of different experiments, how can we identify a solution?” said Mr Floridi. “We have three tools — law, self-regulation and public opinion. Let’s use them all.”

Source: Ft.com
