Special report: AI technologies present complex ethical challenges and early hopes for a multilateral governance framework, but a delicate balance is needed to avoid hampering AI innovation

The grading algorithm was supposed to stop teachers exaggerating grades when the UK’s lockdown meant end-of-year exams were cancelled, but the result was anything but robust.

In August 2020, a scandal rocked the British government after the algorithm reduced marks for around 40% of college-leaving students from their teachers’ predicted grades, some of whom stood to lose out on their first-choice university.

Much of the controversy stemmed from how the algorithm was weighted.

In a bid to emulate university intakes in non-pandemic years, the regulator had factored in each school’s historical performance, but the approach helped ingrain existing inequities.

More pupils at independent schools were awarded top grades, while more students from lower socio-economic backgrounds were downgraded and missed out on a pass, according to the Independent.
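The mechanism can be illustrated with a minimal, hypothetical sketch, not the regulator’s actual model: if each school’s teacher-ranked students are simply mapped onto that school’s historical grade distribution, a strong cohort at a historically weak school cannot outperform its predecessors.

```python
# Hypothetical illustration of a grade-standardisation approach of the kind
# described above; the function, names and distribution are invented, not Ofqual's model.
def assign_grades(ranked_students, historical_distribution):
    """Map teacher-ranked students (best first) onto the school's historical
    share of each grade (best grade first), e.g. {"A": 0.2, "B": 0.2, ...}."""
    n = len(ranked_students)
    grades = {}
    cumulative = 0.0
    index = 0
    for grade, share in historical_distribution.items():
        cumulative += share
        upper = round(cumulative * n)
        while index < upper:
            grades[ranked_students[index]] = grade
            index += 1
    return grades

# A historically weak school receives few top grades, however strong this cohort is.
print(assign_grades(["s1", "s2", "s3", "s4", "s5"],
                    {"A": 0.2, "B": 0.2, "C": 0.4, "D": 0.2}))
# {'s1': 'A', 's2': 'B', 's3': 'C', 's4': 'C', 's5': 'D'}
```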

UK prime minister Boris Johnson tried a layman’s defence, blaming a “mutant algorithm” for the error and restoring teachers’ marks. But his attempt to deflect blame goes to the heart of the debate surrounding AI’s ethical implementation.

The country’s AI-focused Alan Turing Institute defines the problem: when people make big judgement calls, they take responsibility for how things turn out.

But, as the exams drama showed, the rise of AI has blurred this accountability.

The effects can be calamitous. Social media network Facebook’s algorithm connects much of the global internet community, but its judgement also stoked one of the 21st century’s most nefarious incidents: the persecution of Myanmar’s Rohingya community.

Facebook’s AI permitted a systematic campaign of posts propagating hate speech, betraying its lack of perspective in a nation where digital technology is still relatively new.

It has since admitted the platform helped “foment division and incite offline violence”, facilitating a genocide fuelled by sectarian hatred.

The case is a terrifying example of big tech’s AI abetting state-enabled atrocities.

Both the UK and Myanmar examples violate the Turing Institute’s definition of ethical AI on multiple counts: they denied those affected any recourse against the decisions, engendered bias or discrimination, and yielded opaque results that undermined public trust.

But they also show the challenge of applying ethical AI in practice.

A global crusade is underway to nail down rules for everyone to follow so AI cannot be used to trample human rights. Ethics experts can take heart from earlier multilateral efforts such as the Sustainable Development Goals or the Geneva Conventions.

But it will be hard work lining up countries with diverging interests and norms and, as stakeholders ramp up their efforts, the potential for AI-driven controversy grows with them, as the dealflow over the past year demonstrates.

GCV Analytics has counted 155 corporate-backed rounds for AI and machine learning companies in the first three quarters of 2020, raising $4.6bn in funding.

That equates to over 60% of the $7.4bn raised across the whole of 2019.

Taken against deal database PitchBook’s figures for the second quarter of 2020, when VCs invested a total of $12.6bn, largely on the back of larger deals for unicorns in the US and China, the data suggest about 17% of AI venture dollars came through corporate-backed rounds.
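A rough back-of-envelope check of those figures is set out below; the precise basis for the 17% estimate is not broken out in the source data, so the final line is indicative only.

```python
# Back-of-envelope check of the funding figures quoted above (amounts in $bn).
corporate_backed_q1_q3_2020 = 4.6   # GCV Analytics, corporate-backed AI rounds, Q1-Q3 2020
corporate_backed_full_2019 = 7.4    # GCV Analytics, corporate-backed AI rounds, full-year 2019
total_ai_venture_q2_2020 = 12.6     # PitchBook, all AI venture investment, Q2 2020

print(f"Nine-month 2020 total as share of full-year 2019: "
      f"{corporate_backed_q1_q3_2020 / corporate_backed_full_2019:.0%}")        # ~62%, i.e. "over 60%"
print(f"17% of the Q2 2020 venture total: ${0.17 * total_ai_venture_q2_2020:.1f}bn")  # ~$2.1bn
```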

With robotics and other deep AI applications beginning to win traction, the need for strong governance can only increase, but squaring this with geopolitical and financial motives is not straightforward.

Palantir and the investor’s dilemma

On 30 September, one of the most contentious AI developers went public at a $16bn valuation.

Palantir has drawn criticism for offering its technology to secret intelligence services and helping them gather information for surveillance, having reportedly been seeded in part by In-Q-Tel, the strategic investment affiliate of the US intelligence community.

Palantir exploits machine learning to discern patterns tucked away in incomplete datasets – the Guardian has reported its software tracked roadside bombs in Iraq and revealed the detonator was hidden in garage-door handles.

But the market for its services extends far beyond state-sponsored analysis. The addressable market for commercial applications of Palantir’s software is pegged at around $56bn, compared with $63bn for governments, according to Forbes, which has named confectionery supplier Hershey as a client.

The controversy attests to the need for regulation, but also shows how new rules might constrain innovative forms of AI.

Above: Machine learning has state-sponsored and commercial applications

A source from a Europe-based venture fund said: “It does show that if there needs to be a standard, it needs to be one that applies to everyone.

“As a startup, you have to put more than enough effort into impactful technologies without worrying about the downside of your application. You have to give enough opportunity to develop AI technologies that are used for good, also.”

Palantir may seem like an extreme case, but AI’s dual uses will often muddy the waters for governance.

And problems will escalate as major economies pursue general AI – computers capable of learning all human intellectual tasks.

Text-generating models such as GPT-3, published by OpenAI, the deep AI developer backed by Microsoft, have wowed the industry, automatically writing posts on online discussion boards and even an artificial report on Brexit for UK newspaper The Guardian.
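Access to GPT-3 itself was gated behind OpenAI’s beta programme at the time, but the basic mechanics of prompting such a model can be sketched with the openly available GPT-2 via Hugging Face’s transformers library; the prompt here is purely illustrative.

```python
# Minimal text-generation sketch using the open GPT-2 model as a stand-in for
# GPT-3, whose API access was restricted to beta testers at the time.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # downloads the small GPT-2 model

prompt = "The ethical implications of artificial intelligence are"
outputs = generator(prompt, max_length=60, num_return_sequences=1)

print(outputs[0]["generated_text"])  # prompt followed by the model-written continuation
```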

Without regulation, OpenAI’s GPT-3 system could be put to immoral ends – falsifying official documents to spam email accounts, for instance, or upholding the official narrative in autocratic states. Combined with AI-generated video, voice and photos, such text creates powerful opportunities for fake news and fraud.

With GPT-3 undergoing beta testing, Microsoft has licensed the technology and, in September, joined OpenAI to warn against prescriptive boundaries on the export of new technologies from the US. Instead, the companies suggested that an automated method should help decide whether exports are safe.

One of GPT-3’s discussion board responses read: “It may be hard to imagine that some events, even though they are bad in the moment, can lead to an overall better life. So you need a context for this.”

According to the Turing Institute, an ethical framework for AI should urge developers to consider the potential impact from the outset of the product design process.

But how thorough can these checks be, in practice, given that a startup’s priority is to secure revenue and funding?

And where does the risk of malicious use override the prerogative to build genuine, monetisable products?

Even the most innocuous AI product could raise concerns on closer inspection. A 3D printer used for manufacturing parts can also churn out firearm components for guerrillas – Google already lists step-by-step guides for printed gun parts, and a range of ready-made designs can be found through 3D printing communities.

Above: 3D printers can be used for good or evil

Controls will require international agreement, but that is by no means assured – the UN Paris Agreement on climate change brokered in 2015 was a colossal multilateral effort, and followed an attempt derailed by dissenting nations in 2009. Even then, the US quit the accord, though President-elect Joe Biden has said the country will rejoin.

Similarly, agreement on AI will be undermined if global powers cannot resist racing to the bottom to seize the economic and security benefits of new emerging technologies.

In September 2020, the Trump administration announced plans to reinterpret the multilateral Missile Technology Control Regime so that clearance for the export of weaponised drones was no longer subject to a “strong presumption of denial”, according to the Arms Control Association. It is unclear what Biden will do.

Michael Horowitz, a professor of political science at the University of Pennsylvania, has reportedly argued the move was necessary because China had captured a substantial slice of the military drone trade, including exports to some US allies.

Joanna Bryson, professor of ethics and technology at the Hertie School in Berlin, is among the academic faculty informing the global debate on ethical AI. Speaking to GCV, she critiqued the misconception that autonomous machines could themselves be responsible for bad judgements, and stressed responsibility always lies with AI creators or operators.

She said: “AI itself cannot be responsible. We are not there to make responsible AI, we are here to make sure that the creators are responsible for their AI.”

Bryson’s specialism verges on a field known as digital governance, which seeks to ensure governments and companies cannot corrupt the use of digital technologies. However, she noted that financially motivated businesses were often more aware of the potential dangers than they are given credit for.

“One of the really interesting questions to me is that it is not like you can just trust a government – there is not a benign bedrock of government and then black masks who are capitalists.

“Quite often there is a lot of corruption and problems in government as well, and by contrast there are good people working in companies that both want the world to function and recognise their company will be unsuccessful if the rest of the world falls over.

“So it is not about good versus evil but you want to help everyone get on top of the complexity and regulatory challenge.”

Emerging economies: the Indian and Chinese perspective

In Bryson’s view, an effective watchdog must be equipped to watch over governments and companies on AI’s ethical implementation.

Multilateral efforts are underway. Notably, the Global Partnership on Artificial Intelligence (GPAI) has signed up the EU along with 14 other countries, including the US, following groundwork undertaken by the OECD. But reaching agreement will be difficult; in GPAI’s case, the absence of China points to Beijing’s reluctance to open diplomatic channels.

A stark warning came in a blog post for MIT Technology Review in September from Abhishek Gupta, founder of the Montreal AI Ethics Institute, and his research colleague Victoria Heath.

Above: Joanna Bryson

Unless a truly global perspective can be woven into its definition, the pair argued, ethical AI will fail to overcome western biases. On the other hand, the wide international recognition of AI’s potential for harm raises hope for agreement.

Bryson pointed to the Universal Declaration of Human Rights and later efforts by the OECD as historical examples where China has climbed on board.

She said: “These are so fundamental to establishing what we [as a global society] have to do, so I think they should be the examples for a baseline [on AI governance].”

Victoria Lennox, a doctoral candidate aligned with University of Oxford’s Digital Ethics Lab, was optimistic an agreement could be found, arguing: “The development of an international framework for ethics and AI is not out of the question.

“With the UN sustainable development goals, we see an example of global cooperation leading to the development of a globally recognised framework.

“While adoption and implementation is another matter altogether, the challenges and opportunities of advanced digital technologies, while impacting the world unevenly, nevertheless have global implications and necessitate global coordination, cooperation, governance and enforcement.”

Above: Victoria Lennox

China is often characterised as a potential AI menace, with a high-throughput data-gathering industry that has cheaper labour costs than its rivals, and laws that prioritise the state over individuals.

But Beijing wants to be seen to be taking AI ethics seriously and will be wary of losing credibility with its citizens now that AI-driven facial recognition underpins many of its activities, including surveillance and social credit scoring.

Domestic scrutiny of Chinese AI is increasing. Guo Bing, an associate professor of law at Zhejiang Sci-Tech University, sued a safari park in November 2019 over mandatory facial recognition at its entrances, which forced visitors to bear the risk of personal data breaches. It was China’s first legal challenge against facial recognition, brought under the country’s consumer protection law, and prompted the safari park to introduce fingerprint scanners as an alternative, according to the State of AI report.

Beijing’s state council ostensibly embraced the ethical AI debate in 2017 when adopting its New Generation AI Development Plan, the strategic blueprint that has ignited accelerated growth.

And China now has a government-endorsed document – the Beijing AI Principles – intended to embed respect for human privacy and dignity, and a willingness to start international dialogue.

One of the principles, harmony, comes from the Chinese philosophical tradition, where it broadly represents respect for nature and alignment among all human beings. Harmony already offers justification for Beijing’s centralised ethos, in which the state prerogative can often embody that of its citizens.

In AI ethics, the principle seeks to establish a comprehensive governance ecosystem to avoid a “malicious AI race”, through collaboration across nations and industries.

Bryson said: “One of the biggest complaints I hear from a Chinese perspective is that talk in the West focuses on supporting and defending the individual, which is of course important, but there is not so much talk about supporting and defending the state as the collective of all a nation’s individuals.”

China can also be expected to consult its major tech corporates, Baidu, Alibaba and Tencent. GCV Analytics indicates that rounds and exits involving at least one of the three accounted for $1.1bn in 2019, and $2.4bn the previous year. The trio will also be eager to export more Chinese AI technology, which means winning foreign trust.

The US, however, ordered ByteDance to divest the US subsidiary of its social video portal TikTok via a presidential executive order in August 2020. The penalty was ostensibly to contain security threats, but critics pointed to similar moves against the likes of telecommunications equipment and services provider Huawei, which appear intended to clip China’s technological exports.

Above: ByteDance (Beijing office pictured) was ordered to divest the US subsidiary of TikTok

As this report went to press, Joe Biden had seen off Donald Trump in the US presidential election, and it was unclear where this left the executive order on TikTok, or the wider China-US relationship.

India

India followed China by setting out its National Strategy for AI (NSAI) in 2018, but it might hope to steal a march on its rival to influence how AI is governed.

Unlike China, India is a democracy and has also already joined the Global Partnership on Artificial Intelligence.

The country’s workforce is rich with AI specialists, with algorithm-building skills estimated to be 2.6 times the global average, and above both China and the US, according to the Stanford AI Index.

All these facets put New Delhi in an enviable position to guide ethical AI implementation.

Indian national think-tank NITI Aayog has published a discussion paper to explore regulations that could control AI and harness its benefits without hampering the scope for innovation.

One point is the potential need for regulatory frameworks on a sector-by-sector basis, in addition to laws which govern all of AI.

The report suggests both over-arching and sector-specific regulators might be helpful, to avoid cases where an application designed for one industry is abused in a different sector, which could otherwise create ambiguity over which watchdog should investigate potential regulatory breaches.

The study has also recommended New Delhi should cooperate internationally and invest to catalyse models for protecting data privacy in AI technologies.

In terms of protecting R&D, the paper floats the example of France’s proposal for innovation sandboxes, which would have access to supercomputers at research institutes geared to running AI workloads.

Such measures could be balanced by a permanent mandate for regulators to monitor AI technology once it has reached the market.

The search for an ethical AI framework is well underway and there can be little doubt about the intent to succeed.

Providing clarity on what ethical AI means in practice will greatly help financial investors, who must weigh up increasingly grey areas when scrutinising deals.

But, as deep AI becomes more commonplace, it is unclear whether all economies can be sufficiently incentivised or disciplined to follow the rules. The justification must be strong enough to overcome nations’ tendency to seek competitive advantage, and that requires comprehensive engagement with all the issues AI presents for societies globally.

The French-Algerian philosopher Albert Camus said: “A man without ethics is a wild beast let loose upon this world.” It is a refrain that, as applied to humans building computers, will guide the AI industry, particularly as it strives for true autonomy across more industries.