As we grapple with adjusting to a post-COVID-19 world, a universally recurring theme across governments, statutory boards, private sector research and international organisations has been leveraging technologies such as artificial intelligence (AI) in response to the pandemic.
But the rapid rise of AI has produced a two-speed dynamic, where even as researchers and professionals flock to the field, the general public perceives technology to be outstripping regulation:
- The Stanford Center for Human-Centered Artificial Intelligence reports that attendance at NeurIPS – the world’s largest AI conference – is up over 800% relative to 2012;
- Concurrently, 61% of respondents on Edelman’s Trust Barometer worry that “Government does not understand emerging technologies enough to regulate them effectively.”
And it is that gap where AI currently outstrips regulatory oversight that the public fears.
Seen through the eyes of the layperson, we fear a bait and switch: that the technologies promoted to keep us safe will be repurposed to track us, eliminating what vestiges of privacy and anonymity remain.
FACIAL RECOGNITION — SILVER BULLET OR DANGEROUS WEAPON?
A key component in this debate is facial recognition systems: software that identifies individuals by comparing images of their faces against a database of records. If reading the above definition fills you with a vague uneasiness, you are not alone. The broad deployment of AI is a double-edged sword, as the effectiveness of these solutions hinges on the large-scale collection of personal data, and from there it can be a short hop to state surveillance.
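For readers curious about the mechanics, the core matching step can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's implementation: it assumes an upstream model has already converted each face image into a numeric "embedding" vector, and simply compares a probe face against a gallery of enrolled records.

```python
import numpy as np

def identify(probe, gallery, threshold=0.6):
    """Return the best-matching enrolled name, or None if nothing is close enough.

    probe: embedding vector for the face being checked.
    gallery: dict mapping enrolled names to their embedding vectors.
    threshold: illustrative cut-off; real systems tune this carefully.
    """
    best_name, best_score = None, -1.0
    for name, ref in gallery.items():
        # Cosine similarity: 1.0 means the two embeddings point the same way.
        score = np.dot(probe, ref) / (np.linalg.norm(probe) * np.linalg.norm(ref))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```

Everything of consequence (the embedding model, the gallery of enrolled faces, the matching threshold) sits outside this snippet, which is precisely why the data collection behind such systems matters so much.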
The facial recognition industry in the US is currently valued at USD5 billion and growing at breakneck pace, with projections that it will double by 2025.
While state-of-the-art facial recognition algorithms are hitting over 99.9% accuracy, questions persist beyond mere performance. The rapid growth of the technology has triggered widespread debate among activists, politicians, academics and even police forces, with primary concerns being around privacy, proportionality, and the lack of opportunity for consent.
Facial recognition in Penang
In January 2019, Penang became the first state in Malaysia to launch a facial recognition system capable of identifying criminals through its CCTV network. Photo: Olgazu / Dreamstime
Furthermore, beneath the stellar accuracy metrics may lie a more troubling picture of algorithmic bias. The MIT Media Lab’s Gender Shades project uncovered significant bias in the results of commercial AI models when applied to certain demographics.
These models were significantly more accurate on males than females, and on test subjects with lighter skin versus those with darker skin. The findings were even starker at the intersections (for instance, females with dark skin), where the models were between 20.8% and 34.7% less accurate.
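To see how a single headline figure can mask these gaps, consider the kind of breakdown such an audit performs. The records below are purely illustrative, not the Gender Shades data; the point is that one overall accuracy decomposes into very different subgroup accuracies.

```python
from collections import defaultdict

# Illustrative evaluation records: (prediction_correct, gender, skin_tone).
results = [
    (True, "male", "lighter"), (True, "female", "lighter"),
    (True, "male", "darker"), (False, "female", "darker"),
]

def subgroup_accuracy(records):
    """Split one headline accuracy into per-subgroup (intersectional) accuracies."""
    totals, correct = defaultdict(int), defaultdict(int)
    for ok, gender, tone in records:
        key = (gender, tone)  # e.g. ("female", "darker")
        totals[key] += 1
        correct[key] += ok
    return {key: correct[key] / totals[key] for key in totals}

print(subgroup_accuracy(results))
# Overall accuracy is 75%, yet the ('female', 'darker') subgroup scores 0.0.
```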
What this means is that if you are a buyer or user of facial recognition systems, headline accuracy may be misleading – systems that boast ‘99.9% accuracy’ may actually mean ‘99.9% accuracy on the test sample under test conditions’. Machine learning researchers call this gap between test and real-world conditions ‘domain shift’, and efforts are currently underway to spread awareness of these risks and place guardrails around how this emerging technology is operationalised.
THE RISE OF RESPONSIBLE AI
Enter the emerging field of responsible and trustworthy AI. If you have not come across the term, you likely will – what was once a niche interest of futurists and researchers is rapidly reaching a tipping point. In what may be the first known case of its kind, 2020 saw a faulty facial recognition match lead to the wrongful arrest of a Michigan man for a crime he did not commit.
Technology research and advisory firm Gartner has identified responsible AI and AI governance as priorities for deploying AI at industrial scale, and on the policy front no fewer than 84 documents laying out ethical guidelines for AI systems have been published worldwide, nearly 90% of them in the last four years.
But responsible AI covers many areas, ranging from algorithmic bias and explainable AI through to topical issues such as deepfakes, as well as broader societal challenges around AI liability, human dignity and job displacement. Homing in on a more specific question: can we reap the benefits of AI systems that protect us from COVID-19 while safeguarding our individual privacy?
A BI-MODAL VIEW OF TRUSTWORTHY COMPUTER VISION SYSTEMS
It comes down to appropriateness – the right tool for the right purpose.
Facial recognition and biometric technologies can be a force for good within certain well-specified contexts. Supporters in the US often argue that biometric surveillance technology should be reserved for the greatest risks, such as to help deal with violent crimes, terrorist threats and human trafficking.
However, the usage of AI in public spaces for reasons such as productivity, insights and safety can and should take on a different form. Equating computer vision AI systems with biometric identification systems is a false equivalence.
Technologies, methods and design patterns exist to deploy AI systems that can capture detailed behaviour without compromising the identity or privacy of the individual.
This is the distinction that I hope will make it into best practice as conversations around AI concerns start to coalesce into concrete implementation guidelines – that we limit the rampant collection of personally identifiable information (PII) and the use of biometric technologies to the few use cases that warrant it, and prioritise data privacy for the rest.
Privacy-preserving measures range from identifying and blurring faces, through processing only blob-like silhouettes of people, to architectural steps such as ensuring that any streamed data excludes sensitive information that could compromise individual identities.
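To make the first of these concrete, here is a minimal sketch of face blurring using OpenCV's bundled Haar cascade detector. It is an illustration only, and certainly not Lauretta.io's production pipeline; real deployments would use stronger detectors and stricter guarantees.

```python
import cv2

# OpenCV ships this classic face detector with the library.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def anonymise_frame(frame):
    """Blur every detected face so downstream analytics never see an identity."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        # A heavy Gaussian blur destroys identifying detail inside the face box.
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
    return frame
```

The key design point is architectural: anonymisation happens before any frame is stored or streamed, so identity never enters the system in the first place.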
At Lauretta.io we implement advanced versions of the above, enabling the understanding of detailed activity but never identifying individuals. Our solutions are designed from the ground up to be privacy first, and we believe in reaping the benefits of AI without compromising individual privacy.
Here are three simple questions we encourage everyone to ask when they next see a video AI camera in public:
1. Privacy — Is my privacy protected? Is the system storing my personal data?
2. Purpose — What is the AI system used for?
3. Bias — Has the system been tested for bias in realistic environments?
VIDEO-BASED AI IN ACTION: BATTLING COVID-19
All of the above may seem like a lot of effort on top of what are already significantly complex AI systems. So what makes these responsible AI measures worth the effort? And why is this particularly important now?
The Computer Got It Wrong
A flawed match from a facial recognition algorithm that draws on state driver’s licence photo databases led to the arrest of Robert Julian-Borchak Williams for a crime he did not commit.
One strong driver is the well-founded hope that AI can be a potent weapon in our worldwide effort against COVID-19. Seen through the lens of technology teams, many key strategies tabled by the World Health Organization (WHO) can be reframed as problems well-supported by the data and AI toolkit. Sampling from their strategy document:
• Suppressing community transmission through physical distancing is currently enforced by safe distancing ambassadors. But putting people in harm’s way is a dicey proposition, and no one can be everywhere, all the time. AI systems working through camera networks can, and when paired with privacy-first principles, such computer vision systems can be a powerful weapon to blunt the pandemic (a toy sketch of a distancing check follows this list).
• Prevention through respiratory etiquette (primarily mask wearing) likewise requires monitoring public spaces at scale. This is again a task that computer vision systems accomplish easily, and can be effective at without needing to identify individuals: violations can simply trigger a notification to on-duty personnel.
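As a flavour of how simple the privacy-preserving version of the first task can be, here is a toy distancing check. It assumes an upstream, anonymised person detector has already mapped each person to an (x, y) ground-plane position in metres; no identity is involved at any point.

```python
from itertools import combinations
import math

def distancing_violations(positions, min_distance_m=1.0):
    """Return index pairs of people standing closer than the safe distance."""
    return [
        (i, j)
        for (i, a), (j, b) in combinations(enumerate(positions), 2)
        if math.dist(a, b) < min_distance_m
    ]

# Three people: two stand 0.5 m apart, one is far away.
print(distancing_violations([(0.0, 0.0), (0.5, 0.0), (5.0, 5.0)]))  # [(0, 1)]
```

A violation needs nothing more than anonymous coordinates to trigger an alert to on-duty personnel.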
TOWARDS A SAFER AI-POWERED FUTURE
With both artificial intelligence and facial recognition on tremendous growth trajectories, it is important to disentangle the two. AI systems are a new frontier in the appropriate use of technology, and with guidelines still nascent globally, we do not have to be passive spectators of the unfolding AI story – all of us have a part to play in shaping it.
All images displayed above are solely for non-commercial illustrative purposes. This article is written in a personal capacity and does not represent the views of the organisations the author works for or is affiliated with.
JASON TAMARA WIDJAJA
Jason Tamara Widjaja is Associate Director, Foundational Data & Analytics (AI) at one of the world’s leading biopharmaceutical companies. A multidisciplinary technology leader, Jason has the dual responsibilities of leading a large and diverse data science team in Singapore and driving business outcomes through AI globally.
Jason spent half his career in Australia before relocating back to Singapore in 2016. Since then, he has been active in the local start-up ecosystem as a co-founder of Lauretta.io with his brother Galvin, and a mentor to AI start-ups in partnership with local venture capital firms in Singapore.
With a strong belief in working towards a better world through data and AI, Jason has been giving back to society by advocating for ethical AI and diversity in tech. In his free time, he is a contributor to Singapore’s AI Governance Framework and a top writer on Quora, with over 1,000 answers in Data Science, Analytics and Artificial Intelligence.