Uber Eats Courier’s Battle Highlights UK Law’s Struggle to Deliver Justice for AI Bias

The BBC recently reported the case of Pa Edrissa Manjang, a Black Uber Eats courier who received a payout from Uber after facial recognition checks he described as “racially discriminatory” blocked his access to the app. Manjang had been working on the platform as a food delivery courier since November 2019.

The incident has raised concerns about whether U.K. legislation is adequate to deal with the growing use of AI systems. The opaque, rapid deployment of automated systems, ostensibly aimed at improving user safety and service efficiency, risks multiplying individual harms, while redress for those affected by AI-driven bias can take years.

Legal Battles and Resolution

The legal action stemmed from a series of failed facial recognition checks after Uber introduced its Real Time ID Check system in the U.K. in April 2020. The system, built on Microsoft’s facial recognition technology, requires couriers to submit a live selfie that is verified against a stored reference photo.
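Neither the exact pipeline nor the thresholds Uber uses are public, but as a rough, purely illustrative sketch, a selfie-versus-stored-photo check built on Microsoft’s Azure Face SDK could look something like the following (the endpoint, key, and confidence threshold are placeholders, not values from Uber’s system):

```python
# Illustrative sketch only: not Uber's actual implementation.
# Uses Microsoft's Azure Face SDK (azure-cognitiveservices-vision-face),
# which offers a detect-then-verify workflow similar in spirit to a
# live-selfie vs. stored-photo identity check.
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com/"  # placeholder
KEY = "<your-api-key>"  # placeholder

client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))


def selfie_matches_reference(selfie_path, reference_path, threshold=0.7):
    """Detect one face in each image and ask the service to compare them.

    Returns True only if the service judges the faces identical with at
    least `threshold` confidence. The threshold is an arbitrary placeholder;
    a production system would tune it and route borderline or repeated
    failures to human review rather than acting on them automatically.
    """
    with open(selfie_path, "rb") as selfie, open(reference_path, "rb") as reference:
        selfie_faces = client.face.detect_with_stream(selfie, return_face_id=True)
        reference_faces = client.face.detect_with_stream(reference, return_face_id=True)

    if not selfie_faces or not reference_faces:
        # No face detected in one of the images: inconclusive, not a mismatch.
        return False

    result = client.face.verify_face_to_face(
        face_id1=selfie_faces[0].face_id,
        face_id2=reference_faces[0].face_id,
    )
    return result.is_identical and result.confidence >= threshold
```

The relevant point is that such a check returns a probabilistic confidence score, not a certainty, which is why what happens after repeated “mismatches” matters as much as the model itself.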

According to Manjang’s complaint, Uber suspended and eventually terminated his account following failed ID checks and an automated process citing “continued mismatches” in submitted photos. Legal action was initiated in October 2021 with support from the Equality and Human Rights Commission (EHRC) and the App Drivers & Couriers Union (ADCU).

The litigation dragged on for several years, prolonged by Uber’s attempts to have the claim dismissed or to require a deposit for it to proceed. Although a final hearing had been scheduled for November 2024, Uber opted to settle with Manjang on undisclosed terms. Neither Uber nor Microsoft has offered a detailed explanation of what went wrong.

Despite the settlement, Uber denies culpability, asserting that its systems include robust human review to prevent arbitrary decisions. However, the circumstances surrounding Manjang’s case suggest significant flaws in Uber’s ID verification process.

Worker Info Exchange (WIE), an advocacy group supporting Manjang, obtained his submitted selfies from Uber via a Subject Access Request under U.K. data protection laws, confirming their authenticity. Uber allegedly disregarded Manjang’s repeated requests for human review, exposing failures in both automated checks and human oversight.
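Neither company has explained how, or whether, escalation to a human reviewer worked in this case. In principle, the safeguard Uber describes would gate any account action behind a human once automated checks repeatedly fail; the sketch below is purely hypothetical (the names, thresholds, and actions are invented for illustration and do not describe Uber’s process):

```python
# Hypothetical illustration of a human-in-the-loop gate for repeated
# ID-check mismatches. Names and thresholds are invented; this is not
# a description of Uber's actual process.
from dataclasses import dataclass


@dataclass
class CourierChecks:
    courier_id: str
    consecutive_mismatches: int = 0


def handle_check_result(record: CourierChecks, matched: bool,
                        mismatch_limit: int = 3) -> str:
    """Return the action to take after one ID-check result.

    A match resets the counter. Repeated mismatches never deactivate the
    account automatically: once the limit is reached, the case is escalated
    to a human reviewer, who sees the submitted photos and makes the call.
    """
    if matched:
        record.consecutive_mismatches = 0
        return "allow_access"

    record.consecutive_mismatches += 1
    if record.consecutive_mismatches < mismatch_limit:
        return "retry_check"

    # The step Manjang's case suggests was missing in practice.
    return "escalate_to_human_review"
```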

Legislative Deficiencies and Regulatory Gaps

This case underscores the limitations of existing U.K. law when it comes to regulating the use of AI. That Manjang had to seek redress through a discrimination claim under the Equality Act 2010 shows how ill-equipped current legal frameworks are to address AI-driven discrimination and rights abuses.

Baroness Kishwer Falkner of the EHRC criticized the opaque processes affecting workers like Manjang, stressing the need for greater transparency and accountability in AI usage. While U.K. data protection laws theoretically provide safeguards against opaque AI processes, enforcement gaps, exemplified by the Information Commissioner’s Office’s (ICO) inaction, undermine these protections.

The government’s reluctance to introduce dedicated AI safety legislation and the modest funding allocated for AI regulation further highlight the need for comprehensive regulatory reform to address the evolving challenges posed by AI technologies.

The Manjang case ultimately demonstrates the urgent need for robust legal frameworks and effective enforcement mechanisms to guard against the discriminatory and harmful impacts of AI systems in the U.K.
