Anya Prince of Iowa and Daniel Schwarcz of Minnesota have written Proxy Discrimination in the Age of Artificial Intelligence and Big Data, forthcoming in the Iowa Law Review. Here's the abstract:
Big data and Artificial Intelligence (“AI”) are revolutionizing the ways in which firms, governments, and employers classify individuals. Surprisingly, however, one of the most important threats to antidiscrimination regimes posed by this revolution is largely unexplored or misunderstood in the extant literature. This is the risk that modern algorithms will result in “proxy discrimination.” Proxy discrimination is a particularly pernicious subset of disparate impact. Like all forms of disparate impact, it involves a facially-neutral practice that disproportionately harms members of a protected class. But a practice producing a disparate impact only amounts to proxy discrimination when the usefulness to the discriminator of the facially-neutral practice derives, at least in part, from the very fact that it produces a disparate impact. Historically, this occurred when a firm intentionally sought to discriminate against members of a protected class by relying on a proxy for class membership, such as zip code. However, proxy discrimination need not be intentional when membership in a protected class is predictive of a discriminator’s facially-neutral goal, making discrimination “rational.” In these cases, firms may unwittingly proxy discriminate, knowing only that a facially-neutral practice produces desirable outcomes. This Article argues that AI and big data are game changers when it comes to this risk of unintentional, but “rational,” proxy discrimination. AIs armed with big data are inherently structured to engage in proxy discrimination whenever they are deprived of information about membership in a legally-suspect class that is genuinely predictive of a legitimate objective. Simply denying AIs access to the most intuitive proxies for predictive but suspect characteristics does little to thwart this process; instead it simply causes AIs to locate less intuitive proxies. For these reasons, as AIs become even smarter and big data becomes even bigger, proxy discrimination will represent an increasingly fundamental challenge to anti-discrimination regimes that seek to limit “rational discrimination.” This Article offers a menu of potential strategies for combatting this risk of proxy discrimination by AI, including prohibiting the use of non-approved types of discrimination, mandating the collection and disclosure of data about impacted individuals’ membership in legally protected classes, and requiring firms to employ statistical models that isolate only the predictive power of non-suspect variables.
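The mechanism the abstract describes is easy to see in miniature. Below is a minimal, hypothetical sketch in Python (not from the article): a model is trained with the protected attribute withheld, but a correlated, facially-neutral feature, here called zip_code, lets it reproduce the disparate impact anyway. All variable names and the data-generating process are illustrative assumptions, not the authors' method.

```python
# Hypothetical illustration of unintentional proxy discrimination:
# a model never shown the protected attribute recovers it through a
# correlated, facially-neutral proxy. Names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected-class membership (withheld from the model).
protected = rng.integers(0, 2, n)

# A facially-neutral feature that happens to track class membership,
# e.g. zip code encoded numerically (an assumption for illustration).
zip_code = protected + rng.normal(0.0, 0.3, n)

# A legitimate predictor, independent of class membership.
skill = rng.normal(0.0, 1.0, n)

# The "rational discrimination" premise: the outcome the firm cares
# about is itself correlated with class membership.
outcome = (skill + 0.8 * protected + rng.normal(0.0, 1.0, n)) > 0.5

# Train only on the facially-neutral features.
X = np.column_stack([zip_code, skill])
model = LogisticRegression().fit(X, outcome)
scores = model.predict_proba(X)[:, 1]

# The model never saw `protected`, yet its scores diverge by class,
# because zip_code does the work of the withheld attribute.
print("mean score, protected = 0:", scores[protected == 0].mean())
print("mean score, protected = 1:", scores[protected == 1].mean())
print("weight on zip_code proxy:", model.coef_[0, 0])
```

In this toy setting, one way to read the authors' third proposed strategy is fitting only on the variation in zip_code that is uncorrelated with class membership, which, notably, requires collecting the protected attribute in the first place, consistent with their data-collection proposal.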
Discrimination does exist in America, especially when members of minority groups are searching for state or federal government jobs. I’ve been having a very difficult time finding a job since I started submitting my resume to government agencies over 17 months ago. Even though my undergraduate degree is from one of the Seven Sisters and I have a JD from the University of Baltimore, obtaining interviews has been quite difficult. The internet has hurt many people’s opportunities by surfacing past financial difficulties or civil litigation, past or present. Federal and state agencies are eliminating applicants over issues that shouldn’t be considered in deciding who gets an interview or a job.
AI shouldn’t be used as a guide when considering people for jobs, or, if it is used, it shouldn’t count against applicants who have clearly established the qualifications and skills the position requires. Otherwise it can automatically eliminate someone whose skills meet or exceed what is necessary for the position. This is a growing problem that is rarely revealed or evident to the applicant, yet operates in a deeply discriminatory way behind the scenes in HR.
AI shouldn’t be used in secret to disqualify job applicants; its potential for inaccuracy makes undesired consequences too likely.