As automated systems spread throughout the economy, federal agencies have announced that they will use their existing authorities to ensure compliance with the law.
“We take seriously our responsibility to ensure that these rapidly evolving automated systems are developed and used in a manner consistent with federal laws,” said the heads of the Consumer Financial Protection Bureau, the Justice Department’s Civil Rights Division, the Equal Employment Opportunity Commission, and the Federal Trade Commission in a joint statement.
Artificial intelligence and algorithms have sparked significant concern over unlawful bias and discrimination. As the statement points out, each of the four agencies has already begun to take action on these emerging technologies. DOJ clarified that the Fair Housing Act applies to algorithm-based tenant screening services. In a report to Congress, the FTC warned that AI tools used to combat online harms can themselves cause harm, finding certain technologies to be “inaccurate, biased, and discriminatory by design.” The CFPB clarified that creditors using algorithms to make credit decisions must comply with the Equal Credit Opportunity Act, whose anti-discrimination requirements are particularly important when a creditor denies a consumer’s application for credit.
“Whether a creditor is using a sophisticated machine learning algorithm or more conventional methods to evaluate an application, the legal requirement is the same: Creditors must be able to provide applicants against whom adverse action is taken with an accurate statement of reasons,” a CFPB circular said.
Finally, the EEOC said that hiring and employment-related decisions based on algorithms and AI must comply with civil rights laws.
Despite these warnings, the federal law enforcers also noted the benefits of “responsible innovation”: innovation that complies with consumer protection and civil rights laws has the potential to enhance customer experience and convenience.