There is growing support for the idea that artificial intelligence can help remove human bias from situations where institutional discrimination has persisted for decades. California recently passed a fair hiring resolution to encourage the development and use of algorithm-based technologies in hiring decisions. AI has the potential to help end decades of discrimination in the home lending market too, but a new rule change proposed by the U.S. Department of Housing and Urban Development may lead to the wrong kind of AI outcomes.
For years, policymakers have sought to reverse centuries of bias in our housing and lending markets to create a more inclusive economy. Despite some progress, housing in America remains more segregated today than it was in the 1920s. The black homeownership rate remains at crisis levels. At 40.6%, it is lower than it was in 1967, when redlining was legal — and far behind the 73.1% rate for non-Hispanic whites. This disparity drives severe inequities: the median white household holds a staggering ten times the wealth of the median black household.
More financial institutions are turning to AI and machine learning algorithms to make underwriting decisions. These models draw on more data and more sophisticated math to spot creditworthy borrowers who might have been overlooked, or who are too hard to score using traditional methods.
Lenders have long been prohibited from intentionally discriminating against people of color and from creating what is known as disparate impact — lending policies and decisions that unnecessarily harm borrowers protected by our fair housing laws, even when the discrimination is unintentional. Together, these two standards — intentional discrimination and disparate impact — have been the pillars of our nation's fair lending laws. They have compelled lenders to continually examine their policies and develop new practices that lessen discriminatory outcomes.