In November 2018, more than 20,000 Google employees staged a coordinated walkout after the New York Times reported that the company had protected three executives accused of sexual harassment[1].  While the focus of this blog has little to do with labor rights or sexual harassment, mobilization of this magnitude highlights another problem in the technology industry: its history of exclusion.  The industry's uncontested 'bro culture' is not only imploding in the workplace; it is directly linked to the ethical problems surrounding algorithmic discrimination. Increasing evidence demonstrates that embedded within artificial intelligence (AI) and machine learning tools is an imprint of pre-existing social structures, one that perpetuates and amplifies gender and racial hierarchies.

Technology experts suggest that discrimination in decision-making algorithms is the result of unconscious and conscious biases being coded into programs, classification systems, and datasets fed to supervised and unsupervised machine learning models[2].  The hoarding of digital information by major technology companies has empowered data mining techniques to make life-altering predictions in areas such as insurance risk, mortgage loans, job advertising, shortlisting, hiring, and interviewing, as well as bail and recidivism, sentencing, airport security, and predictive policing.  While some may point to the well-intended motivations behind automated decision technologies, namely to establish a fairer and more just society, I ask: are we sure we aren't simply recreating a more precise version of our current reality?
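To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python. The data and variable names are hypothetical and not drawn from any real system: a classifier trained on historically skewed hiring decisions reproduces the disparity even when the protected attribute is withheld, because a correlated proxy feature carries the same information.

```python
# Illustrative sketch with synthetic, hypothetical data: a model trained on
# historically biased hiring decisions reproduces that bias, even when the
# protected attribute itself is dropped, because a correlated proxy remains.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g., group A vs. group B); NOT given to the model.
group = rng.integers(0, 2, size=n)

# A proxy feature correlated with group membership (e.g., neighborhood).
proxy = group + rng.normal(0, 0.5, size=n)

# A job-relevant skill score, identically distributed across both groups.
skill = rng.normal(0, 1, size=n)

# Historical hiring decisions: skill matters, but group 1 was favored.
hired = (skill + 1.5 * group + rng.normal(0, 0.5, size=n)) > 1.0

# Train only on ostensibly "neutral" features.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The model still recommends group 1 far more often: the historical
# disparity survives through the proxy feature.
preds = model.predict(X)
print("Predicted hire rate, group 0:", preds[group == 0].mean())
print("Predicted hire rate, group 1:", preds[group == 1].mean())
```

Even this toy example shows how a model can be "blind" to a protected attribute on paper while still encoding the hierarchy present in its training data.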

Following protests at Google and Microsoft, Amazon employees circulated a petition requesting that the company stop selling Rekognition, its facial recognition software, to government and state agencies.  There have been many more protests since. This marks a major turn in the digital era: technology companies are no longer perceived as young, radical liberators of information, but as mature corporations with moral obligations.  Several non-profit organizations have emerged to demand diverse and inclusive hiring practices, accountability, and transparency among technology corporations.

While on the one hand technology companies align themselves with a moral code of conduct aimed at creating a better society, on the other they are responsible for the onset of new forms of discrimination.  How do technology companies reconcile these two faces? Will they listen to the public? And how far are protesters willing to go? These questions are central to my own research, and I challenge academics and practicing data scientists to consider them in their work.

References

[1] Wakabayashi, D., Griffith, E., Tsang, A., & Conger, K. (2018, November 1). Google Walkout: Employees Stage Protest Over Handling of Sexual Harassment. Retrieved from https://www.nytimes.com/2018/11/01/technology/google-walkout-sexual-hara...

[2] Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York: New York University Press.

Author: 

Maria Smith

Maria is a sociology doctoral student at the University of California, Berkeley, interested in computational methods, surveillance, race, and inequality.