A gendered commentary on algorithmic bias and economic exclusion in India
The advent of artificial intelligence has popularised the idea that robots should be the new decision-makers because they are just, fair and devoid of human sensitivities. What is often ignored is that robots are also born (made) and brought up (trained) by humans, and, much like children, robots are impressionable and influenced by their creators. Aishwarya Rai may be a worldwide heartthrob, but the reason Chitti falls for the same person as its maker in Enthiran (also released as Robo) could well be algorithmic bias.
Artificial Intelligence (AI) systems function through algorithms, which are built on mathematics and datasets. Decision-making AI is trained on datasets collected over time, using statistical generalisation. The AI then makes connections between data points and identifies patterns in order to reach, or repeat, the decisions implied by the datasets. These datasets are the juncture at which stereotypes enter computer systems. Broadly, this happens in two ways: sample bias and linking bias.
Sample bias occurs when a particular class of persons is disadvantaged by a disproportionate, incomplete or exclusionary dataset. An everyday example is why Google's speech recognition system works better with male voices than female voices: the algorithm was trained disproportionately on the voices of men. Algorithms also learn by making connections, and when those connections perpetuate bias, it is called feature linking or linking bias. Amazon employed an algorithm to shortlist candidates and had to discontinue it because it prioritised men over women: the dataset showed that, historically, more men ended up being hired.
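The mechanism described above can be made concrete with a small sketch. The snippet below uses an invented, purely illustrative set of historical hiring records in which men are overrepresented among past hires; a naive "model" that shortlists candidates by replaying historical hire rates then reproduces the skew, which is exactly the failure mode reported in the Amazon case. The data, function names and threshold are all assumptions made for this sketch, not a description of any real system.

```python
# Hypothetical historical hiring records as (gender, hired) pairs.
# The imbalance (100 male vs 20 female applicants, with very different
# hire rates) is invented to illustrate sample bias.
records = (
    [("M", True)] * 80 + [("M", False)] * 20
    + [("F", True)] * 10 + [("F", False)] * 10
)

def hire_rate(records, gender):
    """Fraction of past applicants of the given gender who were hired."""
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

def shortlist(records, gender, threshold=0.5):
    """A naive 'model' that shortlists a candidate whenever the
    historical hire rate for their gender exceeds the threshold.
    It learns nothing about the individual; it simply replays the past."""
    return hire_rate(records, gender) > threshold

print(hire_rate(records, "M"))   # 0.8
print(hire_rate(records, "F"))   # 0.5
print(shortlist(records, "M"))   # True  -- men clear the bar
print(shortlist(records, "F"))   # False -- women do not
```

Nothing in the code mentions merit; the disparity comes entirely from the skewed training records, which is why debiasing efforts focus on the data and features rather than only on the model.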
In South Asian countries, and particularly in India, stereotypes attached to women and the LGBTQ+ community are especially prevalent, and the socio-economic balance tips heavily towards men. According to a study conducted in August 2019, the unemployment rate among educated women is more than twice that of men. The same study revealed that, although equality is guaranteed under Article 14 of the Indian Constitution, discrimination persists in the hiring process. These sexist biases matter at a time when India is moving towards automation and web-based tools, as envisaged in the 2018 NITI Aayog report #AIforAll and the recently published framework for AI Stacks.
With a history of gender discrimination and high unemployment rates among women, algorithms used to make credit-lending and hiring decisions are likely to amplify these disadvantages. Though biases stem from human minds, algorithms can amplify them manifold through compounded use. It is therefore essential to recognise these biases and eliminate them.
That gender bias still exists in hiring decisions is not disputed. As recently as February 2020, the Indian Supreme Court had to remind the Indian government that its arguments for denying women command positions in the Army rested on stereotypes. Since data drives algorithms, the existing data gap will fuel algorithmic bias. India ranked 112th in the Global Gender Gap Index 2020 released by the World Economic Forum, four places lower than in the 2018 Index. According to the Sixth Economic Census, only 14 percent of businesses in India are run by women. The lack of access to traditional and collateral financing, reinforced by cultural bias, is a significant obstacle for Indian women entrepreneurs. At a time when credit-lending facilities are becoming highly dependent on AI, the gender gap could be amplified manifold. Tellingly, in 2019 Apple's credit card was in the news for discriminating against women, offering them credit limits up to 20 times lower than men's despite better credit scores.
Further, social and cultural bias against women restricts their consideration for leadership positions; in India, they are often associated with household chores. One study showed that an AI algorithm learned to associate women with images of a kitchen after reviewing more than one lakh images from the internet. This bias is strengthened by the belief that women are not suited to leadership. According to the Gender Social Norms Index (2019), around 50 percent of the world's population feel that men make better political leaders, and fewer than 6 percent of CEOs in S&P 500 companies are women. Such uneven datasets will weigh heavily against women and amplify bias when algorithms make hiring decisions. Trained on this history, AI could teach itself that men are preferred for leadership and higher-paid jobs.
A persistent form of gender inequality in the workplace is the pay differential between men and women. The Global Gender Gap Report (GGGR) 2020 estimated that women's wages are about 15 percent lower than men's; in India, the gap is about 34 percent according to the 2019 Oxfam report. AI bias can widen this gap further, as women are less likely to be shown ads for higher-paying jobs. A Carnegie Mellon study found that women were less likely than men to be shown ads for high-paying jobs on Google, and a similar case was reported for LinkedIn. In India, women were less likely to see ads for a career coaching service for "$200k+" executive positions. The greatest challenge in preventing this economic gender gap from growing is bringing women into emerging fields where wage growth is most pronounced: women make up only 12 percent of professionals in cloud computing, 15 percent in engineering, and 26 percent in data and AI (GGGR, 2020). This calls for ensuring diversity in the technology industry. To encourage women to pursue careers in STEM (Science, Technology, Engineering and Mathematics), the industry should support initiatives focused on child care, work-life balance and funding for female entrepreneurs. Indeed, young girls should be encouraged to study STEM subjects from an early age.
Evidently, algorithmic biases can play a huge role in furthering and amplifying the existing social, cultural and economic biases in Indian society, and regulatory intervention is needed. In September 2020 the Department of Telecommunications released a draft framework for an Indian Artificial Intelligence Stack, which seeks to "remove the impediments to AI deployment" and proposes a six-layered stack covering consent gathering, storage and AI/Machine Learning analytics. Unlike the NITI Aayog paper "AI for All," this framework recognises the problem of algorithmic bias: it states that feeding "garbage" data will lead to unintended and harmful consequences, and calls for openness in AI algorithms, since opaque, black-box decision-making can discriminate. Yet, having recognised the issue, the report does little to suggest how to avoid algorithmic bias. One proposed solution is to control the data centrally, but it is unclear how this would mitigate institutional biases.
Artificial Intelligence is being rapidly deployed in India without safeguards against algorithmic bias. The gender framework for the UN Guiding Principles on Business and Human Rights suggests a three-step approach for States, companies and other actors: first, involve women actively in constructing algorithms rather than treating them as passive beneficiaries; second, disaggregate datasets by sex; and third, design algorithms to address the existing gender gap rather than entrench it. Algorithmic biases must be recognised and mitigated to protect marginalised classes, who lack the economic, political and social capital to withstand the consequences of imperfect algorithms. Algorithmic bias, reinforced by cultural and social tendencies, may further deepen the existing gender pay gap by excluding women from higher-paid employment.
This article is co-authored by Varsha Singh and Apilang Apum as a part of an Article series in collaboration with Rethinking Economics.
Varsha is a final year law student with a keen interest in technology law and policy. She is the co-founder of r-TLP, a tech law and policy platform for the marginalized genders. She writes regularly on data privacy and data security.
Apilang is currently teaching Economics at Jomin Tayeng Government Model Degree College, Roing. She is a member of the Rethinking Economics India Network (REIN).
The Rethinking Economics India Network (REIN) is the National Network of the global Rethinking Economics movement and looks to organize locally against a background of social movements of which we are one part. Operating as a nation-wide ecosystem of stakeholders in the space of pluralist and heterodox economics, we interface with individuals, universities, civil society, and the private sector in order to scale collaborative efforts for teaching, learning, and discussion and build a movement for economics that is pluralist and inclusive.