The financial sector has grown significantly over the past few years and has leaned on technology on many fronts to make its functioning more efficient. The use of algorithms has grown exponentially, and with it the hope that addressing inequality and prejudice would become more manageable and streamlined. Algorithmic bias is not a new issue and has been discussed in many contexts. Since algorithms are, in a way, an extension of human intelligence and thought processes, it is no surprise that they have taken on certain prejudices and biases.

A creditworthiness algorithm works on the basic principle of risk prevention: it assesses whether a prospective borrower will be able to repay the sum borrowed, alongside other associated risks. These algorithms are usually built on risk-prediction models that inherently include parameters biased against certain communities, including women. The credit scoring system generally favours men over women, and many underlying factors account for this discrimination. The Union Budget 2020-21 pushed for greater use of AI in the financial sector, and most FinTech and other financial services have responded with great eagerness. The increased reliance on these algorithms in the absence of a clear regulatory framework in India should be looked at critically.

This article will look at two broad aspects: (a) the exclusionary consequences of these algorithms and (b) fairness in these models and why it might not be achievable.

Excluded but not forgotten 

In 2019, Apple and, by extension, Goldman Sachs were embroiled in a scandal alleging that their credit scoring algorithms discriminated against female applicants even when their credit was better than that of their male counterparts. The incident sparked international debate on gender bias in algorithms. The New York State Department of Financial Services released its report on the incident in 2021, noting that while Apple had acted ‘lawfully’ and there was no ‘unlawful discrimination’, the way the systems were designed has inbuilt biases that need to be addressed.

India has seen a sharp rise in the use of algorithms for a variety of purposes, including preemptive fraud detection. Different kinds of data points are processed and analysed: bank account records, other publicly available records, the kind of instrument used to access the site/app (the brand of laptop or phone), loan repayment and general consumption patterns, and so on. Most providers ask for a host of permissions that are seemingly unrelated to the primary purpose of the app. One such example is permission to access a user’s social media activity, which is then used to determine their creditworthiness. For instance, a person’s Friends List on Facebook can be examined to assess the kind of friends they have, and depending on the creditworthiness of those friends, applications are either allowed or rejected. Many providers such as Capital Float, LendingKart, mPokket, etc. have used models based on equivalent parameters, along the lines of the illustrative sketch below.
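To make the mechanics concrete, here is a minimal, hypothetical sketch of how such alternative-data signals might be combined into a single score. The feature names, weights and thresholds are assumptions made purely for illustration; they do not reflect any particular provider’s model.

```python
# Hypothetical sketch of an alternative-data credit score.
# All feature names, weights and thresholds are assumptions for illustration;
# they do not reflect any provider's actual model.

PREMIUM_DEVICES = {"apple", "samsung"}  # assumed proxy for spending power

def alt_data_score(applicant: dict) -> float:
    """Combine non-traditional signals into a single score between 0 and 1."""
    score = 0.0
    score += 0.4 * applicant["on_time_repayment_rate"]               # loan history
    score += 0.2 * min(applicant["avg_monthly_balance"] / 50_000, 1.0)
    score += 0.1 * (applicant["device_brand"].lower() in PREMIUM_DEVICES)
    # Social-graph signal: the average score of the applicant's contacts.
    friends = applicant.get("friend_scores", [])
    score += 0.3 * (sum(friends) / len(friends) if friends else 0.5)
    return score

applicant = {
    "on_time_repayment_rate": 0.9,
    "avg_monthly_balance": 20_000,
    "device_brand": "Xiaomi",
    "friend_scores": [0.35, 0.40, 0.50],  # a low-scoring network drags the result down
}
print(round(alt_data_score(applicant), 3))
```

Even in this toy version, an applicant with a strong repayment record is penalised for the device they own and the network they keep, which is precisely the concern with such parameters.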

A study conducted by Women’s World Banking found that FinTech companies are missing out on an ‘opportunity to reach more than 1 billion customers and are contributing to the already existing “$17 billion gender credit gap” in emerging markets.’ A report published by the International Finance Corporation noted that establishing the creditworthiness of a female entrepreneur in India was extremely hard and that the process was affected by biases, intentional or unintentional. Banks struggled to establish the creditworthiness of these businesses, and MSMEs owned by women are perceived to be riskier than others even though they perform better than male-owned businesses. The biggest hurdle identified was the lack of proper market data on these businesses and how they operate. The lack of data, compounded by the use of inherently biased algorithms, seems catastrophic.

Algorithmic bias and fairness

The Apple-Goldman Sachs controversy brings to light an approach to this issue that might seem counterintuitive at first. Algorithms perpetuate biases at a scale that is not traceable, while giving the impression that there is no bias in the system. Many algorithms do not have factors like race or gender built into them. In some cases, as in New York, this is due to regulatory compliance requirements (lending based on considerations like gender is illegal). However, this does not mean that the algorithm will not consider what are termed ‘latent variables’, i.e., factors such as occupation or historical data like location that are used to draw conclusions about a person, and which indirectly encode gender. In the Apple-Goldman Sachs example, women with higher credit scores than their husbands were offered lower credit limits. This counters the companies’ defence that the credit scores were drawn from already available data, and implies that the algorithm drew certain conclusions based on these latent variables, as illustrated in the sketch below.
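A small synthetic example can make this proxy effect concrete. In the sketch below, which is a toy illustration and not a reconstruction of any real scoring system, gender is never given to the model, yet approval scores still diverge because occupation and income correlate with gender in the (deliberately skewed) training data.

```python
# Toy illustration of proxy ("latent variable") bias on synthetic data.
# Every distribution below is an assumption made purely for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)               # 0 = man, 1 = woman; never shown to the model
# Proxy feature: occupation sector, correlated with gender in the synthetic data.
occupation = (rng.random(n) < 0.2 + 0.6 * gender).astype(float)
income = rng.normal(50 - 5 * gender, 10, n)  # a historical pay gap baked into the data
# Historical approval labels were themselves biased against women.
approved = ((income > 45) & (rng.random(n) > 0.2 * gender)).astype(int)

X = np.column_stack([occupation, income])    # note: gender is NOT a feature
model = LogisticRegression().fit(X, approved)
scores = model.predict_proba(X)[:, 1]

print("mean approval score, men:  ", scores[gender == 0].mean().round(3))
print("mean approval score, women:", scores[gender == 1].mean().round(3))
# The gap persists because occupation and income act as proxies for gender.
```

Dropping the protected attribute from the feature set does nothing to remove the correlation between that attribute and the rest of the data, which is why facially neutral models reproduce the bias in their training history.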

A pertinent concern, therefore, is that these biases perpetuate the systemic issues that existed in traditional credit lending. Women who seek loans from banks and other traditional lending institutions are often asked for a high amount of collateral, which many women may not have direct access to and would therefore have to rely on their fathers or husbands to provide. Modern FinTech firms provide a unique and crucial opportunity to avoid these requirements, thereby offering an option that many women can take advantage of. The use of biased algorithms, however, significantly impedes the progress that could have been made, directly affecting women’s ability to be financially independent by perpetuating the same problems as before.

In the Indian context, these algorithms fall within a chasm of regulatory uncertainty while functioning as intermediaries; no clear rules are applicable to them at present. The most crucial aspect in this regard would be the guarantees, duties and penalties laid down in the yet-to-be-passed Personal Data Protection Bill, 2019. The legislation that comes closest to regulating these algorithms is the Credit Information Companies (Regulation) Act, 2005, which grants vast powers to the Reserve Bank of India to check practices adopted by CICs in determining creditworthiness. A major roadblock, however, is that most of the newer credit companies, such as FinTech firms, have not been registered under the Act and are therefore not under the control of the RBI. Another relevant instrument is a notification issued by the RBI in 2019 that forbade credit information bureaus as well as banks from disclosing the credit details of customers without their consent; it did not, however, directly address the use of algorithms or what would theoretically fall within permissible bounds. There is a general lack of transparency regarding how these scores are arrived at and why specific rates are fixed at a particular point. In the absence of clear regulatory guidelines, the burden of ensuring accuracy effectively falls on no one.

These systems cannot be fair because humans are not ‘fair’; there is no fixed definition of fairness. Unintentional biases creep in even when the data provided by the user is ostensibly neutral, as in the case of zip codes: combined with other data, such fields produce exclusionary results. In most cases, the system seeks to replace humans and make their function more efficient. Improving the dataset by reintroducing gender as a mitigating factor might not solve the problem entirely, as the machine would continue to discriminate in proportion to the changes introduced through the combined model. The algorithm would still make skewed decisions, and any favourable results it produces would owe more to chance than to design, which is again detrimental. One way to see how slippery ‘fairness’ is in practice is to try to pin it down to a single, measurable criterion, as in the sketch below.
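The sketch below computes one common, and contested, fairness criterion known as demographic parity over a set of toy approval decisions. The decisions and group labels are invented for illustration; the point is only that this metric captures one narrow notion of fairness among several that cannot all be satisfied at once.

```python
# Sketch of a single fairness check (demographic parity) on toy decisions.
# The decisions and group labels are invented for illustration only.
import numpy as np

def demographic_parity_gap(approved, group):
    """Absolute difference in approval rates between two groups (0 = parity)."""
    approved, group = np.asarray(approved), np.asarray(group)
    return abs(approved[group == 0].mean() - approved[group == 1].mean())

decisions = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]   # 1 = loan approved
group     = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]   # 0 = men, 1 = women (assumed labels)
print(round(demographic_parity_gap(decisions, group), 2))  # 0.6: far from parity
```

Demographic parity is only one of several mutually incompatible criteria (equalised odds and calibration are others), which is part of why ‘fairness’ resists a single definition in these systems.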

In response to these issues, some have suggested creating two separate models, one for men and one for women. The primary purpose of these models would be to assess each broad demographic within its own unique circumstances, as sketched below. Models of this kind have been tested in countries like the Dominican Republic, where, after the introduction of the new model, women from lower economic backgrounds benefited greatly and were extended more credit than under the previous model.
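As a rough illustration of the idea, with entirely synthetic data and assumed income distributions, the sketch below trains one model per group so that each applicant is scored against the economic profile of their own group rather than against a pooled, male-dominated baseline.

```python
# Sketch of the "separate models per group" idea on synthetic data.
# Group sizes, income profiles and features are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, income_mean, informal_share):
    """Generate one demographic group with its own economic profile."""
    income = rng.normal(income_mean, 10, n)
    informal = (rng.random(n) < informal_share).astype(float)  # earnings invisible to banks
    repaid = (income + 10 * informal > income_mean).astype(int)
    return np.column_stack([income, informal]), repaid

# One model per group, trained and applied only within that group, so each
# applicant is assessed against their own group's circumstances.
models = {}
for name, params in {"men": (55, 0.2), "women": (45, 0.6)}.items():
    X, y = make_group(2000, *params)
    models[name] = LogisticRegression().fit(X, y)

applicant = np.array([[47.0, 1.0]])  # modest income, largely informal earnings
for name, model in models.items():
    print(name, "model approval score:", model.predict_proba(applicant)[0, 1].round(3))
```

The claimed benefit is that signals common in women’s economic lives, such as informal income, stop being judged against a male-centric baseline; the drawback, discussed next, is that a binary split cannot capture intersecting identities.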

The biggest concern with implementing this model is its inability to account for the intersecting identities of specific communities. The examples offered to show that these models work are based on small sample sizes; implementing such a model on a larger scale would certainly need an intersectional approach. In India, for instance, understanding the intersection of caste and gender would be pivotal to ensuring that access is not inhibited by other factors and circumstances. Schemes enacted for the benefit of women entrepreneurs have not hit their mark, as women from marginalised socio-economic backgrounds are left behind. These biases are made worse for persons from other gender minorities and marginalised castes. Any model that is created must focus on ensuring maximum representation with the goal of preventing further ostracisation.

Where do we go from here?

The use of algorithms has spiked, leaving minorities with limited options; it takes away opportunities from many women and affects their financial independence. One complicating factor is that most algorithms fall within the ambit of proprietary information and are therefore protected from disclosure. It is argued that these institutions will have to take up the task of building fairer and more efficient systems on their own. Transparency, accountability and increased representation seem to be among the more viable solutions, along with a total overhaul of the system. Relying on the goodwill of profit-oriented and calculative institutions might not be wise, given that many women across India and other developing markets are being directly affected in the present. Consumers are also denied the right to know for certain which variables are used to arrive at a particular score and are thus unable to defend themselves against possibly unlawful considerations.

Most discourse focuses on the economics of it all, i.e., that women losing out on loans or having limited access to credit would affect a country’s economy and growth. It is asserted here that these changes should not be made on that basis alone. While this is a tall ask, these changes are essential because these communities are owed the chance to be on an equal footing, and changes must be made to that effect.
