Digital disparity in India's technology landscape and the resulting skew in social support distribution
In India, the integration of digital IDs, algorithms, and artificial intelligence (AI) in direct benefit transfers (DBTs) for welfare schemes holds the potential to revolutionize the delivery of essential services, offering increased transparency, reduced corruption, and enhanced efficiency [1][4][5]. However, this transformation also presents significant challenges, particularly for marginalized populations.
One of the key benefits of digital ID systems, such as Aadhaar, combined with algorithmic verification and UPI-enabled transfers, is improved targeting and reduced leakage. These technologies help ensure that subsidies and pensions reach their intended beneficiaries directly, cutting out intermediaries and preventing misuse, thereby saving government funds and improving delivery efficiency [1][4][5].
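As a purely hypothetical sketch of why intermediaries drop out of such a flow (none of the names below are real APIs or services), the logic is essentially: verify the claimed identity against a beneficiary registry, then credit the sanctioned amount directly to the bank account linked to that identity.

```python
# Hypothetical, simplified sketch of a direct benefit transfer (DBT) flow.
# Illustrative only: the registry, identifiers, and verification call are
# placeholders, not real government APIs.
from dataclasses import dataclass

@dataclass
class Beneficiary:
    digital_id: str          # an Aadhaar-style identifier (placeholder)
    linked_account: str      # bank account mapped to the ID
    entitled_amount: float   # sanctioned subsidy or pension for the cycle

REGISTRY = {
    "ID-001": Beneficiary("ID-001", "BANK-ACC-17", 2000.0),
}

def verify_identity(digital_id: str, auth_token: str) -> bool:
    # Placeholder for biometric/OTP verification against the ID database.
    return digital_id in REGISTRY and auth_token == "ok"

def disburse(digital_id: str, auth_token: str) -> str:
    if not verify_identity(digital_id, auth_token):
        return "verification failed: no transfer made"
    b = REGISTRY[digital_id]
    # The credit goes straight to the beneficiary's account; no local
    # middleman handles cash, which is where leakage historically occurred.
    return f"credited {b.entitled_amount} to {b.linked_account}"

print(disburse("ID-001", "ok"))
```

The same property that removes intermediaries, though, means that anyone who fails the verification step receives nothing, which is exactly the exclusion risk discussed below.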
However, the digital divide in India remains wide: according to a 2022 Oxfam report, only 31% of the rural population uses the internet, compared with 67% of the urban population. This disparity disproportionately affects marginalized groups, such as migrant women, persons with disabilities, informal sector workers, and economically vulnerable populations, who may contend with low digital literacy, poor connectivity, and biometric authentication failures [3][1].
Moreover, AI and algorithms can reflect and amplify social biases embedded in their training data, producing algorithmic bias and discrimination. This risks excluding eligible applicants or assessing their eligibility for welfare benefits unfairly, much as subtle prejudices have long shaped women's access to services [3].
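To make this concrete, here is a minimal, purely illustrative simulation on synthetic data (no real scheme, dataset, or model is implied): an automated rule that simply imitates historical decisions inherits the documentation gap already present in those decisions, and so denies eligible applicants from the disadvantaged group far more often.

```python
# Toy simulation (synthetic data, illustrative only): a rule "learned" from
# historical approvals reproduces the disadvantage baked into that history.
# Group "B" stands in for applicants with poor connectivity whose digital
# records are more often incomplete.
import random

random.seed(42)

def make_applicant(group):
    # Assumption: group B applicants are more likely to have incomplete
    # records, through no fault of their own (connectivity, literacy).
    incomplete = random.random() < (0.40 if group == "B" else 0.10)
    truly_eligible = random.random() < 0.80  # eligibility independent of group
    return {"group": group, "incomplete": incomplete, "eligible": truly_eligible}

# Historical decisions: incomplete files were rejected outright, so the
# training labels already encode the documentation gap.
history = [make_applicant(g) for g in ("A", "B") for _ in range(5000)]
for a in history:
    a["approved"] = a["eligible"] and not a["incomplete"]

# A "model" that simply imitates history: reject if the file is incomplete.
def model(applicant):
    return not applicant["incomplete"]

# Evaluate on fresh applicants: how often are *eligible* people denied?
test = [make_applicant(g) for g in ("A", "B") for _ in range(5000)]
for group in ("A", "B"):
    eligible = [a for a in test if a["group"] == group and a["eligible"]]
    denied = sum(1 for a in eligible if not model(a))
    print(f"Group {group}: {denied / len(eligible):.0%} of eligible applicants denied")

# Expected output (approximately): Group A ~10% denied, Group B ~40% denied.
# The bias in the records becomes the bias of the automated rule.
```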
Large-scale digital ID systems handling sensitive personal data must comply with privacy laws such as India's Digital Personal Data Protection (DPDP) Act. Non-compliance can lead to misuse of personal data or loss of trust, further discouraging marginalized groups from enrolling in or using schemes [2].
Technical failures, data errors, or governance lapses can also exclude deserving beneficiaries or wrongly deny benefits, especially in welfare schemes linked to life-essential services like pensions, healthcare, or food security [1][3].
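As one illustration of what error mitigation could look like in practice, the hypothetical sketch below (not any real system's workflow or API) layers fallbacks so that a biometric mismatch alone never becomes a final denial: when digital checks fail, the case routes to a human rather than to an automatic rejection.

```python
# Hypothetical sketch of a layered verification flow with a human fallback.
# Function names and fields are illustrative assumptions, not a real API.
from enum import Enum

class Outcome(Enum):
    APPROVED = "approved"
    NEEDS_MANUAL_REVIEW = "needs_manual_review"

def biometric_match(applicant) -> bool:
    # Placeholder: in the field this can fail for worn fingerprints,
    # poor connectivity, or stale enrolment data.
    return applicant.get("fingerprint_ok", False)

def otp_match(applicant) -> bool:
    # Placeholder fallback: a one-time password to a registered mobile,
    # itself unavailable to people without phone access.
    return applicant.get("otp_ok", False)

def verify(applicant) -> Outcome:
    if biometric_match(applicant) or otp_match(applicant):
        return Outcome.APPROVED
    # Key design choice: exhausting digital checks routes the case to a
    # human reviewer instead of rejecting a possibly deserving beneficiary.
    return Outcome.NEEDS_MANUAL_REVIEW

if __name__ == "__main__":
    print(verify({"fingerprint_ok": False, "otp_ok": True}))   # Outcome.APPROVED
    print(verify({"fingerprint_ok": False, "otp_ok": False}))  # Outcome.NEEDS_MANUAL_REVIEW
```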
As we move forward, it is crucial to design digital ID and AI systems with inclusive safeguards, error mitigation, bias correction, privacy protection, and sensitivity to digital divides to avoid deepening inequalities for marginalized populations [1][2][3][4][5]. Future research could focus on how differential access to technology creates divides across different geographies in India.
Interviews should be conducted with those who have been left out of digitized platforms, and quantitative metrics should be revisited to account for people who remain invisible or uncounted in data on inequality. Biometric failures in systems like Aadhaar and the public distribution system (PDS) have had deadly consequences: in 2018, seven of twelve identified starvation deaths were reported as related to Aadhaar [6].
Inequality is being learned by machines and normalized through code. When inequality is written into algorithms, it must be resisted powerfully, precisely, and provocatively, in all its forms. The homeless, transgender persons, migrant workers, domestic help, and others must be captured in data on inequality.
The move towards direct benefit transfers is making things worse for those not recognized by the system. For instance, India's flagship rural employment guarantee scheme, MGNREGA, guarantees rural households 100 days of paid work a year, but workers often face delays in receiving their wages, delays attributed to digitized processes such as Aadhaar-based attendance, app-based worksite monitoring, and centralised fund releases [7].
The Aadhaar system, intended as the basis for effective and transparent delivery of several government welfare schemes in India, has in practice become more of a barrier than an enabler, particularly for women in the informal sector [8].
Digital knowledge and mapping platforms like Google Maps and Wikipedia reflect stark geographic inequalities, with significant under-representation of the Global South [9]. Resting on similarly uneven data, India's push towards data-driven governance impacts marginalized populations severely.
In conclusion, while digital IDs, AI, and algorithm-driven DBTs hold promise for enhancing welfare delivery in India, they must be designed with care and consideration to ensure they do not exacerbate existing inequalities.
- The integration of digital ID systems, AI, and technology into welfare schemes in India can improve targeting and reduce leakage, but it is essential to address the barriers that disproportionately affect marginalized groups, such as discriminatory algorithmic bias, gaps in digital literacy, and poor connectivity.
- As technology advances and AI becomes integrated into more aspects of governance in India, it is crucial to design digital ID and AI systems thoughtfully around the needs of marginalized populations, with privacy protection, inclusive safeguards, error mitigation, and sensitivity to digital divides, so that existing inequalities are not deepened further.