Technology is foundational and permeates every area of life, so a convergence between technology and human rights is inevitable. Although the Fourth Industrial Revolution, in the form of Artificial Intelligence, robotics, and automation, offers real utility to human rights, it also poses profound challenges to the human rights framework. There is a growing need to ensure the safety and security of people in a world of globalized surveillance, yet surveillance itself raises human rights concerns. AI can penetrate societal processes and, through its capacity to identify, classify, and discriminate, can affect a broad array of human rights. It has already created new forms of oppression that disproportionately affect the most vulnerable.
The Right to Equality and Non-Discrimination
The use of AI in the criminal justice system could lead to discrimination because of biases inherent in its algorithms. Recruitment of employees with the aid of AI-driven technology can likewise be harmful when the system is biased against a gender or a particular religion. For example, Amazon scrapped its AI recruitment tool after the company found that it was biased against women. This happens because AI systems are trained on data: biased data leads to biased algorithms, and biased algorithms lead to biased AI technology.
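The chain from biased data to biased decisions can be illustrated with a minimal, made-up sketch. The numbers, the `predict_hire` function, and the frequency-based "model" below are all hypothetical and deliberately naive; the point is only that a system which faithfully learns from skewed historical decisions reproduces the skew.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (gender, hired).
# The past decisions are skewed: equally qualified women were
# hired far less often than men.
history = (
    [("male", True)] * 90 + [("male", False)] * 10 +
    [("female", True)] * 40 + [("female", False)] * 60
)

# "Training": estimate P(hired | gender) from the data, the way a
# naive statistical model would.
counts = defaultdict(lambda: [0, 0])  # gender -> [hired, total]
for gender, hired in history:
    counts[gender][1] += 1
    if hired:
        counts[gender][0] += 1

def predict_hire(gender: str) -> bool:
    # Recommend hiring if the majority of past cases in this group were hired.
    hired, total = counts[gender]
    return hired / total >= 0.5

print(predict_hire("male"))    # True  - the model recommends hiring
print(predict_hire("female"))  # False - same pool, rejected
```

Nothing in the code mentions discrimination; the bias enters entirely through the training data, which is why audits of training data matter as much as audits of the algorithm itself.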
Right to Information and Freedom of Expression
The use of AI can infringe the right to freedom of speech and expression: round-the-clock surveillance creates a fear of being monitored and makes citizens less likely to exercise that freedom. Because AI systems are pervasive and invisible, and can identify and track behaviour, this chilling effect appears as self-censorship and altered behaviour in public spaces and private communications alike.
The Right to Privacy
The surveillance and collection of vast amounts of personal information and metadata, and the processing of such data with new analytical techniques, have major implications for the right to privacy. New technologies have spawned products and services that adapt to the particular preferences and characteristics of the individuals they interact with. This has created unprecedented demand for personal information, with unprecedented implications for the right to privacy. Where personal information is misused, the consequences can be grave: individuals can be influenced or manipulated by targeted information on digital platforms. AI-driven technologies are trained to access and analyse large sets of personal data, drawn from various digital platforms and at times without consent. Although the Supreme Court of India has recognized privacy as a fundamental right, there is as yet no legislation that protects an individual’s privacy and digital data.
Digital Initiatives and Issues in India
The Facebook Data Leak
In 2021, the personal data of over 533 million Facebook users was leaked globally. In a country-wise breakup, the breach reportedly included the personal information of some 6 million users in India. Because the leaked data dump includes phone numbers, full names, locations, email addresses, and other details, it can be used to commit fraud by impersonating a person.
Data Protection Bill, 2019
The primary concern is that the Bill invades citizens’ fundamental right to privacy.
Firstly, the Bill permits the Central Government to exempt any government agency from its requirements. Secondly, the government can process personal data without consent in the interest of the security of the state and public order; the personal data of any individual can also be processed to prevent, detect, and investigate any offence. Thirdly, parts of the Bill do not apply where data are processed for investigative processes, legal proceedings, domestic purposes, journalistic activities, or statistical and research purposes, and it includes partial exemptions for “manual processing by small entities”. Together these exemptions open up the possibility of mass surveillance, which goes against the fundamental right to privacy. The right to withdraw consent also appears to have been diluted: to withdraw consent from the processing of any personal data, one must provide a valid reason, and all legal consequences arising out of such withdrawal are to be borne by the individual. This may deter people from withdrawing consent and can lead to the exploitation of personal data.
Information Technology (Guidelines for Intermediaries and Digital Media Ethics Code) Rules, 2021 (Rules)
These Rules threaten end-to-end encryption. Two provisions are of particular concern: Rule 4(2), which mandates that, at the request of law enforcement agencies, platforms must be able to trace the ‘first originator’ of any message, and Rule 4(4), which pushes platforms towards proactive automated filtering of content. These provisions endanger the security of Indian internet users because they are incompatible with end-to-end encryption; they would substantially increase surveillance, promote automated filtering, and prompt a fragmentation of the Internet.
The Way Forward
The interests, integrity, and quality of life of human beings should be given priority in the creation, use, introduction, and development of artificial intelligence. Digital education should be built around the pillars of clarity, predictability, and transparency to address digital harms and to eliminate broad and vague definitions of the scope and jurisdiction of the regulating authorities. Alongside government and private entities, civil society organizations should also have a place in internet governance and in creating mechanisms that prioritise human rights in the technological era. Protection of human rights should be laid out explicitly as a foundation. Making specific commitments, rather than general ones, helps hold governments accountable for the commitments they make: where there are clear benchmarks, people can evaluate how well they are being achieved.
Samridhhi Mandawat, Consultant (Strategy)