systems must be compatible with existing legal principles, including technology-agnostic legal frameworks that apply to the operators or developers of technologies (e.g., common law liability in tort). Various jurisdictions are actively regulating machine learning technologies and their applications. The EU's GDPR provides safeguards against automated profiling. India's privacy jurisprudence has taken note of automated profiling (Justice KS Puttaswamy v UoI), and proposed legislation may also cover this aspect.
a number of ways. Institutionally: machine systems and their development or deployment are often shrouded in secrecy due to intellectual property obligations or other institutional forms of opacity. Technically: the specialist knowledge required to read code and statistical models can hinder the knowability of machine decisions. Inherently: machines do not 'learn' or reason like human beings, and mapping machine explanations to human explanations is a challenging task.
Literature indicates that work on 'explainable AI' has focussed on the problem of interpretability, i.e. the inner workings of models, in order to understand and improve the functioning of systems. While important, this understanding of explainability is limiting. In the cognitive and social sciences, meaningful explanations for decisions are counterfactual (if x, then y), social (conversational and selective), and contrastive (why x and not y). Explanations need to be developed in context: in different contexts, techniques for 'explaining' automated decisions should satisfy the values protected by legal norms. Some efforts have focussed on contrastive counterfactual explanations. These take the form of if-then statements which can be produced without opening the black box. The explanation is sought by looking for the smallest possible change (within a defined distance function or neighbourhood of a particular input or feature) that produces a pre-defined desired classification. E.g., for the rejection of a loan, the counterfactual produced could say: 'If your income were increased by Rs. 50,000, then you would have received the loan.'
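The counterfactual search described above can be sketched in a few lines of code. This is a minimal illustration only: the decision rule, feature names, and search parameters below are hypothetical toy stand-ins (a threshold on income minus debt), not any real lending model, and the single-feature linear search stands in for the distance-minimising optimisation used in practice.

```python
def approve_loan(income, debt):
    """Hypothetical black-box classifier: approve if income minus debt
    clears an assumed threshold of Rs. 5,00,000."""
    return income - debt >= 500_000

def counterfactual_income(income, debt, step=1_000, max_delta=1_000_000):
    """Find the smallest income increase (searched in `step` increments)
    that flips a rejection into an approval. Only the model's outputs are
    queried; its internals stay closed ('without opening the black box')."""
    if approve_loan(income, debt):
        return 0  # already approved; no change needed
    for delta in range(step, max_delta + 1, step):
        if approve_loan(income + delta, debt):
            return delta
    return None  # no counterfactual found within the search budget

# Produce the if-then statement for a rejected applicant.
delta = counterfactual_income(income=460_000, debt=10_000)
if delta is not None:
    print(f"If your income were increased by Rs. {delta}, "
          f"then you would have received the loan.")
```

Real systems search over many features under a distance function and may return several diverse counterfactuals, but the structure is the same: perturb inputs, query the model, and report the smallest change that reaches the desired output.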
are an instrumental means of achieving legal values. Explanations are required for democratic and open government, and for satisfying the constitutional values of fairness and non-arbitrariness.
judicial, and which require the use of discretion in applying facts to reach a decision, must adhere to the principles of natural justice. These principles form the core of due process requirements in administrative law and bureaucratic systems. Due process requires notice, disclosure of evidence, a hearing, and a reasoned order. A reasoned order is necessary to prevent arbitrary decisions, and to allow the subjects of decisions to challenge them or change their behaviour.
open and democratic government. The Right to Information is a fundamental constitutional right in India, enabled through the Right to Information Act, 2005: '"Information" means any material in any form, including records, documents, memos, e-mails, opinions, advices, press releases, circulars, orders, logbooks, contracts, reports, papers, samples, models, data material held in any electronic form and information relating to any private body which can be accessed by a public authority under any other law for the time being in force.' The opacity of machine systems used in Government for administrative purposes (e.g., for macroeconomic planning) undermines the right to information. Relevant information explaining the functionality of machine learning systems employed by public authorities must be made available to the public. The RTI Act mandates an audit trail for administrative decisions, as well as proactive disclosure of 'relevant facts' in formulating policies and of 'reasons' for administrative or quasi-judicial decisions. There are limitations to the RTI as it applies to algorithmic systems: Sections 8 and 9 restrict transparency to privilege intellectual property and privacy (with certain exceptions).
systems, techniques for interpretability 'without opening the black box' are an important development. Contrastive counterfactual explanations may satisfy some norms of due process, using perturbations of inputs and outputs to counterfactually determine the optimal change in input needed to reach a desired output. These may be particularly useful where the intention is to provide sufficient information for users to challenge and change system decisions, particularly decisions based on protected attributes. Code audits and third-party code reviews, including algorithmic impact assessments, for Government procurement of computer systems may be useful in introducing transparency and explainability under the RTI. Data protection law should incorporate sufficient due process standards for understanding and challenging machine decisions (including notice and the possibility of meaningful human intervention).
Burrell, Jenna. "How the Machine 'Thinks': Understanding Opacity in Machine Learning Algorithms." Big Data & Society 3.1 (2016): 2053951715622512.
Citron, Danielle Keats, and Frank Pasquale. "The Scored Society: Due Process for Automated Predictions." Wash. L. Rev. 89 (2014): 1.
Citron, Danielle Keats. "Technological Due Process." Wash. U. L. Rev. 85 (2007): 1249.
Miller, Tim. "Explanation in Artificial Intelligence: Insights from the Social Sciences." Artificial Intelligence 267 (2019): 1-38.
Mittelstadt, Brent, Chris Russell, and Sandra Wachter. "Explaining Explanations in AI." Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM, 2019.
Wachter, Sandra, Brent Mittelstadt, and Chris Russell. "Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR." Harv. J.L. & Tech. 31 (2017): 841.