
Machine Interpretability, Explainability and the Right to Information

I presented some preliminary thoughts on machine learning interpretability and explainability, due process, and the right to information in India at the Anthill Inside conference on November 23, 2019.

divijjoshi

November 23, 2019

Transcript

  1. Machine Interpretability, Explainability and the Right to Information

    divij joshi, Mozilla Tech Policy Fellow. All slides licensed under CC 4.0.
  2. Machine Learning in Consequential Decisions

    Machine learning is being used for classification tasks with social and legal implications, e.g. facial recognition, fraud analysis (taxation, insurance), credit scoring, and urban planning (traffic management).
  3. ‘AI’ Technologies are Not ‘Outside’ the Law

    Modern machine learning systems must be compatible with existing legal principles, including technology-agnostic legal systems which apply to the operators or developers of technologies (e.g. common law liabilities under tort). Various jurisdictions are actively regulating machine learning technologies and their applications. The European GDPR provides safeguards against automated profiling. India’s privacy jurisprudence has taken note of automated profiling (Justice K.S. Puttaswamy v Union of India), and proposed legislation may also cover this aspect.
  4. Forms of Opacity in Machine Learning

    Machine systems are ‘non-transparent’ in a number of ways:
    Institutionally: machine systems and their development or deployment are often shrouded in secrecy due to intellectual property obligations or other institutional forms of opacity.
    Technically: the specialist understanding of statistical modelling and computer science needed to read these systems hinders the knowability of machine decisions.
    Inherently: machines do not ‘learn’ or reason like human beings, and mapping machine explanations to human explanations is a challenging task.
  5. What are ‘Good’ Explanations?

    ‘Explanations’ have a variety of meanings. The literature indicates that work on ‘explainable AI’ has focussed on interpretability, i.e. the inner workings of models, in order to understand and improve the functioning of systems. While important, this understanding of explainability is limiting.
    In the cognitive and social sciences, meaningful explanations for decisions are counterfactual (if x → then y), social (conversational and selective), and contrastive (why x and not y). Explanations need to be developed in context: in different contexts, techniques for ‘explaining’ automated decisions should satisfy the values protected by legal norms.
    Some efforts have focussed on contrastive counterfactual explanations. These take the form of if-then statements which can be produced without opening the black box. The explanation is sought by looking for the smallest possible change (within a defined distance function or neighbourhood of a particular input or feature) which produces a pre-defined desired classification. E.g. for the rejection of a loan, the counterfactual produced could say: “If your income were increased by Rs. 50,000, then you would have received the loan.”
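    As a rough formalisation of this idea (following the counterfactual formulation of Wachter, Mittelstadt and Russell, cited in the references), the counterfactual x' can be framed as the solution to an optimisation problem, where f is the model, y' the desired output, d the distance function, and λ a weighting term:

    \arg\min_{x'} \max_{\lambda} \; \lambda \big( f(x') - y' \big)^2 + d(x, x')

    The first term pulls the model’s output toward the desired classification; the second keeps the counterfactual close to the original input, so the suggested change (the Rs. 50,000 income increase, say) stays as small as possible.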
  6. Transparency and Due Process for the Law of Machines

    Explanations are an instrumental means of achieving legal values. Explanations are required for democratic and open government. Explanations are required for satisfying constitutional values of fairness and non-arbitrariness.
  7. Due Process and Reasoned Decisions

    Administrative decisions which are quasi-judicial and require the use of discretion to apply facts to come to a decision must adhere to principles of natural justice. Principles of natural justice form the core of due process requirements in administrative law and bureaucratic systems. Due process requires notice, provision of evidence, a hearing, and a reasoned order. A reasoned order is necessary for preventing arbitrary decisions, and also allows the subjects of decisions to challenge them or change their behaviour.
  8. Open Government and the RTI

    Transparency is integral to an open and democratic government. The Right to Information is a fundamental constitutional right in India, enabled through the Right to Information Act, 2005:
    “‘Information’ means any material in any form, including Records, Documents, Memos, e-mails, Opinions, Advices, Press releases, Circulars, Orders, Logbooks, Contracts, Reports, Papers, Samples, Models, Data material held in any electronic form and information relating to any private body which can be accessed by a Public Authority under any other law for the time being in force.”
    The opacity of machine systems used in Government for administrative purposes (e.g. for macroeconomic planning) undermines the right to information. Relevant information explaining the functionality of machine learning systems employed by public authorities must be made available to the public. The RTI mandates an audit trail for administrative decisions and proactive disclosure of ‘relevant facts’ in formulating policies and of ‘reasons’ for administrative or quasi-judicial decisions.
    There are limitations to the RTI as it applies to algorithmic systems: Sections 8 and 9 restrict transparency to privilege intellectual property and privacy (with certain exceptions).
  9. Standards and Regulations for Explanations in AI

    Within machine learning systems, techniques for interpretability ‘without opening the black box’ are an important development. Contrastive counterfactual explanations may satisfy some norms of due process, using perturbations in inputs and outputs to counterfactually determine optimal changes in input for reaching a desired output. These may be particularly useful where the intention is to provide sufficient information for users to challenge and change system decisions, particularly where decisions are based on protected attributes.
    Code audits and third-party code reviews, including algorithmic impact assessments, for Government procurement of computer systems may be useful for introducing transparency and explainability under the RTI. Data protection law should incorporate sufficient due process standards for understanding and challenging machine decisions (including notice and the possibility of meaningful human intervention).
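    A minimal sketch of how such a perturbation-based counterfactual search might look in code, assuming a scikit-learn-style classifier with predict and predict_proba, integer class labels, and numeric features scaled to comparable ranges; the greedy coordinate search and the names find_counterfactual, step and max_iter are illustrative simplifications, not the method described in the talk or in Wachter et al.:

    # A minimal sketch, not a definitive implementation: greedy perturbation
    # search for a counterfactual against a scikit-learn-style classifier.
    import numpy as np

    def find_counterfactual(model, x, target_class, step=0.05, max_iter=200):
        """Perturb one numeric feature at a time until the model's prediction
        flips to target_class (an integer label), keeping each step small."""
        x_cf = np.asarray(x, dtype=float).copy()
        for _ in range(max_iter):
            if model.predict(x_cf.reshape(1, -1))[0] == target_class:
                return x_cf  # counterfactual found
            best, best_p = None, -np.inf
            # Try a small move in each direction on each feature; keep the
            # move that most raises the probability of the desired class.
            for i in range(x_cf.size):
                for delta in (-step, step):
                    cand = x_cf.copy()
                    cand[i] += delta
                    p = model.predict_proba(cand.reshape(1, -1))[0][target_class]
                    if p > best_p:
                        best, best_p = cand, p
            x_cf = best
        return None  # no counterfactual found within the search budget

    Comparing the returned counterfactual with the original input feature by feature yields the kind of if-then statement described in slide 5 (“if your income were Rs. 50,000 higher, you would have received the loan”) without inspecting the model’s internals.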
  10. References

    Burrell, Jenna. "How the machine ‘thinks’: Understanding opacity in machine learning algorithms." Big Data & Society 3.1 (2016): 2053951715622512.
    Citron, Danielle Keats, and Frank Pasquale. "The scored society: Due process for automated predictions." Wash. L. Rev. 89 (2014): 1.
    Citron, Danielle Keats. "Technological due process." Wash. U. L. Rev. 85 (2007): 1249.
    Miller, Tim. "Explanation in artificial intelligence: Insights from the social sciences." Artificial Intelligence 267 (2019): 1-38.
    Mittelstadt, Brent, Chris Russell, and Sandra Wachter. "Explaining explanations in AI." Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM, 2019.
    Wachter, Sandra, Brent Mittelstadt, and Chris Russell. "Counterfactual explanations without opening the black box: Automated decisions and the GDPR." Harv. J.L. & Tech. 31 (2017): 841.