Accountable Algorithms

Published May 17, 2017, 21:32
Important decisions about people are increasingly made by algorithms: votes are counted; voter rolls are purged; financial aid is awarded; taxpayers are chosen for audits; air travelers are selected for enhanced search; credit eligibility is determined. Citizens, and society as a whole, have an interest in making these processes more transparent. Yet the full basis for these decisions is rarely available to affected people: the algorithm, its inputs, or its implementation may be secret, or the process may not be precisely described. A person who suspects the process went wrong has little recourse. And an oversight authority that wants to ensure decisions are made according to an acceptable policy has little assurance that the proffered decision rules match the decisions actually rendered for real users.

I challenge the dominant position in the legal literature that transparency will solve these problems. Disclosure of source code and underlying data is often neither necessary (because convincing evidence can be provided in other ways, as I will show) nor sufficient (because analyzing source code and machine-learning models is difficult and often inconclusive) to demonstrate the fairness of a process. Furthermore, transparency may be undesirable, such as when it discloses private information or permits tax cheats or terrorists to game the systems that determine audits or security screening. Finally, transparency may not even be useful, as in the case of a lottery: the code is simple and can be published, but publishing it says nothing about whether any particular draw was honest.
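To make the lottery point concrete, here is a minimal sketch in Python, with illustrative names such as `commit` and `draw_winner` that do not come from the talk. What makes a draw checkable is a hash commitment to the random seed, published before the draw, so anyone can verify the outcome afterward; transparency about the (trivial) source code contributes nothing by itself.

```python
import hashlib
import os

def commit(seed: bytes) -> str:
    # Published before the draw; binds the operator to the seed without revealing it.
    return hashlib.sha256(seed).hexdigest()

def draw_winner(seed: bytes, entrants: list) -> str:
    # Deterministic selection derived from the committed seed.
    digest = hashlib.sha256(seed + b"draw").digest()
    return entrants[int.from_bytes(digest, "big") % len(entrants)]

def verify(commitment: str, seed: bytes, entrants: list, claimed: str) -> bool:
    # After the seed is revealed, anyone can re-run the draw and check it.
    return commit(seed) == commitment and draw_winner(seed, entrants) == claimed

seed = os.urandom(32)                     # operator's secret randomness
entrants = ["alice", "bob", "carol"]
c = commit(seed)                          # step 1: publish the commitment
winner = draw_winner(seed, entrants)      # step 2: run the draw
assert verify(c, seed, entrants, winner)  # step 3: public audit after the reveal
```

A deployed lottery would also need randomness the operator cannot grind or bias; this sketch omits that and shows only the commit-then-verify skeleton.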

Traditionally, computer science addresses these problems by demanding a specification of the desired behavior, which can then be enforced or verified. But this model is poorly suited to real-world oversight tasks, where the specification might be complicated or might not be known in advance. For example, laws are often ambiguous precisely because it would be politically (and practically) infeasible to give a precise specification of their meaning. Instead, people do their best to approximate what they believe the law will allow, and disputes about what is actually allowed are resolved after the fact via expensive investigation and adjudication (e.g., in a court or legislature). As a result, actual oversight, in which real decisions are reviewed for their correctness, fairness, or faithfulness to a rule, happens only rarely, if at all.

I present a novel approach that relates the tools of technology to the problem of overseeing decision-making processes. These methods use cryptography to guarantee the technical properties that can be proven, while producing the evidence a political, legal, or social oversight process needs to operate effectively. Further, the evidence these methods produce can help the subjects of decisions trust the integrity and fairness of the decision process. Specifically, these accountable algorithms make use of a novel zero-knowledge commit-and-prove protocol to produce evidence that supports meaningful after-the-fact oversight, consistent with the norm in law and policy. Accountable algorithms can attest to the valid operation of a decision policy even when all or part of that policy is kept secret.
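As a rough illustration of the commit-and-prove pattern (not the talk's actual protocol, which uses zero-knowledge proofs so the policy need never be revealed), consider this Python sketch, assuming a simple threshold policy and hypothetical names like `decide` and `oversee`: the agency commits to its policy before deciding any cases, and an authorized overseer later opens the commitment and replays decisions against it.

```python
import hashlib
import json
import os

def commit(policy: dict, nonce: bytes) -> str:
    # Binds the decision maker to a policy before any decisions are made.
    blob = json.dumps(policy, sort_keys=True).encode() + nonce
    return hashlib.sha256(blob).hexdigest()

def decide(policy: dict, applicant: dict) -> bool:
    # Applies the (possibly secret) policy to one subject.
    return applicant["score"] >= policy["threshold"]

def oversee(commitment: str, policy: dict, nonce: bytes,
            applicant: dict, recorded: bool) -> bool:
    # After-the-fact review: the opened policy must match the commitment,
    # and replaying it must reproduce the recorded decision.
    return commit(policy, nonce) == commitment and decide(policy, applicant) == recorded

policy = {"threshold": 700}              # kept secret during operation
nonce = os.urandom(32)
c = commit(policy, nonce)                # published before deciding cases
applicant = {"score": 712}
decision = decide(policy, applicant)     # decision issued to the subject
assert oversee(c, policy, nonce, applicant, decision)  # later oversight check
```

In the actual construction, the reveal to the overseer is replaced by a zero-knowledge proof that each recorded decision is consistent with the committed policy, so the policy can stay secret from everyone while every decision still carries verifiable evidence.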

See more on this video at microsoft.com/en-us/research/v...