
Regulation & Governance (2021)

doi:10.1111/rego.12391


The right to contest automated decisions under the General Data Protection Regulation: Beyond the so-called “right to explanation”

Emre Bayamlioglu

KU Leuven Centre for IT & IP Law (CITIP), Leuven, Belgium

Abstract

The right to contest automated decisions as provided by Article 22 of the General Data Protection Regulation (GDPR) is a due process provision with concrete transparency implications. Based on this, the paper at hand aims, first, to provide an interpretation of Article 22 and the right to contest (as the key provision in determining the contours of transparency in relation to automated decisions under the GDPR); second, to provide a systematic account of possible administrative, procedural, and technical mechanisms (transparency measures) that could be deployed for the purpose of contesting automated decisions; and third, to examine the compatibility of these mechanisms with the GDPR. Following the introduction, Part II starts with an analysis of the newly enacted right to contest solely automated decisions as provided under Article 22 of the GDPR. This part identifies the right to contest in Article 22 as the core remedy, with inherent transparency requirements which are foundational for due process. Setting the right to contest as the backbone of protection against the adverse effects of solely automated decisions, Part III focuses on certain key points and provisions under the GDPR, which are described as the 1st layer (human-intelligible) transparency. This part explores to what extent “information and access” rights (Articles 13, 14, and 15) could satisfy the transparency requirements for the purposes of contestation as explained in Part II. Next, Part IV briefly identifies the limits of 1st layer transparency, explaining how technical complexity together with competition- and integrity-related concerns renders human-level transparency either infeasible or legally impossible. In what follows, Part V conceptualizes a 2nd layer of transparency which consists of further administrative, procedural, and technical measures (i.e., design choices facilitating interpretability, institutional oversight, and algorithmic scrutiny). Finally, Part VI identifies four regulatory options, combining 1st and 2nd layer transparency measures to implement Article 22. The primary aim of the paper is to provide a systematic interpretation of Article 22 and examine how “the right to contest solely automated decisions” could help give meaning to the overall transparency provisions of the GDPR. With a view to transcending the current debates about the existence of a so-called right to an explanation, the paper develops an interdisciplinary approach, focusing on the specific transparency implications of the “right to contest” as a remedy of a procedural nature.

Keywords: algorithmic transparency, algorithmic regulation, automated decisions, GDPR.

[...]and there is some temptation to obey the computer. After all, if you follow the computer you are a little less responsible than if you made up your own mind. (Bateson 1987, p. 482)

  • 1. Introduction and outline

Increasing automation has been an important topic of concern even at the earliest stage of the debates about the legal, political, and economic impact of data practices in the digital realm. It was clear by the early 1970s that the resentment engendered by systems such as computerized billing would soon spill over onto more delicate domains of life. Concerns were expressed that automated data processing would impair system operators’ capacity to provide explanations about the results produced by the system and thus contribute to the “dehumanizing” image of computerization.

Correspondence: Emre Bayamlioglu, Sint-Michielsstraat 6, Box 3443, 3000 Leuven, Belgium. Email: emre.bayamlioglu@kuleuven.be

Conflict of interest: The author declares that there exists no conflict of interest and no relevant data to be made available.

Accepted for publication 6 February 2021.

Based on these concerns, the notion of transparency has long been regarded as a means to limit the risks and mitigate the harms arising from the opaque nature of data processing. Since the enactment of the Data Protection Directive (DPD) in 1995, the foundational idea underlying the EU data protection regime has been that the adverse effects of data processing may best be addressed by permitting individuals to learn about the data operations concerning them. Today, with the General Data Protection Regulation (GDPR), the European data protection regime may be considered the most extensive body of law aiming to regulate activities involving personal data. It not only maintains well-defined individual rights fleshing out the principle of transparency but also accommodates various tools and mechanisms for the implementation and enforcement of these rights.

With data-driven practices based on machine learning (ML) being the primary focus of the data protection reform which resulted in the GDPR, one of the novelties of the Regulation is the enhanced transparency scheme provided for solely automated decisions - in particular, the introduction of the right to human intervention and the right to contest in Article 22.1 Accordingly, the paper at hand deals with this specific type of transparency, namely “transparency” in the sense of interpretability for the purpose of contesting automated decisions. The aim is to determine to what extent the GDPR accommodates the practical implications of the “right to contest” and the ensuing transparency requirements.

Taking the right to contest as a due process provision, Part II starts with a systematic interpretation of Article 22, examining how the concepts of contestation, obtaining human intervention, and expressing one’s view should be understood and interrelated. Rather than a prolongation of the initial provision (Article 15 of the DPD), the right to contest is regarded as the backbone provision with a key role in determining the scope of algorithmic transparency under the GDPR. To fully lay out the transparency implications of the right to contest, this Part also addresses the question: what should be made transparent or known in order to render automated decisions interpretable and thus contestable on a normative basis? (Bayamlioglu 2018). The analysis inquires what interpreting the “algorithm” could mean for the purpose of contesting automated decisions - confirming that the transparency implications of the right to contest are too complex to be dealt with merely by addressing certain opacities or invisibilities.

Overall, Part II lays out the theoretical basis of the paper, approaching data processing and automated decision-making (ADM) as regulatory technologies which enable a form of “algorithmic regulation” (Yeung 2018).2 Such a techno-regulatory approach allows for a conceptualization of ADM and the surrounding transparency debate as a procedural problem or, put in other words, a due process problem.3 Therefore, instead of handling automated decisions through the narrow lens of discrimination, bias, or unfairness, this paper regards ADM systems as “procedural mechanisms” which produce legally challengeable consequences. Concepts like fairness, equality, or nondiscrimination - being mainly contextual and domain-dependent - can only address a fragment of the problem and thus cannot serve as a theoretical basis for the intended analysis. Moreover, misuse of these quasi-legal concepts (to give meaning to statistical results) runs the risk of technical solutionism, which will misinform policy-makers about the ease of incorporating transparency and accountability desiderata into ML-based systems (Cath 2018, pp. 3-4).

Having laid out the transparency implications of Article 22 as a general provision of due process, Parts III, IV, and V inquire to what extent the GDPR can accommodate the different conceptions of transparency inherent in the right to contest.4 The analysis is based on a twofold approach. That is, the “information and access” rights (Articles 13-15) and the safeguards (Article 22) are treated as complementary but distinct sets of remedies (as 1st and 2nd layer transparency).5 This twofold methodology is guided by the understanding that recognizing the distinct forms of opacity inherent to ADM systems is vital in developing (technical and nontechnical) solutions to address the risks arising from the impenetrable nature of ML (Burrell 2016, p. 2).

In what follows, Part III focuses on certain key principles and provisions under the GDPR, which we describe as the 1st layer (human-intelligible) transparency. It explores to what extent “information and access” rights in Articles 13, 14, and 15 of the GDPR could facilitate or improve the contestability of automated decisions as explained in Part II.

Part IV briefly identifies the limits of 1st layer transparency, explaining how technical constraints together with the competition- and integrity-related concerns (of the system developers and operators) render human-level transparency infeasible or legally impossible. Reflecting on both technical and economic limits, this Part offers an account of why the transparency requirements for contesting automated decisions cannot be limited to access, notification, or explanation in the conventional sense.

Having seen the limits of directly human-intelligible models based on disclosure and openness in the previous Part, Part V inquires what further solutions the GDPR could accommodate in terms of implementing different conceptions of transparency aiming at contestability. As the 2nd layer transparency, this Part systematizes various regulatory instruments and techniques under a threefold structure: (i) design choices facilitating interpretability; (ii) procedural and administrative measures; and (iii) software-based tools for algorithmic scrutiny.

Given that the problem lies with framing the optimum extent of transparency and the appropriate mode of implementation, Part VI offers regulatory options (implementation modalities) combining 1st and 2nd layer transparency with a view to implementing Article 22 without prejudice to the integrity of the systems or the legitimate interests of the stakeholders.

The final Part concludes that, despite the normative, organizational, and technical affordances explained throughout the paper, many gaps remain to be bridged between the right to contest as provided in the GDPR and its practical application in order to achieve the desired level of protection without hindering data-driven businesses and services. Accordingly, the conclusion points out the relevant research domains where further progress is required to construct a compliance scheme capable of balancing competing interests. Hence, the paper also serves as a conceptual framework for future research aiming to unravel sector- or domain-specific barriers to the implementation of the right to contest.

With a view to transcending the current debates surrounding the so-called right to an explanation, the paper conceptualizes ADM as a regulatory technology and focuses on the specific transparency implications of the “right to contest” as a remedy of a procedural nature. Building on the author’s former writings about the transparency implications of ADM (Bayamlioglu 2018), the main contribution of the paper lies in this procedural perspective - which enables an interpretation of Article 22 as a due process provision - followed by a systematic analysis of the possible implementation tools and modalities under the GDPR.

  • 2. Article 22 of the GDPR and the right to contest automated decisions

The principle laid out in Article 22, requiring that automated data-driven assessments cannot be the sole basis of decisions about data subjects, is unique to the EU data protection regime. Such a provision is generally not included among the US fair information practices or in the OECD guidelines preceding the 1995 DPD (Edwards & Veale 2017). Article 22 does not directly target personal data processing but a certain type of outcome, that is, decisions that are fully automated and that substantially affect individuals.6

Since the enactment of the DPD in 1995, the practical application and proper implementation of Article 15 (the precursor to Article 22 of the GDPR) has been of concern neither to the supervisory nor to the judicial authorities (Korff 2010). Although the provision was found intriguing and forward-looking, due to its complex nature - which makes individual enforcement difficult - it has been mostly overlooked and underused (Mendoza & Bygrave 2017). In practice, the compliance standards of the provision have remained at a de minimis level, reducing compliance to a mere formality. According to Zarsky, it is a rule which is rarely applied (Zarsky 2017, p. 1016).

At first glance, Article 22 of the GDPR may seem not to have brought much change in terms of wording. In this regard, the initial formulation of the provision in 1995 seems to have made it somewhat future-proof. However, as will be explained below, with the newly introduced safeguards (the right to human intervention and contestation), the provision now has an essential role in determining the scope of transparency for solely automated decisions under the GDPR.

  • 2.1. Decisions based solely on automated processing, with legal or similarly significant effects

The key provision of the GDPR on ADM, Article 22, applies to processes which are fully automated and which bring about legal or similarly significant effects for the data subject. Automated decisions which fail to meet the definition provided in Article 22(1) are not bound by the provision.

The application of Article 22 initially requires the existence of a “decision,” though neither the former DPD nor the GDPR provides any guidance as to what amounts to a decision. Bygrave suggests that the term “decision” should include similar concepts such as plans, suggestions, proposals, advice, or mappings of options which somehow have an effect on their maker such that she/he is likely to act upon them (Bygrave 2001).

Article 22(1) further requires that the decision be fully automated, that is, involving no human engagement. Because the level of human intervention needed to render a decision not fully automated is not clarified, many data controllers interpret the provision narrowly. As a result, a significant amount of data-driven practices may be kept out of the reach of the EU data protection regime simply by the nominal involvement of a human in the decision-making process.7 This requirement, which also existed in the DPD, has been widely used as a loophole by data controllers to derogate from the provisions on automated decisions. This has been despite the preparatory work of the DPD, which explicitly stated that one of the rationales behind Article 15 of the DPD was that human decision-makers might attach too much weight to the seemingly objective and incontrovertible character of sophisticated decision-making software - abdicating their own responsibilities.8

The scope of Article 22 is limited to decisions that produce legal or similarly significant effects. Legal effects may be described as all qualifications established by a legal norm, whether in the form of obligations, permissions, rights, or powers; in relation to one’s status, such as citizen, parent, spouse, or debtor; or relating to categories of things (e.g. moveable, negotiable instrument, public domain). The inclusion of the term similarly significant effects expands the scope of the provision to cover certain adverse decisions even if the outcome does not straightforwardly affect the data subjects’ legal status or rights.

Regarding the implementation of Article 22 in the EU, some member states have adopted a wider approach, such as Hungary, which covers all automated decisions prejudicial to the data subject, or France, where the specific legislation covers ADM producing any significant effect (Malgieri 2019).

  • 2.2. Derogations: Consent, contractual necessity, and mandatory laws

While Article 22(1) grants data subjects the right not to be subject to solely automated decision-making, the provision also contains certain exceptions (derogations) to the rule - subject to Article 22(4) on special categories of personal data.

One of the most important changes brought by the GDPR as compared to Article 15 of the DPD is the introduction of “explicit consent” in Article 22(2)(c) as one of the grounds which may be relied upon by controllers to carry out fully automated decisions. According to Mendoza and Bygrave (2017), the introduction of consent comes as an impairment to the essence of the provision, lowering the de facto level of protection (p. 96). Considering that consent may in practice be used to deprive data subjects of control over their data, the concerns about this new derogation - as a swift mechanism to carry on with automated decisions - are not without merit. Nonetheless, rather than serving as a backdoor to circumvent data protection rules, consent may equally be construed as leverage for transparency (Kaminski 2019). This is particularly the case where explicit and informed consent is taken as the initial step of the safeguards designed to render automated decisions contestable under Article 22(3). Furthermore, consent does not relieve the data controller of the duty to comply with general data protection principles such as fairness and proportionality provided in Article 6. Taking into account the complexity and subtlety of current ADM systems, the requirement of explicit consent inevitably entails some “explanation” to allow data subjects to make informed choices.9 The extent of communication necessary to render the data subject’s consent explicit, and thus valid, may also be taken as a benchmark to determine the minimum content of the notifications under Articles 13 and 14. Consent implemented as leverage for individualized transparency - but not as a carte blanche for ADM without encumbrance - could play a critical role in reinforcing the right to obtain human intervention and the right to contest.

Formation and performance of a contract (contractual necessity) is another derogation, provided in Article 22(2)(a). The prohibition on automated decisions shall not apply where an automated decision is necessary for entering into, or for the performance of, a contract between the data subject and the data controller. The derogation based on contractual necessity provides a broad field of play which is tempting for abuse and creative compliance. The extent to which automated decisions are necessary in a contractual context is an issue that requires consideration of the mutual benefits and expectations of the parties. For instance, increasing the efficiency of the system - as a general argument - cannot be regarded as necessity, for this is simply what makes data processing more invasive (Guinchard 2017, p. 12).

Article 22(2)(b) lays out another derogation, providing that data subjects may be deprived of the safeguards in Article 22(3) where processing is mandated by Union or Member State law. Despite the reference to suitable measures to safeguard the data subject’s rights, freedoms, and legitimate interests, this exclusion is likely to create discrepancies in contestability standards between administrative decisions based on public law and ADM relying upon consent or contractual necessity (Masse & Lemoine 2019, p. 9). Confirming this, some member states have already implemented the derogation in a way akin to a blanket exemption, permitting ADM as a default practice for public institutions (Malgieri 2019).

  • 2.3. Safeguards against automated decisions and the right to contest

    • 2.3.1. Safeguards in Article 22(3) in general

Under Article 22(3), where the exemptions based on contractual necessity (Article 22(2)(a)) or consent (Article 22(2)(c)) take effect, the data controller is obliged to implement measures to safeguard the data subject’s rights, freedoms, and legitimate interests. In principle, these measures should at a minimum contain a fair amount of human intervention so that data subjects may express their views and effectively contest automated decisions. Before the GDPR, the DPD only spoke of arrangements allowing data subjects to put forward their point of view. The Regulation has improved this position by formulating safeguards providing for human intervention and contestation.

Article 22(3) reads as:

In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.

Although frequently commented upon and explored in scholarly writing, little attention seems to have been paid to a coherent and systematic interpretation of the provision, in particular how the right to obtain human intervention, to express one’s views, and to contest the decision could practically be implemented. They are usually treated as rights or remedies of equal footing (alternatives to each other).
