
Algorithmic Accountability: A Legal and Economic Framework

Robert P. Bartlett, III, Adair Morse, Nancy Wallace, and Richard Stanton

Abstract

Despite the potential for machine learning and artificial intelligence to reduce face-to-face bias in decision-making, a growing chorus of scholars and policymakers have recently voiced concerns that if left unchecked, algorithmic decision-making can also lead to unintentional discrimination against members of historically marginalized groups. These concerns are being expressed through Congressional subpoenas, regulatory investigations, and an increasing number of algorithmic accountability bills pending in both state legislatures and Congress. To date, however, prominent efforts to define policies whereby an algorithm can be considered accountable have tended to focus on output-oriented policies and interventions that either may facilitate illegitimate discrimination or involve fairness corrections unlikely to be legally valid.

We provide a workable definition of algorithmic accountability that is rooted in the caselaw addressing statistical discrimination in the context of Title VII of the Civil Rights Act of 1964. Using instruction from the burden-shifting framework codified to implement Title VII, we formulate a simple statistical test to apply to the design and review of the inputs used in any algorithmic decision-making process. Application of the test, which we label the input accountability test, constitutes a legally viable, deployable tool that can prevent an algorithmic model from systematically penalizing members of protected groups who are otherwise qualified in a target characteristic of interest.

Algorithmic Accountability: A Legal and Economic Framework

Table of Contents

  • B. The Input Accountability Test (IAT) Versus Outcome-Oriented Approaches

  • C. The Input Accountability Test Versus the ‘Least Discriminatory Alternative’ Test

  • D. The Input Accountability Test Versus HUD’s Mere Predictive Test

Appendix

I. INTRODUCTION

In August 2019, Apple Inc. debuted its much-anticipated Apple Card, a no-fee, cash-rewards credit card “designed to help customers lead a healthier financial life.”3 Within weeks of its release, Twitter was abuzz with headlines that the card’s credit approval algorithm was systematically biased against women.4 Even Apple co-founder Steve Wozniak weighed in, tweeting that the card gave him a credit limit that was ten times higher than what it gave his wife, despite the couple sharing all their assets.5 In the days that followed, Goldman Sachs—Apple’s partner in designing the Apple Card—steadfastly defended the algorithm, insisting that “we have not and will not make decisions based on factors like gender.”6 Yet doubts persisted. By November, the New York State Department of Financial Services had announced an investigation into the card’s credit approval practices.7

Around that same time, buzz spread across the media about another algorithm, that of health insurer UnitedHealth.8 The algorithm was used to inform hospitals about patients’ level of sickness so that hospitals could more effectively allocate resources to the sickest patients. However, an article appearing in Science showed that because the company used cost of care as the metric for gauging sickness and because African-American patients historically incurred lower costs for the same illnesses and level of illness, the algorithm caused them to receive substandard care as compared to white patients.9

Despite the potential for algorithmic decision-making to eliminate face-to-face biases, these episodes provide vivid illustrations of the widespread concern that algorithms may nevertheless engage in objectionable discrimination.10 Indeed, a host of regulatory reforms have emerged to contend with this challenge. For example, New York City has enacted an algorithm accountability law, which creates a task force to recommend procedures for determining whether automated decisions by city agencies disproportionately impact protected groups.11 Likewise, the Washington State House of Representatives introduced an algorithm accountability bill, which would require the state’s chief information officer to assess whether any automated decision system used by a state agency “has a known bias, or is untested for bias.”12 Federally, the Algorithmic Accountability Act of 2019, which is currently pending in Congress, would require large companies to audit their algorithms for “risks that [they] may result in or contribute to inaccurate, unfair, biased, or discriminatory decisions impacting consumers.”13

Yet, a notable absence in these legislative efforts is a formal standard for courts or regulators to deploy in evaluating algorithmic decision-making, raising the fundamental question: What exactly does it mean for an algorithm to be accountable? The urgency of this question follows from the meteoric growth in algorithmic decision-making, spawned by the availability of unprecedented data on individuals and the accompanying rise in techniques in machine learning and artificial intelligence.14

In this Article, we provide an answer to the pressing question of what accountability is, and we put forward a workable test that regulators, courts, and data scientists can apply in examining whether an algorithmic decision-making process complies with long-standing antidiscrimination statutes and caselaw. Central to our framework is the recognition that, despite the novelty of artificial intelligence and machine learning, existing U.S. antidiscrimination law has long provided a workable definition of accountability dating back to Title VII of the Civil Rights Act of 1964.15

Title VII and the caselaw interpreting it define what it means for any decision-making process—whether human or machine—to be accountable under U.S. antidiscrimination law. At the core of this caselaw is the burden-shifting framework initially articulated by the Supreme Court in Griggs v. Duke Power Co.16 Under this framework, plaintiffs putting forth a claim of unintentional discrimination under Title VII must demonstrate that a particular decision-making practice (e.g., a hiring practice) lands disparately on members of a protected group.17 If successful, the framework then demands that the burden shift to the defendant to show that the practice is “consistent with business necessity.”18 If the defendant satisfies this requirement, the burden returns to the plaintiff to show that an equally valid and less discriminatory practice was available that the employer refused to use.19 The focus of Title VII is on discrimination in the workplace, but the analytical framework that emerged from the Title VII context now spans other domains and applies directly to the type of unintentional, statistical discrimination utilized in algorithmic decision-making.20

Despite the long tradition of applying this framework to cases of statistical discrimination, it is commonly violated in the context of evaluating the discriminatory impact of algorithmic decision-making. Instead, for many, the legality of any unintentional discrimination resulting from an algorithmic model is presumed to depend simply on the accuracy of the model—that is, the ability of the model to predict a characteristic of interest (e.g., productivity or credit risk) generally referred to as the model’s “target.”21 An especially prominent example of this approach appears in the Department of Housing and Urban Development’s 2019 proposed rule revising the application of the disparate impact framework under the Fair Housing Act (FHA) for algorithmic credit scoring.22 The proposed rule provides that, after a lender shows that the proxy variables used in an algorithm do not substitute for membership in a protected group, the lender may defeat a discrimination claim by showing that the model is “predictive of risk or other valid objective.”23 Yet this focus on predictive accuracy ignores how courts have applied the Griggs framework in the context of statistical discrimination.

To see why, consider the facts of the Supreme Court’s 1977 decision in Dothard v. Rawlinson.24 There, a prison system desired to hire job applicants who possessed a minimum level of strength to perform the job of a prison guard, but the prison could not directly observe which applicants satisfied this requirement.25 Consequently, the prison imposed a minimum height and weight requirement on the assumption that these observable characteristics were correlated with the requisite strength for the job.26 In so doing, the prison was thus engaging in statistical discrimination: It was basing its hiring decision on the statistical correlation between observable proxies (an applicant’s height and weight) and the unobservable variable of business necessity (an applicant’s job-required strength).

Because this procedure resulted in adverse hiring outcomes for female applicants, a class of female applicants brought suit under Title VII for gender discrimination.27 Deploying the burden-shifting framework, the Supreme Court first concluded that the plaintiffs satisfied the disparate outcome step,28 and it also concluded that the prison had effectively argued that hiring applicants with the requisite strength could constitute a business necessity.29 However, the Court ultimately held that the practice used to discern strength—relying on the proxy variables of height and weight—did not meet the “consistent with business necessity” criteria.30 Rather, absent evidence showing the precise relationship between the height and weight requirements to “the requisite amount of strength thought essential to good job performance,”31 height and weight were noisy estimates of strength that risked penalizing females over-and-above these variables’ relation to the prison’s business necessity goal. In other words, height and weight were likely to be biased estimates of required strength whose use by the prison risked systematically penalizing female applicants who were, in fact, qualified.
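In simple statistical notation (our gloss on the Court’s reasoning, not language from the opinion), the concern is that a proxy such as height decomposes into a portion explained by the legitimate target and an error term, and that the error term is systematically related to gender:

```latex
\[
\text{height}_i \;=\; \alpha + \beta\,\text{strength}_i + \varepsilon_i,
\qquad
\mathbb{E}\!\left[\varepsilon_i \mid \text{female}_i = 1\right] \;\neq\; \mathbb{E}\!\left[\varepsilon_i \mid \text{female}_i = 0\right].
\]
```

If the error term is, on average, lower for women than for men, a height cutoff screens out women who in fact possess the requisite strength; that is the systematic penalty the Court declined to accept absent evidence tying the cutoffs to the strength actually required for the job.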

The Court thus illustrated that in considering a case of statistical discrimination, the “consistent with business necessity” step requires the assessment of two distinct questions. First, is the unobservable “target” characteristic (e.g., requisite strength) one that can justify disparities in hiring outcomes across members of protected and unprotected groups? Second, even with a legitimate target variable, are the proxy “input” variables used to predict the target noisy estimates that are biased in a fashion that will systematically penalize members of a protected group who are otherwise qualified? In this regard, the Court’s holding echoes the long-standing prohibition against redlining in credit markets. A lender who engages in redlining refuses to lend to residents of a majority-minority neighborhood on the assumption that the average unobservable credit risk of its residents is higher than that of observably similar but non-minority neighborhoods.32 Yet while differences in creditworthiness can be a legitimate basis for racial or ethnic disparities to exist in lending under the FHA,33 courts have consistently held that the mere fact that one’s neighborhood is correlated with predicted credit risk is insufficient to justify redlining.34 By assuming that all residents of minority neighborhoods have low credit, redlining systematically penalizes minority borrowers who actually have high creditworthiness.

These two insights from Dothard—that statistical discrimination must be grounded in the search for a legitimate target variable and that the input proxy variables for the target cannot systematically discriminate against members of a protected group who are qualified in the target—remain as relevant in today’s world of algorithmic decision-making as they were in 1977. The primary task for courts, regulators, and data scientists is to adhere to them in the use of big data implementations of algorithmic decisions (e.g., in employment, performance assessment, credit, sentencing, insurance, medical treatment, college admissions, advertising, etc.).

Fortunately, the caselaw implementing the Title VII burden-shifting framework, viewed through basic principles of statistics, provides a way forward. This is our central contribution: We recast the logic that informs Dothard and courts’ attitude towards redlining into a formal statistical test that can be widely deployed in the context of algorithmic decision-making. We label it the Input Accountability Test (IAT).

As we show, the IAT provides a simple and direct diagnostic that a data scientist or regulator can apply to determine whether an algorithm is accountable under U.S. antidiscrimination principles. For instance, a statistician seeking to deploy the IAT could do so by turning to the same training data that she used to calibrate the predictive model of a target. In settings such as employment or lending where courts have explicitly articulated a legitimate business target (e.g., a job-required skill or creditworthiness),35 the first step would be determining that the target is, in fact, a business necessity variable. Second, taking a proxy variable (e.g., height) that her predictive model utilizes, she would next decompose the proxy’s variation across individuals into that which correlates with the target variable and an error component. Finally, she would test whether that error component remains correlated with the protected category (e.g., gender). If a proxy used to predict a legitimate target variable is unbiased with respect to a protected group, it will pass the IAT, even if the use of the proxy disparately impacts members of protected groups. In this fashion, the test provides a concrete method to harness the benefits of statistical discrimination with regard to predictive accuracy while avoiding the risk that it systematically penalizes members of a protected group who are, in fact, qualified in the target characteristic of interest.
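To make the procedure concrete, the following is a minimal sketch of the test in Python, assuming a training dataset with a proxy (e.g., height), a legitimate target (e.g., job-required strength), and a 0/1 protected-class indicator (e.g., female); the function name and implementation details are our illustrative choices, not code from the Article or its appendix.

```python
import numpy as np
import statsmodels.api as sm


def input_accountability_test(proxy, target, protected, alpha=0.05):
    """Sketch of the Input Accountability Test (IAT).

    proxy     : candidate input variable (e.g., height)
    target    : legitimate business-necessity variable (e.g., job-required strength)
    protected : 0/1 indicator of protected-group membership (e.g., female)
    Returns (passes, p_value): passes is True when the part of the proxy that is
    unexplained by the target shows no statistically significant relation to the
    protected characteristic.
    """
    proxy = np.asarray(proxy, dtype=float)
    target = np.asarray(target, dtype=float)
    protected = np.asarray(protected, dtype=float)

    # Step 1 -- whether the target itself is a legitimate business necessity --
    # is a legal judgment and is assumed to have been made before running the test.

    # Step 2: decompose the proxy into the portion explained by the target
    # and an error (residual) component.
    first_stage = sm.OLS(proxy, sm.add_constant(target)).fit()
    residual = first_stage.resid

    # Step 3: test whether the error component remains correlated with the
    # protected category.
    second_stage = sm.OLS(residual, sm.add_constant(protected)).fit()
    p_value = second_stage.pvalues[1]

    return p_value > alpha, p_value
```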

We provide an illustration of the IAT in the Dothard setting, not only to provide a clear depiction of the power of the test, but also to introduce several challenges in implementing it and suggested solutions. These challenges include multiple incarnations of measurement error in the target, as exemplified by UnitedHealth’s use of cost, rather than degree of illness, as its target, mentioned previously. These challenges also include understanding what “significantly correlated” means in our era of massive datasets. We offer an approach that may serve as a way forward. Beyond the illustration, we also provide a simulation of the test using a randomly constructed training dataset of 800 prison employees.
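To give a flavor of what such a simulation could look like, here is a toy construction of our own, building on the function sketched above; the data-generating process and parameter values are illustrative assumptions, not the Article’s actual simulation.

```python
rng = np.random.default_rng(0)
n = 800  # randomly constructed training dataset of 800 prison employees

female = rng.integers(0, 2, size=n)           # protected-class indicator
strength = rng.normal(100.0, 15.0, size=n)    # legitimate target: job-required strength
# Height tracks strength but is also systematically lower for women at any given
# strength level, so part of its variation unrelated to strength tracks gender.
height = 0.4 * strength - 4.0 * female + rng.normal(0.0, 2.0, size=n)

passes, p_value = input_accountability_test(height, strength, female)
print(f"IAT passed: {passes} (p-value on residual vs. gender = {p_value:.4f})")
# With this construction the proxy should fail the test, because the residual
# variation in height that the target cannot explain is still predicted by gender.
```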

Finally, we illustrate how the IAT can be deployed by courts, regulators, and data scientists. In addition to employment, we list a number of other sectors - including credit, parole determination, home insurance, school and scholarship selection, and tenant selection - where either caselaw or statutes have provided explicit instructions regarding what can constitute a legitimate business necessity target.36 We also discuss other domains, such as automobile insurance and health care, where claims of algorithmic discrimination have recently surfaced but where existing discrimination laws are less clear about whether liability can arise for unintentional discrimination. Businesses in these domains are thus left to self-regulate and have generally professed to adhere to non-discriminatory business necessity targets.37 For firms with an express target delineation (whether court-formalized or self-imposed), our IAT provides a tool to pre-test their models.

We highlight, however, that firm profit margins and legitimate business necessity targets can easily be confounded in the design of machine learning algorithms, especially in the form of exploiting consumer demand elasticities (e.g., profiling consumer shopping behavior).38 In lending, for instance, courts have repeatedly held that creditworthiness is the sole business necessity target that can justify outcomes that differ across protected and unprotected groups.39 Yet, newly-advanced machine learning techniques make it possible to use alternative targets, such as a borrower’s proclivity for comparing loan products, that focus on a lender’s profit margins in addition to credit risk. In other work, we provide empirical evidence consistent with FinTech algorithms’ engaging in such profiling, with the result that minority borrowers face higher priced loans, holding constant the price impact of borrowers’ credit risk.40 As such, these findings illustrate how the incentive of firms to use shopping behavior as a target can lead to discrimination in lending—a practice that could be detected by application of the IAT.41 Profiling for shopping behavior is a subject applicable to many settings beyond the lending context and a leading topic for future research and discourse.

Our approach differs from other approaches to “algorithmic fairness” that focus solely on ensuring fair outcomes across protected and unprotected groups.42 As we show, by failing to distinguish disparities that arise from a biased proxy from those disparities that arise from the distribution of a legitimate target variable, these approaches can themselves run afoul of U.S. antidiscrimination law. In particular, following the Supreme Court’s 2009 decision in Ricci v. DeStefano,43 efforts to calibrate a decision-making process to equalize outcomes across members of protected and unprotected groups—regardless of whether individuals are qualified in a legitimate target of interest—are likely to be deemed impermissible intentional discrimination.44

This Article proceeds as follows. In Part 2, we begin by articulating a definition for algorithmic accountability that is at the core of our input accountability test. As we demonstrate there, our definition of algorithmic accountability is effectively a test for “unbiasedness,” which differs from various proposals for “algorithmic fairness” that are commonly found in the statistics and computer science literatures. Building on this definition of algorithmic accountability, Part 3 formally presents the IAT. The test is designed to provide a workable tool for data scientists and regulators to use to distinguish between legitimate and illegitimate discrimination. The test is directly responsive to the recent regulatory and legislative interest in understanding algorithmic accountability, while being consistent with long-standing U.S. antidiscrimination principles. Part 4 follows by exploring how the IAT can likewise be applied in other settings where algorithmic decision-making has come to play an increasingly important role. Part 5 concludes.

II. Accountability Under U.S. Antidiscrimination Law

A. Accountability and the Burden-Shifting Framework of Title VII

We ground our definition of accountability in the antidiscrimination principles of Title VII of the Civil Rights Act of 1964.45 Title VII, which focuses on the labor market, makes it “an unlawful employment practice for an employer (1) to ... discriminate against any individual with respect to his compensation, terms, conditions, or privileges of employment, because of such individual’s race, color, sex, or national origin; or (2) to limit, segregate, or classify his employees or applicants for employment in any way which would deprive or tend to deprive any individual of employment opportunities ... because of such individual’s race, color, religion, sex, or national origin.”46 Similar conceptualizations of antidiscrimination law were later written to apply to other settings, such as the prohibition of discrimination in mortgage lending under the FHA.47

In practice, Title VII has been interpreted as covering two forms of impermissible discrimination. The first and “the most easily understood type of discrimination”48 falls under the disparate-treatment theory of discrimination and requires that a plaintiff alleging discrimination prove “that an employer had a discriminatory motive for taking a job-related action.”49 Additionally, Title VII also covers practices which “in some cases, ... are not intended to discriminate but in fact have a disproportionately adverse effect on minorities.”50 These cases are usually brought under the disparate-impact theory of discrimination and allow for an employer to be liable for “facially neutral practices that, in fact, are ‘discriminatory in operation,’” even if unintentional.51

Critically, in cases where discrimination lacks an intentional motive, an employer can be liable only for disparate outcomes that are unjustified. The process of how disparities across members of protected and unprotected groups might be justified is articulated in the burden-shifting framework initially formulated by the Supreme Court in Griggs v. Duke Power Co.52 and subsequently codified by Congress in 1991.53 This delineation is central to the definition of accountability in today’s era of algorithms.

Under the burden-shifting framework, a plaintiff alleging discrimination under a claim without intentional motive bears the first burden. The plaintiff must identify a specific employment practice that causes “observed statistical disparities”54 across members of protected and unprotected groups.55 If the plaintiff succeeds in establishing this evidence, the burden shifts to the defendant.56 The defendant must then “demonstrate that the challenged practice is job related for the position in question and consistent with business necessity.”57 If the defendant satisfies this requirement, then “the burden shifts back to the plaintiff to show that an equally valid and less discriminatory practice was available that the employer refused to use.”58

This overview highlights two core ideas that inform what it means for a decision-making process to be accountable under U.S. antidiscrimination law. First, in the case of unintentional discrimination, disparate outcomes must be justified by reference to a legitimate “business necessity.”59 In the context of employment hiring, for instance, this is typically understood to be a job-related skill that is required for the position.60 Imagine, for instance, an employer who made all hiring decisions based on a direct measure of each applicant’s level of the job-related skill required for the position. Even if such a decision-making process results in disparate outcomes across minority and non-minority applicants, those disparities would be justified as nondiscriminatory with respect to a protected characteristic.

Second, in invalidating a decision-making process, U.S. antidiscrimination law does so because of invalid “inputs” rather than invalid “outputs” or results. This feature of U.S. antidiscrimination law is most evident in the case of disparate treatment claims involving the use by a decision-maker of a protected category in making a job-related decision. For instance, Section (m) of the 1991 Civil Rights Act states that “an unlawful employment practice is established when the complaining party demonstrates that race, color, religion, sex, or national origin was a motivating factor for any employment practice, even though other factors also motivated the practice.”61 However, this focus on inputs is also evident in cases alleging disparate impact, notwithstanding the doctrine’s initial requirement that a plaintiff allege disparate outcomes across members of protected and unprotected groups. Recall that even with evidence of disparate outcomes, an employer that seeks to defend against a claim of disparate impact discrimination must demonstrate why these outcomes were the result of a decision-making process based on legitimate business necessity factors (i.e., the disparate outcomes were the result of legitimate decision-making inputs).62 This focus on “inputs” underscores the broader policy objective of ensuring a decision-making process that is not discriminatory.

The practical challenge in implementing this antidiscrimination regime is that the critical decision-making input—an individual’s possession of a job-related skill—cannot be perfectly observed at the moment of a decision, inducing the decision-maker to turn to proxies for it. However, the foregoing discussion highlights that the objective in evaluating these proxy variables should be the same: ensuring that qualified minority applicants are not being systematically passed over for the job or promotion. As summarized by the Supreme Court in Ricci v. DeStefano, “[t]he purpose of Title VII ‘is to promote hiring on the basis of job qualifications, rather than on the basis of race or color.’”63

This objective, of course, is the basis for prohibiting the direct form of statistical discrimination famously examined by economists Kenneth Arrow64 and Edmund Phelps.65 In their models, an employer uses a job applicant’s race as a proxy for the applicant’s expected productivity because the employer assumes that the applicant possesses the average productivity of his or her race. If the employer also assumes the average productivity of minority applicants is lower than that of non-minorities (e.g., because of long-standing social and racial inequalities), this proxy will ensure that above-average productive minorities will systematically be passed over for the job despite being qualified for it. Because this practice creates a direct and obvious bias against minorities, it is typically policed under the disparate treatment theory of discrimination.66

Beyond this clearly unlawful form of statistical discrimination, a decision-maker can use statistical discrimination to incorporate not just the protected-class variable but also other proxy variables for the unobservable business-necessity attributes. For instance, an employer might seek to predict a job applicant’s productivity based on other observable characteristics that the employer believes are correlated with future productivity, such as an applicant’s level of education or an applicant’s performance on a personality or cognitive ability test.67 Indeed, it is the possibility of using data mining to discern new and unintuitive correlations between an individual’s observable characteristics and a target variable of interest (e.g., productivity or creditworthiness) that has contributed to the dramatic growth in algorithmic decision-making.68 The advent of data mining has meant that thousands of such proxy variables are sometimes used.69

As the UnitedHealth algorithm revealed, however, the use of these proxy variables can result in members of a protected class experiencing disparate outcomes. The problem arises from what researchers call “redundant encodings”—the fact that a proxy variable can be predictive of both a legitimate target variable and membership in a protected group.70 Moreover, there are social and economic factors that make one’s group membership correlated with virtually any observable proxy variable. As one proponent of predictive policing declared, “If you wanted to remove everything correlated with race, you couldn’t use anything. That’s the reality of life in America.”71 At the same time, certain proxy variables may predict membership in a protected group over-and-above their ability to predict a legitimate target variable; relying on these proxy variables therefore risks penalizing members of the protected group who are otherwise qualified in the legitimate target variable.72 In short, algorithmic accountability requires a method to limit the use of redundantly encoded proxy variables to those that are consistent with the antidiscrimination principles of Title VII of the Civil Rights Act and to prohibit the use of those that are not.73

Our central contribution is in developing accountability input criteria that speak directly to the process demanded by Title VII. Specifically, we use these accountability input criteria to develop a statistical test for whether a proxy variable (or each proxy variable in a set of proxy variables) is being used in a way that causes illegitimate statistical discrimination and should therefore not be used in an algorithmic model. Fundamentally, it is a test for “unbiasedness” designed to ensure that the use of a proxy input variable does not systematically penalize members of a protected group who are otherwise qualified with respect to a legitimate-business-necessity objective. We refer to this test as the input-accountability test. We illustrate the test and its application with a simple pre-employment screening exam designed to infer whether a job applicant possesses sufficient strength to perform a particular job. Before doing so, however, we differentiate the input-accountability test from other approaches to algorithmic accountability.

B. The Input Accountability Test Versus Outcome-Oriented Approaches

Our input-based approach differs significantly from that of other scholars who have advanced outcome-oriented approaches to algorithmic accountability. For instance, Talia Gillis and Jann Spiess have argued that the conventional focus in fair lending on restricting invalid inputs (such as a borrower’s race or ethnicity) is infeasible in the machine-learning context.74 The reason, according to Gillis and Spiess, is because a predictive model of default that excludes a borrower’s race or ethnicity can still penalize minority borrowers if one of the included variables (e.g., borrower education) is correlated with both default and race.75 Gillis and Spiess acknowledge the possibility that one could seek to exclude from the model some of these correlated variables on this basis, but they find this approach infeasible given that “a major challenge of this approach is the required articulation of the conditions under which exclusion of data inputs is necessary.”76 They therefore follow the burgeoning literature within computer science on “algorithmic fairness”75 and advocate evaluating the outcomes from an algorithm against some baseline criteria to determine whether the outcomes are fair.76 As examples, they suggest a regulator might simply examine whether loan prices differ across members of protected or unprotected groups, or a regulator might look at whether “similarly situated” borrowers from the protected and nonprotected groups were treated differently.77

Gillis and Spiess are, of course, correct that simply prohibiting an algorithm from considering a borrower’s race or ethnicity will not eliminate the risk that the algorithm will be biased against minority borrowers in a way that is unrelated to their creditworthiness (which is a legitimate business-necessity variable).78 Indeed, we share this concern about redundant encodings, and it motivates our empirical test. However, we part ways with these authors in that we do not view as insurmountable the challenge of articulating the conditions for excluding variables that are correlated with a protected classification, as we illustrate in Part 3.

Equally important, it is with an outcome-based approach rather than with an input-based approach where one encounters the greatest conceptual and practical challenges for algorithmic accountability. As Richard Berk and others have noted, efforts to make algorithmic outcomes “fair” pose the challenge that there are multiple definitions of fairness, and many of these

  • 75 For a summary, see Sam Corbett-Davies and Sharad Goel, The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning (arXiv.org, August 2018), available at https://arxiv.org/pdf/1808.00023.pdf. In particular, a common approach to algorithmic fairness within computer science is to evaluate the fairness of a predictive algorithm by use of a “confusion matrix.” Id. at 4. A confusion matrix is a cross-tabulation of actual outcomes by the predicted outcome. For instance, the confusion matrix for an algorithm that classified individuals as likely to default on a loan would appear as follows:

    |                        | Default Predicted                            | No Default Predicted                             |
    | Default Occurs         | # Correctly Classified as Defaulting = Ntp   | # Incorrectly Classified as Non-Defaulting = Nfn |
    |                        | (True Positives)                             | (False Negatives)                                |
    | Default Does Not Occur | # Incorrectly Classified as Defaulting = Nfp | # Correctly Classified as Non-Defaulting = Ntn   |
    |                        | (False Positives)                            | (True Negatives)                                 |

Using this table, one could then evaluate the fairness of the classifier by inquiring whether classification error is equal across members of protected and unprotected groups. Id. at 5. For example, one could use as a baseline fairness criterion a requirement that the classifier have the same false positive rate (i.e., Nfp / (Nfp + Ntn)) for minority borrowers as for non-minority borrowers. Alternatively, one could use as a baseline a requirement of treatment equality (e.g., the ratio of False Positives to False Negatives) across members of protected and unprotected groups.
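A minimal code sketch of these two baseline criteria, using hypothetical 0/1 arrays actual, predicted, and protected (our illustration of the cited criteria, not code from the review), might read:

```python
import numpy as np


def group_error_metrics(actual, predicted, group_mask):
    """False-positive rate and FP/FN ratio ('treatment') for one group.

    actual     : 1 if the borrower actually defaulted, else 0
    predicted  : 1 if the classifier predicted default, else 0
    group_mask : boolean array selecting the group's observations
    """
    a = np.asarray(actual)[group_mask]
    p = np.asarray(predicted)[group_mask]
    n_fp = np.sum((p == 1) & (a == 0))   # incorrectly classified as defaulting
    n_fn = np.sum((p == 0) & (a == 1))   # incorrectly classified as non-defaulting
    n_tn = np.sum((p == 0) & (a == 0))   # correctly classified as non-defaulting
    fpr = n_fp / (n_fp + n_tn) if (n_fp + n_tn) > 0 else float("nan")
    treatment = n_fp / n_fn if n_fn > 0 else float("nan")
    return fpr, treatment


# Equalized false-positive rates: compare fpr across the two groups.
# Treatment equality: compare the FP/FN ratios across the two groups.
# fpr_prot, te_prot = group_error_metrics(actual, predicted, protected == 1)
# fpr_unprot, te_unprot = group_error_metrics(actual, predicted, protected == 0)
```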
