What Is Algorithmic Bias? Dealing with Biases in Artificial Intelligence: How Can We Make Algorithms Fair and Just

This article explores the concept of algorithmic bias: an AI system's decision-making process can be influenced by bias in its data, leading to unfair outcomes. It discusses how to ensure algorithmic fairness through improved data collection and algorithm design.

What Role Do Algorithms Play in AI?

As artificial intelligence becomes more pervasive and entrenched in our lives, we are faced with challenging questions about how to ensure that the future of AI is fair and accountable. Algorithms are simply mathematical instructions that guide the functioning of an AI system. “When it comes to artificial intelligence, consider the algorithm a recipe”. Algorithms shape our lives daily: from our Netflix recommendations to our Facebook and Instagram advertisements, everything depends on an algorithm. They can generate flawed outcomes when the input dataset reflects personal or societal biases, or when it lacks relevant information, and in consequence produce biased output. Whether biased data is fed into an AI system intentionally or not, it can lead to discrimination by race, gender, and age, which imposes a sense of urgency on organizational decision-makers to pursue a resolution.
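
To make the "biased data in, biased output" point concrete, here is a minimal, hypothetical sketch (not from the original article; the data, feature names, and numbers are synthetic and purely illustrative) of a toy screening model that absorbs a historical skew from its training labels:

```python
# Illustrative only: synthetic "hiring" data in which historical decisions
# favoured one group at equal skill. A model trained on those labels learns
# the skew, even though skill is distributed identically across groups.
import random

from sklearn.linear_model import LogisticRegression

random.seed(0)

features, labels = [], []
for _ in range(2000):
    group = random.randint(0, 1)        # 0 / 1: two hypothetical demographic groups
    skill = random.gauss(0.0, 1.0)      # same distribution for both groups
    # Historical labels favoured group 1 regardless of skill (the injected bias).
    hired = 1 if (skill + 1.5 * group + random.gauss(0.0, 0.5)) > 1.0 else 0
    features.append([group, skill])
    labels.append(hired)

model = LogisticRegression().fit(features, labels)

# The weight on the group feature is large and positive: the model has learned
# the historical bias, not a real difference in ability.
print("coefficients [group, skill]:", model.coef_[0])
print("hire probability for equally skilled candidates:",
      model.predict_proba([[0, 0.0], [1, 0.0]])[:, 1])
```

Because the skew lives in the historical labels rather than in true ability, the model ranks equally skilled candidates differently across groups.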

AI has spread rapidly and widely across disciplines varying from criminal justice and healthcare to financial services and human resources. As AI continues to gain popularity, its ethics come under scrutiny. Major corporations such as Google, Facebook, Microsoft, Amazon, IBM, and Apple have all suffered tangible and intangible losses because of algorithmic bias in their AI systems. Google's algorithm has been accused of underrepresenting women in job-related image searches. Amazon's recruiting algorithm revealed a bias against certain demographics, specifically women. In light of these controversies, companies have begun to incorporate ethics into their AI philosophy. Microsoft has introduced “Responsible AI”, which is governed by a set of moral principles to make AI fair, reliable, and inclusive. Although an Ethical AI movement has begun, we still have a long way to go. To accelerate this ethical transformation of AI, every stakeholder has to play a role. But more importantly, the users of these tools (organizations, that is to say, employers, managers, and executives) need to seriously address solutions that reduce algorithmic bias and, eventually, eliminate it. By confronting the current reality of AI, employers will be compelled to think about the who, why, and how of changing AI for the better.

So, what can you do?

How Can Diversity & Inclusion Help?

Joy Buolamwini describes algorithmic bias as “the coded gaze”, and rightfully so, because of the power vested in the programmers who build an algorithm's decision model. To create more inclusive code, the people behind the code matter. Ensuring that a diverse pool of individuals is involved in designing and testing the algorithm could help detect unintentional biases or surface key information that was previously missing from the dataset. The baseline definition of a “diverse pool” involves selecting qualified people from different social and cultural backgrounds, which helps address the lack of gender and race diversity in tech. Therefore, employers must address the lack of diversity in the technology discipline by hiring data scientists, IT specialists, and computer programmers from varied geographical and cultural backgrounds. Hiring a diverse group of AI designers and testers would not only help exercise inclusive coding practices in the workplace but also help ensure that the input data reflects as few racial or gender biases as possible. Once hiring managers begin thinking about “who codes”, they will finally realize the value of diversity in dealing with biases in AI.

Why Invest in Educational Measures?

In addition to diversity, education can also play a contributory role in dealing with the biases present in AI systems. Ethics education in organizations can create cultural awareness among employees and provide exposure to different lifestyles and value systems within society. A deeper understanding and acceptance of diverse perspectives, behaviors, and attitudes would help all employees directly involved in the algorithm design process to consider bias in data that might have been overlooked in the past. Though this does not guarantee that algorithmic bias will be eliminated at its roots, it is certainly a step forward in reducing the occurrence of social biases in the dataset.

Salesforce’s online learning platform initiative, Trailhead, helps inform millions of people about “the technology of tomorrow”, ranging from blockchain to AI. Trailhead recently introduced a new module named Responsible Creation of Artificial Intelligence, which aims to educate everyone who might be directly involved in the AI development process on building and responsibly employing AI and on understanding its implications for consumers, businesses, and society as a whole. The module explores topics pertinent to detecting and eliminating bias from data and algorithms to advocate the ethical and efficacious use of intelligent technologies.

As employers and managers, you should incorporate ethics education such as the one described above as part of mandatory employee training so that employees working in the technology department of the organization can make culturally informed decisions regarding the detection and elimination of bias.

Is Transparent, Explainable AI the Answer?

To fully combat the social identity bias and discrimination attributable to AI systems, transparency is vital. The need for transparency gained ample attention after a husband and wife received vastly different credit limits on their Apple Card despite the wife having the higher credit score. Since the credit limit is determined by an algorithm, Apple faced significant backlash over the biased nature of its AI system. Words like “transparent”, “accurate”, “observable”, “responsible” and “fair” started trending, and researchers began demanding clarity on an algorithm's decision model. Transparency in the general sense would imply revealing the input, programming, and output of an algorithm, but because organizations usually regard algorithms as part of their intellectual property, they are hesitant to disclose the code behind them. But if you think about the impact transparency can have, customers would benefit more from a system that explains the reasons behind a particular decision than from one that merely reveals how it works internally.

For instance, if a customer is declined a bank loan or rejected for a job, then the loan approval or hiring algorithm should explain why the customer was denied the loan or rejected for the position. The algorithm could say that it came to this conclusion because the loan applicant had little savings or inadequate credit references, and then provide the minimum savings and number of references it considers for loan applications. Similarly, if a candidate is rejected for a position, the algorithm should say that the applicant was rejected because of his or her limited experience in that industry or the lack of a particular skill, and then go on to provide information on the minimum years of experience or the relevant skill required. In both cases, the algorithm is not only justifying its decision but also providing some guidance for improving future applications. This implies that employers and managers must be aware of what type of data is fed to the algorithmic model and how that data is used by the model to generate an outcome. It is also your responsibility to effectively communicate this understanding and explain the algorithm's decision to the customer so that they do not question the accuracy and reliability of the AI system. Through the process of making AI more explainable, data scientists and programmers are also given the opportunity to delve deeper and identify whether an algorithm's decision is the result of biased input data.
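
As a rough sketch of what such reason-giving could look like (hypothetical; the field names, thresholds, and wording are invented for illustration and are not any real lender's criteria), a decision function might return its reasons and the thresholds it applied alongside the verdict:

```python
# Hypothetical loan-decision helper that returns reasons and the thresholds
# used, in the spirit of the explanation described above. Values are invented.
MIN_SAVINGS = 5000           # assumed minimum savings considered
MIN_CREDIT_REFERENCES = 2    # assumed minimum number of credit references


def decide_loan(applicant: dict) -> dict:
    """Return an approve/decline decision plus human-readable reasons."""
    reasons = []
    if applicant.get("savings", 0) < MIN_SAVINGS:
        reasons.append(f"savings below the {MIN_SAVINGS} considered for this loan")
    if applicant.get("credit_references", 0) < MIN_CREDIT_REFERENCES:
        reasons.append(f"fewer than {MIN_CREDIT_REFERENCES} credit references on file")
    return {"approved": not reasons, "reasons": reasons or ["all criteria met"]}


print(decide_loan({"savings": 1200, "credit_references": 1}))
# -> declined, with both reasons and the thresholds the applicant can work toward
```

Returning the reasons together with the thresholds gives the applicant both a justification and a concrete target to improve against.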

Google integrated a new feature named the What-If Tool into TensorBoard, its machine learning visualization platform. With this feature, anyone can analyze a machine learning model and create explanations for its outcomes without requiring any coding from programmers. IBM has introduced new cloud-based AI tools that help show customers which factors led the algorithm to a conclusion. In addition, the tools can analyze algorithmic decisions in real time to identify implicit biases and provide recommendations for dealing with them. KPMG has also begun experimenting with explainability tools developed in-house to better understand the decision-making process of an algorithm and provide customers with satisfactory explanations of the decisions concerning them. Lastly, Bank of America and Capital One are in the midst of developing AI algorithms that can explain the rationale behind reaching a particular banking outcome or decision.
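
The snippet below is a small sketch of the kind of check such tools automate (the data is made up, and the four-fifths threshold is only a common rule of thumb, not something the article or these vendors prescribe): compare selection rates across groups and flag a disparity for human review.

```python
# Hypothetical bias check: compute per-group selection rates from a stream of
# (group, approved) decisions and flag a possible disparity for review.
from collections import defaultdict


def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}


decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 35 + [("B", False)] * 65)

rates = selection_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates, "disparate-impact ratio:", round(ratio, 2))
if ratio < 0.8:  # "four-fifths" rule of thumb; the threshold is a policy choice
    print("possible adverse impact: review the model and its input data")
```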

To further assign accountability to AI systems, an AI platform called the Grace Platform provides organizations with the opportunity to remodel their AI into a transparent, explainable, and ethical form. It provides technology companies and larger organizations with data monitoring, algorithm traceability, and services related to model training and development. With the help of Grace, employers can ensure that their AI decision models are meticulously studied and examined to recognize any flaws relating to personal or societal biases being reflected in the system.

Such efforts encourage organizational leaders and managers to take a more moral stance on their use of AI and to maintain compliance with ethical standards in the long term. By defining transparency through the scope of explainable outcomes, employers directly address the customer's stake in AI and ensure that customers feel well informed about decisions made by an algorithm. Ultimately, the power of diversity, education, and transparency will unveil innumerable benefits for every stakeholder and pave the way toward a bias-free AI.

Looking Forward

Such a complicated issue indeed demands multiple solutions, and though implementing all of these changes can be a time-consuming process, it is essential to work toward an ethical future for intelligent technologies. So build diverse engineering and programming teams, seek out ethics education, learn more about detecting and removing bias, and create transparent and explainable algorithms. AI is here to stay, so we have to ensure its benefits are not reaped at the expense of society's sense of fairness, integrity, and equality. In the words of Osonde Osoba, “if you want to build a better, fairer society we need AI systems that reflect and amplify the better parts of our nature”.

Translated from: https://medium.com/swlh/dealing-with-biases-in-artificial-intelligence-how-can-we-make-algorithms-fair-and-just-dca921a68735
