
What to account for when accounting for algorithms

A systematic literature review on algorithmic accountability

Maranke Wieringa

m.a.wieringa@uu.nl

Datafied Society Utrecht University

Utrecht, The Netherlands

ABSTRACT

As research on algorithms and their impact proliferates, so do calls for scrutiny and accountability of algorithms. A systematic review of the work that has been done in the field of 'algorithmic accountability' has so far been lacking. This contribution puts forth such a systematic review, following the PRISMA statement. 242 English articles from the period 2008 up to and including 2018 were collected and extracted from Web of Science and SCOPUS, using a recursive query design coupled with computational methods. The 242 articles were prioritized and ordered using affinity mapping, resulting in 93 'core articles' which are presented in this contribution. The recursive search strategy made it possible to look beyond the term 'algorithmic accountability'. That is, the query also included terms closely connected to the theme (e.g. ethics and AI, regulation of algorithms). This approach allows for a perspective not just from critical algorithm studies, but an interdisciplinary overview drawing on material from data studies to law, and from computer science to governance studies. To structure the material, Bovens's widely accepted definition of accountability serves as a focal point. The material is analyzed on the five points Bovens identified as integral to accountability: its arguments on (1) the actor, (2) the forum, (3) the relationship between the two, (4) the content and criteria of the account, and finally (5) the consequences which may result from the account. The review makes three contributions. First, an integration of accountability theory in the algorithmic accountability discussion. Second, a cross-sectoral overview of that same discussion viewed in light of accountability theory, which pays extra attention to accountability risks in algorithmic systems. Lastly, it provides a definition of algorithmic accountability based on accountability theory and algorithmic accountability literature.

CCS CONCEPTS

• Social and professional topics → Management of computing and information systems; Socio-technical systems; • General and reference; • Human-centered computing → Collaborative and social computing;

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

FAT* ’20, January 27-30, 2020, Barcelona, Spain

© 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM.

ACM ISBN 978-1-4503-6936-7/20/02...$15.00

KEYWORDS

Algorithmic accountability, algorithmic systems, data-driven governance, accountability theory

ACM Reference Format:

Maranke Wieringa. 2020. What to account for when accounting for algorithms: A systematic literature review on algorithmic accountability. In Conference on Fairness, Accountability, and Transparency (FAT* '20), January 27-30, 2020, Barcelona, Spain. ACM, New York, NY, USA, 18 pages.

1 INTRODUCTION

From aviation to recruiting: it seems no sector is unaffected by the implementation of computational systems. Such computational, or 'algorithmic', systems were once heralded as a way to remove human bias and to relieve human labor. Despite these aims, such systems were found capable of inflicting harms as well, ranging from minor to serious or even lethal, be they intentional or unintentional. Examples of drastic situations abound. In 2019, two of Boeing's planes were presumably downed by software [71]. Volkswagen designed their cars' software to automatically cheat emission testing [77]. Governmental systems initially designed to help now profile and discriminate against the poor [63]. Amazon created a recruiting system which systematically discriminated against women, as the training data was made up of historical hiring data in which males were vastly overrepresented [47]. The effects of these systems may be intentional (e.g. Volkswagen's emission fraud), but more often are unintended side effects, some of which may have far-reaching consequences such as the death of 346 Boeing passengers [73].

Central to such computational systems are algorithms: those sets of instructions fed to a computer to solve particular problems [70, p. 16]. As algorithms are increasingly applied within a rapidly expanding variety of fields and institutions, affecting our society in crucial ways, new ways to discern and track bias, presuppositions, and prejudices built into, or resulting from, algorithms are crucial. The assessment of algorithms in this manner has come to be known as 'algorithmic accountability'.

Algorithmic accountability has gained a lot of traction recently, due to the changed legislative and regulatory context of data practice, with the implementation of the General Data Protection Regulation (GDPR), several lawsuits (e.g. A.4), and the integration with open government initiatives [65, p. 1454]. Examples of such governmental initiatives abound: the city of New York [109] installed an Automated Decision Systems Task Force to evaluate algorithmic systems, and the Dutch Open Government Action Plan includes a segment on Open Algorithms [92]. Within civil society and academia, there are also many laudable initiatives [e.g. 3-5, 11, 50, 60, 108, 128, 138] advocating for more algorithmic accountability, yet a thorough and systematic definition of the term is lacking, and it has not been systematically embedded within the existing body of work on accountability.

Nevertheless, there have been numerous works over the past decades which touch upon the theme of algorithmic accountability, albeit using different terms and stemming from different disciplines [e.g. 83, 88, 94, 116, 123]. Thus, while the term may be new, the theme certainly stands in a much older tradition of, for instance, computational accountability [e.g. 66, 112] and literate programming [88], advocating many of the same points.1 Algorithmic accountability is thus not a new phenomenon, and accountability even less so. To avoid reinventing the wheel, we should look to these discussions and embed algorithmic accountability firmly within accountability theory.

This contribution presents the preliminary results of a systematic review on algorithmic accountability, following the PRISMA statement [95]. 242 English articles from the period 2008 up to and including 2018 were collected and extracted from Web of Science and SCOPUS, using a recursive query design (see appendix B for an explanation of the methodology) coupled with computational methods. The material was ordered and prioritized using affinity mapping, and the 93 'core articles' which were identified as the most important are presented in this contribution. This recursive search strategy made it possible to look beyond the term 'algorithmic accountability' and instead approach it as a theme. That is, the query also included terms closely connected to the theme (e.g. ethics and AI, regulation of algorithms). This approach allows for an interdisciplinary perspective which appreciates the multifaceted nature of algorithmic accountability. In order to structure the material, accountability theory is used as a focal point. This review makes three contributions: 1) an integration of accountability theory in the algorithmic accountability discussion, 2) a cross-sectoral overview of that same discussion viewed in light of accountability theory which pays extra attention to accountability risks in algorithmic systems, and 3) a definition of algorithmic accountability based on accountability theory and algorithmic accountability literature. In Appendix A the reader can find concrete situations which highlight some problems with accountability. These will be referred to in the corresponding sections of this paper.

2 ON ACCOUNTABILITY AND ALGORITHMIC SYSTEMS

2.0.1 Defining accountability. Making governmental conduct transparent is now viewed as 'good governance'. As such, accountability efforts can often be said to have a virtuous nature [25]. However, a side effect to such accountability efforts is the 'sunlight is the best disinfectant; electric light the most efficient policeman' [29] logic. In having to be transparent about one's work, one starts to behave better: here we see accountability used as a mechanism to facilitate better behavior [25]. Both logics can co-exist. Accountability as a term can be used in a broad and narrow sense. Typically, though, the term refers to what Bovens [24, p. 447] describes as: 'a relationship between an actor and a forum, in which the actor has an obligation to explain and to justify his or her conduct, the forum can pose questions and pass judgement, and the actor may face consequences'.2

Thus an 'actor' (be they an individual, a group, or an organization) is required to explain their actions before a particular audience, the 'forum'.3 This account is bound to particular criteria. The audience can then ask for clarifications and additional explanations, and subsequently decides whether the actor has displayed proper conduct, from which repercussions may or may not follow. What is denoted with algorithmic accountability is this kind of accountability relationship where the topic of explanation and/or justification is an algorithmic system. So what, then, is an algorithmic system?

2.0.2 Defining algorithmic systems. As noted above, algorithms are basically instructions fed to a computer [70, p. 16]. They are technical constructs that are simultaneously deeply social and cultural [125]. Appreciating this 'entanglement' [13, 133] of various perspectives and enactments [125] of algorithms, this contribution sees algorithms not as solely technical objects, but rather as socio-technical systems, which are embedded in culture(s) and can be viewed, used, and approached from different perspectives (e.g. legal, technological, cultural, social). This rich set of algorithmic 'multiples' [107, cited in 125] can enhance accountability rather than limit it. The interdisciplinary systematic literature review presented in the remainder of this contribution bundles knowledge and insight from a broad range of disciplines and appreciates the entanglements and multiples that are invariably a characteristic of algorithmic system interaction.

3 WHAT TO ACCOUNT FOR WHEN ACCOUNTING FOR ALGORITHMS?

This paper uses Bovens's widely accepted definition of accountability, as a relation between actor and forum, as a focal point to structure the 93 interdisciplinary articles. The material is analyzed on the five points Bovens identified as integral to accountability: its arguments on (1) the actor, (2) the forum, (3) the relationship between the two, (4) the content and criteria of the account, and finally (5) the consequences which may result from the account. Below, I will discuss the findings for each of these five points.

3.1 Actor

A first question would be who should be rendering the account, or who is responsible [e.g. 39, 44, 57, 89, 99, 101, 127, 142]? Aside from such a general specification, Martin [101] and Yu et al. [142] argue that one needs to specifically address two different questions. Who is responsible for the harm that the system may inflict when it is working correctly [101]? Who is responsible when it is working incorrectly [142]? These questions are often not readily answerable, as the organization that is using the algorithmic system need not be the developing party. In many cases organizations commission a third party to develop a system for them, which complicates the accountability relationship. When is the developer to be held accountable, and when should we call the organization commissioning and using the application to the stand [57, p. 62] (cf. A.3)?

3.1.1 Levels of actors. Answering these questions is far from straightforward. In any given situation one may distinguish different actors on different levels of organization, making a single actor hard to pinpoint [106]. For instance, we can take the individual developing or using the system as the actor, but on higher levels one may also hold the team, the department, or the organization as a whole accountable. In some instances the organization may use a system developed by a third party, which complicates the scheme even further (cf. A.3). In other words, locating the actor is often tough, and different contexts might even require different kinds of actors for the same subject.

Bovens [24] describes four types of accountability relations based on the level of the actor: individual accountability, hierarchical accountability, collective accountability, and corporate accountability. Individual accountability means that an individual's conduct is held to be their own; in other words, one is not shielded from investigation by their superiors or organization [24, p. 459]. Hierarchical accountability describes the situation in which the persons heading the organization, department or team are held accountable for that greater whole [24, p. 458]. Collective accountability rests on the idea that one can hold a member of a group or organization accountable for the whole of that organization, regardless of their function or standing [24, p. 458-459]. This kind of accountability relationship is rare in democratic contexts, as it is 'not sophisticated enough to do justice to the many differences that are important in the imputation of guilt, shame and blame' [24, p. 459]. We can speak of corporate accountability in situations where an organization as a non-human legal entity is held accountable [24, p. 458]. This is, for instance, the case where we speak of the 'data controller' [135] or the 'developing firm' [101].

Special attention needs to be given to cases in which a third party has, for instance, developed a given system for a particular organization, especially when the organization is a public institution. To illustrate, a private company may develop a fraud detection algorithm which scrutinizes people on benefits for a municipality [e.g. 131]. Martin [101] argues that in such situations, these third-party organizations become a voluntary part of the decision system, making them members of the community. This willful membership creates 'an obligation to respect the norms of the community as a member' [101]. This then raises the question: how can one make sure that a third party respects the norms and values of the context in which the system will be deployed [see also 69, 124]?

3.1.2 Roles of actors. Actors can also be distinguished by their roles. Arguably, the person drafting the specifications of the system will be put forth as an actor in different situations than a developer of the system or its user. Thus, roles can also be said to be a factor in determining the appropriate actor for particular situations. We can distinguish between three kinds of actor roles: decision makers, developers, and users.

Let us first look at decision makers, those who decide about the system, its specifications, and crucial factors. Coglianese and Lehr [41, p. 1216] note that it is important to consider 'who within an agency actually wields algorithm-specifying power'. There is much at stake in balancing which individual gets to make these decisions, precisely because higher-level employees (the authors specifically discuss public administration) are more accountable to others, and so cannot be unknowledgeable about critical details of the algorithm. Here, it seems, Coglianese and Lehr refer to hierarchical accountability. They continue to argue that introducing algorithmic systems may upend work processes in a fundamental way, especially when algorithms express value judgements quantitatively, as much is lost in that translation [41, p. 1218]. Who in an organization is allowed to decide how such value judgements will be structurally translated into a number? Coupled to this is the question of who gets to decide when an algorithm is 'good enough' at what it is supposed to do [69]. Who, for instance, gets to decide what acceptable error rates are [41, 90] (cf. A.2)?
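
To make the stakes of that last question concrete, the following minimal sketch (with hypothetical risk scores, labels, and threshold values, not drawn from the reviewed literature) shows how the choice of a decision threshold in, say, a fraud-detection classifier trades false positives (people wrongly flagged) against false negatives (fraud that goes undetected). Whoever sets that threshold is making exactly the kind of value judgement discussed here.

```python
def confusion_counts(scores, labels, threshold):
    """Count false positives and false negatives at a given decision threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

# Hypothetical risk scores and ground-truth labels (1 = case was actually fraudulent).
scores = [0.10, 0.40, 0.35, 0.80, 0.65, 0.20, 0.90, 0.55]
labels = [0,    0,    1,    1,    0,    0,    1,    1]

for threshold in (0.3, 0.5, 0.7):
    fp, fn = confusion_counts(scores, labels, threshold)
    print(f"threshold={threshold}: wrongly flagged={fp}, fraud missed={fn}")
```

On this toy data, lowering the threshold flags more innocent people while missing less fraud, and raising it does the reverse; which balance is 'acceptable' is not a technical fact but a choice someone must own and account for.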

Developers are often seen as the responsible party for such questions, as they are 'knowledgeable as to the design decisions and [are] in a unique position to inscribe the algorithm with the value-laden biases as well as roles and responsibilities of the algorithmic decision' [101]. Kraemer, Van Overveld, and Peterson [90, p. 251] are like-minded, as they note that since developers 'cannot avoid making ethical judgments about what is good and bad, (...) it is reasonable to maintain that software designers are morally responsible for the algorithms they design'. Thus, developers implicitly or explicitly make value judgments which are woven into the algorithmic system. Here, the logic is that the choices should be left to the user as much as possible. The more those choices are withheld from users, the heavier the accountability burden for the developing entity is [90, 101].

This does imply, however, that developers and/or designers should also have adequate sensitivity to ethical problems which may arise from the technology [130, p. 3]. Decisions about the balancing of error rates are often not part of specifications [90], which means developers have to be able to recognize and flag these ethical considerations before they can deliberate with stakeholders where needed and account for those choices. Another problem arises from what is termed the 'accountability gap' between 'the designer's control and algorithm's behavior' [106, p. 11]. Especially with learning algorithms, which are vastly complex, the developer or the developing team as a whole may not be able to control or predict the system's behavior adequately [82, p. 374].

Special attention has to be given to the users of the system, and their engagement with it. First of all, one may wonder: who is the user of the system [80, 103, 117]? Secondly, we may ask: what is the intensity of human involvement? In some cases implementing algorithmic systems comes at the loss of human involvement [e.g. 41]. In general, we can distinguish between three types of systems: human-in-the-loop, human-on-the-loop, and human-out-of-the-loop. This typology originally stems from AI warfare systems, but is productively applied in the context of algorithmic accountability [40, 45]. Human-in-the-loop systems can be said to augment human practice. Such systems make suggestions about possible actions, but no action will be undertaken without human consent. In other words, these are decision-guidance processes [141, p. 121]. Human-on-the-loop systems are monitored by human agents, but instead of the default being 'no, unless consent is given', this kind of system will proceed with its task unless halted by the human agent. Finally, there are human-out-of-the-loop systems where no human oversight is taking place at all. We then speak of automated decision-making processes [141, p. 121]. Arguably, these different kinds of involvement have consequences for the accounts which can be rendered by the user-as-actor. Thus, one aspect of an account of algorithms should be the measure of human involvement [37, 40, 41, 46, 62, 89].
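
The following minimal sketch (hypothetical names and interfaces, not drawn from the reviewed literature) illustrates how the three oversight modes differ in their default behaviour: an in-the-loop system acts only with explicit human consent, an on-the-loop system proceeds unless halted, and an out-of-the-loop system is not gated by a human at all.

```python
from enum import Enum, auto

class Oversight(Enum):
    IN_THE_LOOP = auto()      # decision guidance: act only with explicit human consent
    ON_THE_LOOP = auto()      # act by default; a human monitor may halt the action
    OUT_OF_THE_LOOP = auto()  # fully automated decision-making, no human gate

def proceeds(mode, human_approves=False, human_halts=False):
    """Return True if a proposed action is carried out under the given oversight mode."""
    if mode is Oversight.IN_THE_LOOP:
        return human_approves          # nothing happens without consent
    if mode is Oversight.ON_THE_LOOP:
        return not human_halts         # proceeds unless actively stopped
    return True                        # out of the loop: no human involvement at all

print(proceeds(Oversight.IN_THE_LOOP, human_approves=False))   # False
print(proceeds(Oversight.ON_THE_LOOP, human_halts=False))      # True
print(proceeds(Oversight.OUT_OF_THE_LOOP))                     # True
```

The default that applies when the human does nothing is what distinguishes the modes, and it is this default that shapes what a user-as-actor can plausibly be asked to account for.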

3.2 Forum

As one sets out to account for one's practice, it is important to consider to whom that account is directed [33, 86, 103, 135]. Kemper and Kolkman [86] argue that one cannot give account without the audience understanding the subject matter and being able to engage with the material in a critical way. Their argument for the 'critical audience' shows parallels with Bovens's articulation of accountability, in which 'the forum can pose questions and pass judgement' [24, p. 450].

What shape can this critical audience take, then? The EU's [64] General Data Protection Regulation (GDPR), hailed partly for its 'right to explanation', may point towards the individual citizen as the forum in the context of algorithmic accountability [135, p. 213-214]. In other cases, one may need to give account to one's peers, or the organization accounts to an auditor [32, p. 318]. Different fora can be interwoven, but each requires different kinds of explanations and justifications [cf. 21, 22, 142].

Bovens [24] describes five kinds of accountability relations based on the type of forum: political accountability (e.g. ministerial responsibility; cf. A.4), legal accountability (e.g. judges; cf. A.4), administrative accountability (e.g. auditors inspecting a system), professional accountability (e.g. insight by peers; cf. A.3), and social accountability (e.g. civil society).

Political accountability can be said to be the inverse and direct consequence of delegation from a political representative to civil servants [24, p. 455]. As tasks are delegated, the civil servant has to account for their conduct to their political superior.

What has changed is that not only do politicians delegate to civil servants now, but civil servants themselves start to delegate to and/or are replaced by algorithmic systems. This change is one that has been identified before by Bovens and Zouridis [27] in connection to the discretionary power of civil servants. Bovens and Zouridis note that civil servants' discretion can be heavily curtailed by ICT systems within the government. Building on Lipsky's [96] conception of the street-level bureaucrat, they make a distinction between street-level bureaucracy, screen-level bureaucracy, and system-level bureaucracy. Each of these types of bureaucracy allows for a different measure of discretionary power for civil servants. Whereas street-level bureaucrats have a great measure of discretion, screen-level bureaucrats' discretionary power is much more restricted. System-level bureaucracy allows for little to no discretionary power, as the system has replaced the civil servant entirely.

The different forms of bureaucracy are coupled to the way in which systems are playing a role within work processes. As Bovens and Zouridis note, the more decisive the system's outcome is, the less discretion the user has. Delegation to systems is thus not a neutral process, but one that has great consequences for the way in which cases are dealt with, border cases especially. Delegation to systems is important to consider for two other reasons as well. First, following Bovens's [24] logic of delegation and accountability, it would make sense to start to hold the algorithmic system accountable.

There are many efforts to make algorithms explainable and intelligible. Guidotti et al. [72], for instance, note that we can speak of four different types of efforts to make a ‘wicked’ algorithm intelligible [10]. These four approaches also correspond to efforts discussed in the field of explainable AI (XAI) [e.g. 2]:

(1) Explaining the model, or: 'global' interpretability;

(2) Explaining the outcome, or: 'local' interpretability;

(3) Inspecting the black box;

(4) Creating a 'transparent box'.

An explanation of the model is an account of the global logic of the system, whereas an explanation of the outcome is a local and, in the case of personalized decisions, personal one. Inspecting the black box can take many shapes, such as reconstructing how the black box works internally, and visualizing the results (cf. A.1). In other cases auditors may be used to scrutinize the system [32, p. 318]. Another approach would be to construct a 'transparent box' system which does not use opaque or implicit predictors, but rather explicit and visible ones.
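
To illustrate the difference between the first two approaches, the sketch below (a hypothetical 'transparent box' linear scorer with assumed feature names and weights) contrasts a global explanation, which describes the model's overall logic, with a local explanation, which accounts for one specific outcome.

```python
# Assumed weights of a toy linear risk scorer; feature names are hypothetical.
WEIGHTS = {"income": -0.4, "prior_flags": 1.2, "account_age": -0.2}

def score(case):
    """Compute the risk score as a weighted sum of (standardized) feature values."""
    return sum(WEIGHTS[f] * case[f] for f in WEIGHTS)

def explain_global():
    """Global interpretability: the model's overall logic, independent of any case."""
    return dict(sorted(WEIGHTS.items(), key=lambda kv: abs(kv[1]), reverse=True))

def explain_local(case):
    """Local interpretability: each feature's contribution to this particular outcome."""
    return {f: WEIGHTS[f] * case[f] for f in WEIGHTS}

case = {"income": 1.5, "prior_flags": 2.0, "account_age": 0.5}  # hypothetical case
print("score:", round(score(case), 2))
print("global explanation:", explain_global())
print("local explanation:", explain_local(case))
```

In this toy setting the global account says which features the model weighs most heavily in general, while the local account says why this individual case received its particular score; for opaque models the same two kinds of account have to be approximated rather than read off directly.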

Such technical transparency of the workings of the system can be helpful, but in itself should be considered insufficient for the present discussion, as accountability requires more than transparency. The transparent workings of a system do not tell you why this system was deemed 'good enough' at decision making, or why it was deemed desirable to begin with [102]. Nor does it tell us anything about its specifications or functions, nor who decided on these, nor why [41, p. 1177]. Whereas transparency is thus passive (i.e. 'see for yourself how it works'), accountability requires a more active and involved stance (i.e. 'let me tell you how it works, and why').

Second, whereas, in the context of government, civil servants have the flexibility to subtly shift the execution of their tasks in light of the present political context, systems do not have such sensitivity. Often, such systems are not updated to the contemporary political context, and thus 'lock in values' for the duration of their lifecycle (cf. A.1). Accounts on algorithms are thus key, as algorithmic systems are both 'instruments and outcome of governance' [84, following 85]. They are thus tools to implement particular governance strategies, but are also themselves a form of governance. Accountability is therefore crucial if we wish to avoid governance effects through obsolete values and choices embedded in algorithmic systems.

Legal accountability is usually 'based on specific responsibilities, formally or legally conferred upon authorities' [24, p. 456]. Many of the actions systems will undertake are not up for deliberation, as they are enshrined in law [62, p. 413]. There are thus already laws and regulations which apply to systems and can be leveraged to ensure compliance.

However, as Coglianese and Lehr [41, p. 1188] note, laws do not prescribe all aspects of algorithmic systems. For instance, there is no one set acceptable level of error. Rather, the acceptability strongly depends on the context and use of the system [90, 102] (cf. A.2). There is thus also a matter of discretion on the part of the system designers in how one operates within the gaps of the judicial code; in these cases, 'ethical guidance' needs to come from the human developer, decision maker, or user [62, p. 416-417].

This does raise some questions with regard to the ethical sensitivity of these human agents. As it stands, technical experts may not be adequately aware of the laws and legal system in which they operate [46]. On the side of the legal system, there may also be insufficient capacity to understand algorithmic systems. We see this in cases where lawyers and expert witnesses must be able to inform their evidence with the working and design of the system [33]. Algorithmic systems thus require new kinds of expertise from lawyers, judges, and legal practitioners in general [e.g. 75, p. 75], as they need to be able to assess the sometimes conflicting laws and public values within those systems, which themselves can be vastly complicated and rely on a large number of connected data sources. Meaningful insight into this interwoven socio-technical system [31] is needed to decide whether these values are properly balanced and adequately accounted for. Yet it can be particularly hard to decide on what may provide meaningful insight and what will result in 'opacity through transparency' [113, 118].

While the data gathering phase is quite well regulated, the analysis and use phases of data are underregulated [32], leaving judges to fend for themselves. What might prove to be crucial is, however, the socio-technical aspect of the system. For as has been discussed above, algorithmic systems ‘do not set their own objective functions nor are they completely outside human control. An algorithm, by its very definition, must have its parameters and uses specified by humans’ [41, p. 1177]. Eventually, someone has made choices about the system, and mapping these pivotal moments might help in resolving a trans
