Philosophy and Ethics of Artificial Intelligence: Key Exam Revision Notes

2 Intelligence

Russell and Norvig fall within the acting-rationally camp:
The question "What is AI?" is recast as "What is intelligence?", and intelligence is then identified with acting rationally.

Moravec’s paradox

The fact that low-level sensorimotor tasks seem easy to humans despite requiring enormous computational resources, whereas high-level reasoning requires comparatively little computation.

Ideal agents

  • Calculatively rational – programs that, if executed infinitely fast, would result in perfectly rational behaviour.
  • Bounded optimality – given a machine M (with its associated time and space constraints), what is the optimal program that, in an environment E, results in the agent acting so as to maximise expected utility?
  • AGI: the ability to accomplish any goal at least as well as a human.
  • Superintelligence: any intellect that greatly exceeds the cognitive performance of humans.

AIXI

  • AIXI is a mathematically rigorous framework for defining optimal intelligence and provides a definition of intelligence that generalises the Russell and Norvig definition.
  • AIXI turns the above definition into precise equations, which can then be studied mathematically.
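One standard way of writing AIXI's action choice, following Hutter's formulation (a sketch; the exact notation may differ from the lecture slides), is

$$
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[\, r_k + \cdots + r_m \,\big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

i.e. the agent chooses actions that maximise total future reward, averaged over every environment (every program $q$ for a universal Turing machine $U$ consistent with its history), with simpler environments weighted more heavily via $2^{-\ell(q)}$.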

Path to AGI

  • Singularity: the critical 'sea level' corresponds to the point at which machines can perform AI design; before it, humans improve machines, after it, machines improve machines.
  • Early AI adopted the logic/symbolic paradigm (rules applied to data), but this led to an AI winter.
  • Instead of high-level symbol manipulation, the field adopted the connectionist paradigm.
  • Backpropagation algorithms and advances in hardware and processing power fuelled the explosion in the use of neural networks.

Turing test

  • Target: behaviour-based and human-based, i.e. systems that act like humans.
    Passing the TT ⟺ intelligence (passing is both necessary and sufficient for intelligence).
  • Chauvinistic objection: an intelligent entity might not want, or be able, to pass the TT, so at most
    passing the TT ⟹ intelligence (sufficient but not necessary).
  • Blockhead: an agent controlled by a look-up tree that contains a programmed response for every discriminable input would pass the TT without being intelligent, which challenges even sufficiency; this objection can itself be countered by rejecting the step from conceivability to logical possibility.
  • Logically sufficient: passing the TT guarantees intelligence.
  • Logically necessary: intelligence requires passing the TT.

3 Consciousness

  • consciousness = subjective experience

3.1 Chinese room

The target is the claim that intelligence requires understanding: intelligence just is understanding, and without understanding there is no intelligence.
The thought experiment (summarised from Baidu Baike): a native English speaker who knows no Chinese is locked in a room with two slots. Inside the room is a rulebook, written in English, that specifies purely formally how Chinese symbols may be combined, together with a large stock of Chinese symbols. People outside keep passing in questions written in Chinese; the person inside follows the rulebook to assemble Chinese symbols into answers and passes the answers back out.
Searle argues that although the person in the room may fool those outside into believing they are conversing with a native Chinese speaker, the person understands no Chinese at all. In this setup, the people outside play the role of the programmer, the person inside plays the computer, and the rulebook plays the program. Just as the person in the room cannot come to understand Chinese through the rulebook, a computer cannot acquire understanding by running a program.

  • Dennett's reply: such a program is not realistically possible; passing a Chinese Turing test might require something like 100 billion lines of code, so running it by hand would take many lifetimes.
  • A further reply: the scenario does not obviously establish that the program still fails to understand Chinese (cf. the hamburger example).
  • A modern NLP perspective: the meaning of a word is all the ways the word can be used. On that view, the Chinese room arguably supports, rather than rules out, the possibility that machines have conscious experiences.
  • According to functionalism, if a system S1 is conscious and S2 has the same input-output behaviour as S1, then S2 is conscious too.

3.2 Functionalism

  • Treat minds as information-processing systems.
  • According to functionalism, it would be sufficient to declare an artificially intelligent machine conscious if it behaved as if it were conscious.

3.3 Is consciousness necessary for intelligence?

  • Consciousness is not needed for intelligence (the definitions of AGI, intelligence, and superintelligence make no mention of consciousness).

Theories of consciousness

  • Dualist theories: substance dualism ("I think, therefore I am") and property dualism.
  • Physicalist theories: consciousness can be explained in physical terms.

3.4 Philosophical zombies

A philosophical zombie (P-zombie), in the philosophy of mind, is a hypothetical being that is indistinguishable from an ordinary human yet is taken to lack conscious experience, sentience, feeling, and qualia (summarised from Baidu Baike).

  • For extended discussion, see the Zhihu article 「如何理解哲学僵尸」.
  • Philosophical zombies are conceivable.
  • Therefore philosophical zombies are possible.
  • If consciousness were a kind of physical property, zombies would be impossible.
  • So consciousness is not a kind of physical property.
  • Since a zombie that is physically identical to us but lacks consciousness is possible, consciousness is not a physical property, and property dualism is true.
  • Dualism is true; physicalism is false.
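The same steps can be laid out compactly in modal-logic notation (a sketch under the standard reading, with $P$ standing for our complete physical description and $C$ for being conscious):

$$
\text{Conceivable}(P \wedge \neg C) \;\Rightarrow\; \Diamond (P \wedge \neg C), \qquad
\text{Physicalism} \;\Rightarrow\; \Box (P \rightarrow C), \qquad
\therefore\; \Diamond (P \wedge \neg C) \;\Rightarrow\; \neg\,\text{Physicalism}
$$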

3.5 IIT's criteria for consciousness

  • IIT: Integrated Information Theory.
  • A system having information about itself (illustrated with a toy calculation below):
    the current state of a typical human brain tells us a lot about what the brain looked like a moment ago and what it will look like a moment from now.
  • Integration of information:
    a key difference between brains and today's computers is that a computer's transistors are each connected to only a few others, and the connections are feed-forward rather than recurrent.
  • Maximality of integration (I am less clear on this point).
  • IIT rejects the functionalist account of consciousness and supports the idea that consciousness is the way information feels.
  • IIT implies that digital computers cannot be conscious, since they lack integration.
  • Reject IIT (the position taken on the lecture slides).
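IIT's actual measure Φ is defined over all partitions of a system, which is beyond the scope of these notes, but the first criterion (a system having information about itself) can be illustrated with a toy calculation: the mutual information between a system's current and next state. The sketch below uses only that simplification; the transition matrices and the `predictive_information` helper are invented for illustration and are not part of IIT.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a Markov chain with transition matrix P."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return v / v.sum()

def predictive_information(P):
    """Mutual information I(X_t; X_{t+1}) in bits: how much the current state
    of the system says about its next state (a toy proxy, not IIT's phi)."""
    pi = stationary(P)
    joint = pi[:, None] * P                        # p(x_t, x_{t+1})
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    indep = px[:, None] * py[None, :]
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / indep[nz])))

# A state that strongly constrains its own past and future (recurrent-like dynamics)...
P_structured = np.array([[0.95, 0.05],
                         [0.05, 0.95]])
# ...versus a memoryless system whose next state ignores the current one.
P_random = np.array([[0.5, 0.5],
                     [0.5, 0.5]])

print(predictive_information(P_structured))  # about 0.71 bits
print(predictive_information(P_random))      # 0.0 bits
```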

4 Reasoning and communication

  • Early logic-based approaches to AI involved reasoning with symbolic formulae representing beliefs, desires, and goals, enabling reasoning about the way the world is.
    (This matches the multi-agent systems course material: multi-agent systems largely build on this early, logic-based style of AI.)

Logic in AI

  • GOFAI (Good Old-Fashioned Artificial Intelligence): in AI, GOFAI broadly refers to solving narrow-domain problems with classical logic-based methods, for example algorithms for board games.
  • $W \models \Delta_0$: the designer encodes axiomatic truths $\Delta_0$ about the world, where $W$ is a model of the world.
  • $KB_{\Delta_0} \vdash_{PS} \alpha \iff W \models \alpha$,
    where $KB_{\Delta_0}$ is the knowledge base and $PS$ is the proof system.
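As a toy illustration of the semantic side of this picture ($\models$), the sketch below checks whether a propositional knowledge base entails a formula by enumerating all truth assignments. The knowledge base and queries are made-up examples, and a real GOFAI system would use a proof system ($\vdash_{PS}$) rather than brute-force model enumeration.

```python
from itertools import product

def entails(kb, query, symbols):
    """KB |= query: the query is true in every truth assignment that satisfies KB."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(clause(model) for clause in kb) and not query(model):
            return False            # found a model of KB in which the query fails
    return True

# Toy knowledge base: penguin, penguin -> bird, penguin -> not flies.
symbols = ["penguin", "bird", "flies"]
kb = [
    lambda m: m["penguin"],
    lambda m: (not m["penguin"]) or m["bird"],
    lambda m: (not m["penguin"]) or (not m["flies"]),
]
print(entails(kb, lambda m: m["bird"], symbols))   # True
print(entails(kb, lambda m: m["flies"], symbols))  # False
```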

Monotonic and non-monotonic logics

  • Classical logic is monotonic: $KB_{\Delta_0} \vdash_{PS} \alpha \Longrightarrow KB_{\Delta_0 \cup \Delta_1} \vdash_{PS} \alpha$.
    That is, a conclusion once derived from the premises remains derivable even after new information has been added.

Machine learning versus GOFAI (strengths and weaknesses of machine learning)

  • Machine learning solved symbol grounding problems: instead of encoding hundreds of thousands of facts and rules, these are "automatically extracted" from large datasets.
  • Machine learning is less brittle.
  • However, machine learning depends heavily on large datasets.
  • Black-box problem: reasoning processes are opaque, and explanations of the reasoning are lacking.

Non-monotonic logics as argumentation

  • Counter-argument: $(\{p,\ p \rightarrow \neg f\},\ \neg f) \Longleftrightarrow (\{b,\ b \rightarrow f\},\ f)$,
    i.e. the two arguments attack each other (read $b$ as "bird", $f$ as "flies", $p$ as "penguin").

Dung’s theory of Argumentation

  • An argument framework is a pair $\langle Args, Attacks \rangle$.
  • Definition: a set $E \subseteq Args$ is a set of acceptable arguments
    iff:
  1. E is conflict-free: $\forall X, Y \in E,\ (X, Y) \notin Attacks$.
  2. Every argument that attacks a member of E is itself attacked by some member of E.
  3. (My own addition:) every argument is, in some situation, labelled 'in'; see the multi-agent systems revision notes for details.
  • Definition: $E \subseteq Args$ is a preferred extension iff it is a maximal conflict-free set of acceptable arguments.
    In the example argument graph (figure not reproduced), A and D do not conflict, so {A, D} is a preferred extension; likewise {B, D} is a preferred extension.
  • Grounded extension.

In short: label the unattacked arguments 'in', then iteratively find further arguments that must be 'in'; see the sketch below. This closely parallels the treatment in the multi-agent systems course.
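A minimal computational sketch of these definitions (the attack relation below is invented for illustration, since the original figure is not reproduced; the functions follow the definitions as stated above):

```python
from itertools import chain, combinations

def powerset(items):
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def conflict_free(E, attacks):
    return not any((x, y) in attacks for x in E for y in E)

def defends(E, x, attacks):
    """Every attacker of x is attacked by some member of E."""
    return all(any((z, y) in attacks for z in E)
               for (y, target) in attacks if target == x)

def acceptable(E, attacks):
    """A 'set of acceptable arguments': conflict-free and defending all its members."""
    return conflict_free(E, attacks) and all(defends(E, x, attacks) for x in E)

def preferred_extensions(args, attacks):
    """Maximal (by set inclusion) sets of acceptable arguments."""
    ok = [set(E) for E in powerset(args) if acceptable(E, attacks)]
    return [E for E in ok if not any(E < F for F in ok)]

def grounded_extension(args, attacks):
    """Start from the unattacked arguments and keep adding whatever is defended."""
    E = set()
    while True:
        new = {x for x in args if defends(E, x, attacks)}
        if new <= E:
            return E
        E |= new

args = {"A", "B", "C", "D"}
attacks = {("A", "B"), ("B", "A"), ("A", "C"), ("B", "C"), ("C", "D")}
print(preferred_extensions(args, attacks))  # [{'A', 'D'}, {'B', 'D'}] in some order
print(grounded_extension(args, attacks))    # set(): every argument here has an attacker
```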

Argument

(Figure omitted.)

5 Ethics and Morality

Deontology / consequentialism / virtue ethics

  • Deontology: treat others as you would like others to treat you. Objection: too vague.
  • Virtue ethics: different people often have different opinions on what constitutes a virtue.
  • Consequentialism: consequentialists can and do differ widely in terms of specifying the Good.
  • Utilitarianism: the impartial maximisation of happiness.
  • Objection: utilitarianism is too demanding.
  • AMA (Artificial Moral Agents):
    Ethical impact agents: any machine that can be evaluated for its ethical impact.
    Implicit ethical agents: machines designed not to have negative ethical effects.
    Explicit ethical agents: machines that can reason about the best action in ethical dilemmas (a toy sketch follows below).
    Full ethical agents: machines that can be said to have 'moral agency' and are able to justify their moral judgements.
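As a toy sketch of what an explicit ethical agent of the utilitarian kind might do, the code below picks the action that impartially maximises total utility. The dilemma, the affected parties, and the utility numbers are all invented for illustration; real moral reasoning is of course not this simple.

```python
from typing import Dict

def utilitarian_choice(outcomes: Dict[str, Dict[str, float]]) -> str:
    """Return the action whose summed utility over all affected parties is highest."""
    return max(outcomes, key=lambda action: sum(outcomes[action].values()))

# A trolley-style dilemma with made-up utilities for each affected party.
dilemma = {
    "divert_trolley": {"person_on_side_track": -100.0, "five_on_main_track": 500.0},
    "do_nothing":     {"person_on_side_track": 0.0,    "five_on_main_track": -500.0},
}
print(utilitarian_choice(dilemma))  # "divert_trolley" under these made-up numbers
```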

6 Algorithms

Transparency is not always desirable

  • Transparency: the ability to examine and explain how algorithms make decisions.
  • But sometimes transparency is not wanted: for example, explaining a decision may reveal private data about an individual (violating the EU General Data Protection Regulation, GDPR).

Procedural regularity

  • Tools for procedural regularity can provide assurance that the stated decision procedure was actually followed; for example:
  • Software verification: exhaustively testing all possible inputs to ensure that an invariant is never violated.
  • Cryptographic commitments: can be used to ensure that the same decision policy was used for each of many decisions.
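The sketch below shows how a simple hash-based commitment could support this: the decision maker publishes a commitment to the policy up front and later reveals the policy, so anyone can check that the same policy was in force throughout. The policy text and salt are illustrative only; a real scheme would also have to commit to the code and inputs of each decision.

```python
import hashlib
import secrets

def commit(policy: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, salt). Publish the commitment now, keep policy and salt."""
    salt = secrets.token_bytes(32)                       # hides the policy until reveal
    return hashlib.sha256(salt + policy).digest(), salt

def verify(commitment: bytes, policy: bytes, salt: bytes) -> bool:
    """Anyone can later check the revealed policy against the published commitment."""
    return hashlib.sha256(salt + policy).digest() == commitment

policy = b"approve the loan iff declared income > 3 * annual repayment"
c, salt = commit(policy)
# ... many individual decisions are made under the committed policy ...
print(verify(c, policy, salt))                        # True
print(verify(c, b"a quietly modified policy", salt))  # False
```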

ACM principles

  • ACM: Association for Computing Machinery.
  • Awareness
  • Access and redress
  • Accountability
  • Explanation
  • Data provenance
  • Auditability
  • Validation and testing

Justice for humans

  • If there is no self freely making a choice, then we should abandon the idea of retribution and focus on the other reasons for penalising criminals:
  • Protect society from the criminal, and protect the criminal from society.
  • Deter others from committing the crime.
  • Rehabilitate the criminal so that he does not commit the crime again.
  • For more serious crimes, it may be that if a criminal is not punished, society would break down because of a lack of faith and trust in the justice system; so there may be utilitarian arguments for keeping some retributive component.

What does this mean for AI?

  • Neither AI systems nor we ourselves will have distinct real selves or real free will.
  • Adopt a utilitarian approach to penalties, along the lines of the rationales listed above.

Algorithmic Bias

  • Algorithmic bias creates unfair outcomes, such as favouring one arbitrary group of users over others.
  • Example: statistically, women have fewer accidents than men, but a person's gender should not be relevant to deciding whether that person is a safe driver.
Types of algorithmic bias
  • Data bias: if the original training data is biased, the algorithm will perpetuate and potentially compound these biases.
  • Algorithm bias: for example, whatever a search engine places on its first page of results is privileged over content on later pages.
  • Use and interpretation bias: a person uses AI in the wrong context, or places too much trust in it.
  • Feedback bias: bias in the human judgements of whether the data or output is valid, which then feed back into the system.
  • Example ("Lock them up and throw away the key"): the COMPAS algorithm is used in the US to predict re-offending rates; the system was found to be racially biased, with black defendants given higher predicted re-offending risk.
  • Example ("Here's looking at you, white man"): facial recognition software is increasingly being used in law enforcement; the datasets contain far more white men than black women, so accuracy differs across groups.
Solutions to algorithmic bias
  • Pre-processing modifies the training data (see the sketch below);
  • in-processing modifies the algorithmic model;
  • post-processing removes discriminatory rules.
  • Discursive frameworks, self-assessment tools.
  • Documentation standards: this information would help users interrogate datasets and identify potential biases in datasets and models prior to and during processing.
  • Technical standards and certification programmes: a number of national and international organisations have started to publish technical standards which could help mitigate algorithmic bias in the design and deployment of AI systems.
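One concrete pre-processing idea is reweighing, in the spirit of Kamiran and Calders: give each training example a weight so that, after weighting, the protected attribute and the label look statistically independent. The tiny dataset below is invented purely for illustration.

```python
from collections import Counter

def reweigh(examples):
    """examples: list of (group, label) pairs. Returns a weight per (group, label)."""
    n = len(examples)
    group_counts = Counter(g for g, _ in examples)
    label_counts = Counter(y for _, y in examples)
    joint_counts = Counter(examples)
    # weight = P(group) * P(label) / P(group, label): expected frequency under
    # independence divided by the observed frequency.
    return {
        (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for (g, y) in joint_counts
    }

data = ([("men", "low_risk")] * 60 + [("men", "high_risk")] * 20
        + [("women", "low_risk")] * 10 + [("women", "high_risk")] * 10)
for key, w in sorted(reweigh(data).items()):
    print(key, round(w, 2))
# (group, label) pairs that are over-represented relative to independence get
# weights below 1, under-represented pairs get weights above 1, so a learner
# trained on the weighted data sees no spurious link between group and label.
```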

7 For good or bad?

LAWS Lethal Autonomous Weapons System

  • A weapon that can complete the entire engagement cycle on its own: finding targets, identifying them, and deciding whether to attack.
Arguments for banning LAWS
  • People near the intended target may be killed.
  • Development will lead to an AI arms race.
  • LAWS may become available on the black market and so may get into the hands of terrorists.
  • They can be hacked.
  • Technological challenges: how to distinguish civilians from soldiers, and how to handle complex situations.
  • Taking humans out of the loop: algorithms have no empathy or sympathy, so they cannot make humane, context-based kill or don't-kill decisions.
  • Accountability: there is no longer anyone to blame.
Arguments against banning LAWS
  • LAWS might cause less harm than conventional weapons.

AI In Medicine

  • Black-box issue: it is not possible to explain how a neural network identifies cancerous cells.
  • Automation bias: e.g. around IBM Watson.
  • Watson's role: to enhance the decision-making of healthcare professionals by giving them greater confidence.
  • Watson and liability: it has the potential to increase liability for healthcare professionals.
  • Watson's limitations: it might make a recommendation that is inconsistent with current clinical standards.

Asilomar principles

See the full list of the 23 Asilomar AI Principles.

8 Superintelligence

Kinds of superintelligence

  • Speed superintelligence: a system that can do all that a human intellect can do, but much faster.
  • Quality superintelligence: a system that is at least as fast as a human mind and vastly smarter.

Advantages of digital intelligence

  • speed of computational elements
  • internal communication speed
  • number of computational elements
  • storage capability
  • It is a feature of machine learning that unforeseen ways of achieving goals are often found.

9 AI and Human Society

Filter bubbles

  • A state of intellectual isolation: filtering algorithms selectively guess what information a user would like to see (a toy sketch follows below).
  • Our views become more rigid and less likely to be changed by debate.
  • Echo chamber: a small group of people with shared views repeat, overhear, and repeat again; the 'echo' grows louder until many people take it to be true.
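A toy sketch of how such a filtering algorithm can narrow what a user sees: the recommender below almost always shows the topic the user has engaged with most, so early preferences get locked in. The topics, click history, and exploration rate are all invented for illustration.

```python
import random

def filtered_feed(click_history, topics, rounds=20, explore=0.05):
    """Mostly recommend the user's historically most-clicked topic; with small
    probability `explore`, show something else."""
    counts = {t: click_history.count(t) for t in topics}
    shown = []
    for _ in range(rounds):
        if random.random() < explore:
            topic = random.choice(topics)
        else:
            topic = max(counts, key=counts.get)
        shown.append(topic)
        counts[topic] += 1      # the user engages with whatever they are shown
    return shown

random.seed(0)
topics = ["politics_A", "politics_B", "sports"]
print(filtered_feed(["politics_A", "politics_A", "sports"], topics))
# Almost every item shown is "politics_A": the feed has collapsed into a
# bubble around the user's initial preference.
```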

Phrases that may be useful for the essay

  • Lack of transparency/explanation.
  • Deontology: treat others as you would like others to treat you.
  • Our emotions/moral intuitions are generally sensible but not always correct.
  • Homo utilitus is an ideal to aim at: recalibrating our moral stance while acknowledging that we are human.
  • But when faced with complex and controversial moral dilemmas, utilitarianism is the answer.