Reading Notes on *Artificial Intelligence: A Guide to Intelligent Systems* (2nd Edition) — Part 7, Chapter 6

These are reading notes on Chapter 6 of *A Guide to Intelligent Systems*, which covers artificial neural networks: their adaptive mechanisms, the single-layer perceptron, multilayer feedforward networks, the back-propagation learning algorithm, and unsupervised learning via Hebbian learning and competitive learning. The chapter explains neuron structure, weight adjustment, and error back-propagation, and introduces the associative-memory capabilities of Hopfield networks and BAM.

1. introduction to knowledge-based intelligent systems(summary / questions for review / references)

2. rule-based expert systems

3. uncertainty management in rule-based expert systems

4. fuzzy expert systems

5. frame-based expert systems

6. artificial neural networks

7. evolutionary computation

8. hybrid intelligent systems

9. knowledge engineering and data mining


6. artificial neural networks

Machine learning involves adaptive mechanisms that enable computers to learn from experience, learn by example and learn by analogy. Learning capabilities can improve the performance of an intelligent system over time. The most popular approaches to machine learning are artificial neural networks and genetic algorithms. This chapter is dedicated to neural networks.

An artificial neural network consists of a number of very simple and highly interconnected processors, called neurons, which are analogous to the biological neurons in the brain. The neurons are connected by weighted links that pass signals from one neuron to another. Each link has a numerical weight associated with it. Weights are the basic means of long-term memory in ANNs; they express the strength, or importance, of each neuron input. A neural network "learns" through repeated adjustments of these weights.
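The idea of a neuron summing its weighted inputs and "learning" by repeatedly adjusting those weights can be sketched in a few lines of Python. This is a minimal illustration, not code from the book; the function names, the threshold value and the perceptron-style update rule are my own choices.

```python
# A single neuron: output = activation(weighted sum of inputs - threshold).
def neuron_output(inputs, weights, threshold):
    net = sum(x * w for x, w in zip(inputs, weights)) - threshold
    return 1 if net >= 0 else 0  # step activation

# "Learning" = repeated small adjustments of the weights.
# Here: a perceptron-style update nudging weights toward a desired output.
def adjust_weights(inputs, weights, threshold, desired, rate=0.1):
    error = desired - neuron_output(inputs, weights, threshold)
    return [w + rate * error * x for x, w in zip(inputs, weights)]

weights = [0.0, 0.0]
for _ in range(20):  # train on one example until the output matches
    weights = adjust_weights([1, 1], weights, threshold=0.5, desired=1)
print(neuron_output([1, 1], weights, 0.5))  # → 1 after training
```

After a few updates the weighted sum crosses the threshold and the error drops to zero, so the weights stop changing — the adjustment loop is the long-term memory being written.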

A neuron is the simplest computing element. Figure 6.3 shows the diagram of a neuron. The input signals can be raw data or the outputs of other neurons; the output signal can be either a final solution to the problem or an input to other neurons. The output is determined by applying a transfer, or activation, function to the net input. The most common choices of activation function are the step, sign, linear and sigmoid functions, which are illustrated in Figure 6.4. The step and sign activation functions, also called hard limit functions, are often used in decision-making neurons for classification and pattern-recognition tasks. Neurons with the linear activation function are often used for linear approximation. Neurons with the sigmoid activation function are often used in back-propagation networks.
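The four activation functions named above can be written out directly as functions of the net input (the sigmoid in its standard form 1/(1 + e^(-net)); the Python names here are mine):

```python
import math

def step(net):     # hard limiter: 1 if net >= 0, else 0
    return 1 if net >= 0 else 0

def sign(net):     # hard limiter: +1 if net >= 0, else -1
    return 1 if net >= 0 else -1

def linear(net):   # identity, used for linear approximation
    return net

def sigmoid(net):  # smooth squashing of net into (0, 1), used in back-propagation
    return 1.0 / (1.0 + math.exp(-net))

print(step(-0.2), sign(-0.2), linear(0.5), sigmoid(0.0))
# → 0 -1 0.5 0.5
```

The step and sign functions are discontinuous, which is why decision-making (classification) neurons use them, while back-propagation needs the differentiable sigmoid so that errors can be propagated through the derivative.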
