Introduction to the Standard Template Library (II)

Refinement

Input Iterator is, in fact, a rather weak concept: that is, it imposes very few requirements. An Input Iterator must support a subset of pointer arithmetic (it must be possible to increment an Input Iterator using prefix and postfix operator++), but need not support all operations of pointer arithmetic. This is sufficient for find, but some other algorithms require that their arguments satisfy additional requirements. Reverse, for example, must be able to decrement its arguments as well as increment them; it uses the expression --last. In terms of concepts, we say that reverse's arguments must be models of Bidirectional Iterator rather than Input Iterator.


The Bidirectional Iterator concept is very similar to the Input Iterator concept: it simply imposes some additional requirements. The types that are models of Bidirectional Iterator are a subset of the types that are models of Input Iterator: every type that is a model of Bidirectional Iterator is also a model of Input Iterator. int*, for example, is both a model of Bidirectional Iterator and a model of Input Iterator, but istream_iterator is only a model of Input Iterator: it does not conform to the more stringent Bidirectional Iterator requirements.


We describe the relationship between Input Iterator and Bidirectional Iterator by saying that Bidirectional Iterator is a refinement of Input Iterator. Refinement of concepts is very much like inheritance of C++ classes; the main reason we use a different word, instead of just calling it "inheritance", is to emphasize that refinement applies to concepts rather than to actual types.


There are actually three more iterator concepts in addition to the two that we have already discussed: the five iterator concepts are Output Iterator, Input Iterator, Forward Iterator, Bidirectional Iterator, and Random Access Iterator; Forward Iterator is a refinement of Input Iterator, Bidirectional Iterator is a refinement of Forward Iterator, and Random Access Iterator is a refinement of Bidirectional Iterator. (Output Iterator is related to the other four concepts, but it is not part of the hierarchy of refinement: it is not a refinement of any of the other iterator concepts, and none of the other iterator concepts are refinements of it.) The Iterator Overview has more information about iterators in general.


Container classes, like iterators, are organized into a hierarchy of concepts. All containers are models of the concept Container; more refined concepts, such as Sequence and Associative Container, describe specific types of containers.


Other parts of the STL

If you understand algorithms, iterators, and containers, then you understand almost everything there is to know about the STL. The STL does, however, include several other types of components.


First, the STL includes several utilities: very basic concepts and functions that are used in many different parts of the library. The concept Assignable, for example, describes types that have assignment operators and copy constructors; almost all STL classes are models of Assignable, and almost all STL algorithms require their arguments to be models of Assignable.


Second, the STL includes some low-level mechanisms for allocating and deallocating memory. Allocators are very specialized, and you can safely ignore them for almost all purposes.


Finally, the STL includes a large collection of function objects, also known as functors. Just as iterators are a generalization of pointers, function objects are a generalization of functions: a function object is anything that you can call using the ordinary function call syntax. There are several different concepts relating to function objects, including Unary Function (a function object that takes a single argument, i.e. one that is called as f(x)) and Binary Function (a function object that takes two arguments, i.e. one that is called as f(x, y)). Function objects are an important part of generic programming because they allow abstraction not only over the types of objects, but also over the operations that are being performed.

