White Paper: Interrupts and USB

Reposted from: http://www.embeddedsys.com/subpages/resources/images/documents/InterruptsAndUSB.pdf

In most embedded computer systems, there is a need for interrupts to handle events that require prompt handling by the operating system or application program. Although USB supports interrupt transfers, these are significantly different from the interrupts implemented on other bus architectures such as PCI or ISA.

USB interrupt transfers provide a guaranteed maximum latency communication pathway between the host and the USB device. An interrupt IN transfer is an interrupt transfer that originates at the device and is targeted at the host (direction is always referenced as viewed by the host). Interrupt IN transfers can be used by the device to alert the host of an important system event.

An interrupt transfer is not equivalent to an interrupt at one of the IRQ inputs of the host processor. As is the case with all transfers over USB, the host must initiate the interrupt transfer. The device can make the interrupt transfer data available when a system event occurs, but the transfer does not start until the host requests the data. Interrupt transfer latency is guaranteed by obligating the host to poll for interrupt transfer data at a requested periodic interval. This interval is determined during enumeration, and the host polls the device at this interval continuously after enumeration is complete.
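To make the enumeration side concrete, below is a minimal sketch of how an interrupt IN endpoint is described to the host, following the standard USB 2.0 endpoint descriptor layout. The endpoint address, packet size, and interval are arbitrary example values, not values taken from this paper.

```c
#include <stdint.h>

/* Standard USB 2.0 endpoint descriptor (7 bytes) for an interrupt IN
 * endpoint. The host reads this during enumeration and derives its
 * polling obligation from the bInterval field. */
static const uint8_t interrupt_in_ep_desc[7] = {
    7,          /* bLength: descriptor size in bytes               */
    0x05,       /* bDescriptorType: ENDPOINT                       */
    0x81,       /* bEndpointAddress: EP1, IN (bit 7 = IN)          */
    0x03,       /* bmAttributes: transfer type = interrupt         */
    0x40, 0x00, /* wMaxPacketSize: 64 bytes, little-endian         */
    10          /* bInterval: e.g. 10 ms at full speed (see below) */
};
```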

The allowable range for interrupt transfer latency, or host polling interval, varies with USB bus speed. The following table shows the possible settings for each bus speed.

| Bus Speed | Maximum Latency  | bInterval units |
|-----------|------------------|-----------------|
| High      | 125 usec – 4 sec | 125 usec        |
| Full      | 1 – 255 msec     | 1 msec          |
| Low       | 10 – 255 msec    | 1 msec          |

From the table it can be seen that the smallest possible interrupt latency that can be achieved between a device and the host is 125 usec. The device requests an interrupt latency by setting the bInterval field of the endpoint descriptor for the corresponding interrupt endpoint. For full- and low-speed endpoints the requested latency is (bInterval) x (1 msec). For high-speed endpoints bInterval is an exponent rather than a multiplier: the interval is 2^(bInterval - 1) x 125 usec with bInterval in the range 1-16, which yields the 125 usec to 4 sec span shown above.
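The mapping from bInterval to an actual interval can be captured in a small helper. This is a sketch of the rules above; the bus_speed_t type and function name are our own illustration, not part of any particular USB stack.

```c
#include <stdint.h>

typedef enum { BUS_SPEED_LOW, BUS_SPEED_FULL, BUS_SPEED_HIGH } bus_speed_t;

/* Return the requested polling interval in microseconds,
 * or 0 if bInterval is out of range for the given bus speed. */
static uint32_t polling_interval_us(bus_speed_t speed, uint8_t bInterval)
{
    switch (speed) {
    case BUS_SPEED_LOW:   /* 10 - 255 msec, units of 1 msec */
        return (bInterval >= 10) ? bInterval * 1000u : 0;
    case BUS_SPEED_FULL:  /* 1 - 255 msec, units of 1 msec */
        return (bInterval >= 1) ? bInterval * 1000u : 0;
    case BUS_SPEED_HIGH:  /* 2^(bInterval - 1) x 125 usec, bInterval 1-16 */
        return (bInterval >= 1 && bInterval <= 16)
                   ? (125u << (bInterval - 1)) : 0;
    }
    return 0;
}
```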

The sum of all low- and full-speed interrupt and isochronous transfers is limited to consuming 90% of the USB bus bandwidth. For high speed, the limit is 80%. If a device is enumerated and its bInterval request puts the bus utilization over these limits, the host will refuse to configure the device.

Depending on many factors, the host processor may not be able to transfer the interrupt data at the requested interval. OS design, driver design, application software, CPU speed, and bus bandwidth may all limit the host's ability to meet its obligation to poll for interrupt transfer data within the required interval.

A few large interrupt transfers are more efficient than a larger number of smaller interrupt transfers. Since most USB devices today are built around a microcontroller, the microcontroller can queue up the data and make it available to the host in larger transfers, thereby decreasing the number of transfers, increasing the size of each, and improving efficiency. A sketch of this batching approach follows.
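Below is a minimal sketch of that batching idea, assuming fixed-size event records: the firmware queues events in a ring buffer and packs as many as fit into a single interrupt IN packet when the host polls. The names (event_push, event_fill_packet, EP_MAXPKT) are hypothetical placeholders for whatever the device's USB stack actually provides.

```c
#include <stdint.h>
#include <string.h>

#define EVT_SIZE   4u    /* bytes per event record            */
#define QUEUE_LEN  64u   /* events the firmware can buffer    */
#define EP_MAXPKT  64u   /* wMaxPacketSize of the IN endpoint */

static uint8_t  queue[QUEUE_LEN][EVT_SIZE];
static uint32_t head, tail;   /* head = next write, tail = next read */

/* Called from the event source: enqueue one record (drop if full). */
void event_push(const uint8_t evt[EVT_SIZE])
{
    uint32_t next = (head + 1) % QUEUE_LEN;
    if (next == tail)
        return;                       /* queue full: drop the event */
    memcpy(queue[head], evt, EVT_SIZE);
    head = next;
}

/* Called when the host polls the interrupt IN endpoint: pack as many
 * queued events as fit into one packet, so each poll moves more data. */
uint32_t event_fill_packet(uint8_t *pkt)
{
    uint32_t len = 0;
    while (tail != head && len + EVT_SIZE <= EP_MAXPKT) {
        memcpy(pkt + len, queue[tail], EVT_SIZE);
        tail = (tail + 1) % QUEUE_LEN;
        len += EVT_SIZE;
    }
    return len;   /* pass to the stack's "arm IN endpoint" call */
}
```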
