            The Frame Buffer Device
            -----------------------

Maintained by Geert Uytterhoeven <geert@linux-m68k.org>
Last revised: May 10, 2001


0. Introduction
---------------

The frame buffer device provides an abstraction for the graphics hardware. It
represents the frame buffer of some video hardware and allows application
software to access the graphics hardware through a well-defined interface, so
the software doesn't need to know anything about the low-level (hardware
register) stuff.

The device is accessed through special device nodes, usually located in the
/dev directory, i.e. /dev/fb*.


1. User's View of /dev/fb*
--------------------------

From the user's point of view, the frame buffer device looks just like any
other device in /dev. It's a character device using major 29; the minor
specifies the frame buffer number.

By convention, the following device nodes are used (numbers indicate the
device minor numbers); a sketch for creating them by hand follows the list:

      0 = /dev/fb0    First frame buffer
      1 = /dev/fb1    Second frame buffer
      ...
     31 = /dev/fb31    32nd frame buffer
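
If these nodes don't already exist on your system, they can be created by
hand, for example (sketch only, using the major and minor numbers listed
above):

    mknod /dev/fb0 c 29 0
    mknod /dev/fb1 c 29 1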

For backwards compatibility, you may want to create the following symbolic
links:

    /dev/fb0current -> fb0
    /dev/fb1current -> fb1

and so on...

The frame buffer devices are also `normal' memory devices; this means you can
read and write their contents. You can, for example, make a screen snapshot by

  cp /dev/fb0 myfile
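
Because the device is writable as well, such a snapshot can be written back
later (assuming the video mode hasn't changed in the meantime), e.g.

  cp myfile /dev/fb0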

There can also be more than one frame buffer at a time, e.g. if you have a
graphics card in addition to the built-in hardware. The corresponding frame
buffer devices (/dev/fb0 and /dev/fb1 etc.) work independently.

Application software that uses the frame buffer device (e.g. the X server) will
use /dev/fb0 by default (older software uses /dev/fb0current). You can specify
an alternative frame buffer device by setting the environment variable
$FRAMEBUFFER to the path name of a frame buffer device, e.g. (for sh/bash
users):

    export FRAMEBUFFER=/dev/fb1

or (for csh users):

    setenv FRAMEBUFFER /dev/fb1

After this the X server will use the second frame buffer.


2. Programmer's View of /dev/fb*
--------------------------------

As you already know, a frame buffer device is a memory device like /dev/mem and
it has the same features. You can read it, write it, seek to some location in
it and mmap() it (the main usage). The difference is just that the memory that
appears in the special file is not the whole memory, but the frame buffer of
some video hardware.

/dev/fb* also allows several ioctls on it, by which lots of information about
the hardware can be queried and set. The color map handling works via ioctls,
too. Look into <linux/fb.h> for more information on what ioctls exist and on
which data structures they work. Here's just a brief overview:

  - You can request unchangeable information about the hardware, like name,
    organization of the screen memory (planes, packed pixels, ...) and address
    and length of the screen memory.

  - You can request and change variable information about the hardware, like
    visible and virtual geometry, depth, color map format, timing, and so on.
    If you try to change that information, the driver may round up some
    values to meet the hardware's capabilities (or return EINVAL if that isn't
    possible).

  - You can get and set parts of the color map. Communication is done with 16
    bits per color part (red, green, blue, transparency) to support all
    existing hardware. The driver does all the computations needed to apply
    it to the hardware (round it down to fewer bits, maybe throw away
    transparency). A small C sketch of this query/mmap interface follows this
    list.
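
As a concrete illustration, here is a small, hedged C sketch (not taken from
any particular application) that queries the fixed and variable screen
information and maps the frame buffer into user space; the ioctl names and
structures come from <linux/fb.h>, and error handling is kept minimal:

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <unistd.h>
  #include <linux/fb.h>

  int main(void)
  {
      struct fb_fix_screeninfo fix;   /* unchangeable hardware information */
      struct fb_var_screeninfo var;   /* changeable video mode information */
      int fd = open("/dev/fb0", O_RDWR);

      if (fd < 0)
          return 1;
      if (ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0 ||
          ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0)
          return 1;
      printf("%s: %ux%u (virtual %ux%u), %u bpp, %u bytes screen memory\n",
             fix.id, var.xres, var.yres, var.xres_virtual, var.yres_virtual,
             var.bits_per_pixel, fix.smem_len);

      /* Map the screen memory and blank the visible part of the screen. */
      unsigned char *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);
      if (fb != MAP_FAILED)
          memset(fb, 0, (size_t)var.yres * fix.line_length);
      close(fd);
      return 0;
  }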

All this hardware abstraction makes the implementation of application programs
easier and more portable. E.g. the X server works completely on /dev/fb* and
thus doesn't need to know, for example, how the color registers of the concrete
hardware are organized. XF68_FBDev is a general X server for bitmapped,
unaccelerated video hardware. The only thing that has to be built into
application programs is the screen organization (bitplanes or chunky pixels
etc.), because it works on the frame buffer image data directly.

For the future it is planned that frame buffer drivers for graphics cards and
the like can be implemented as kernel modules that are loaded at runtime. Such
a driver just has to call register_framebuffer() and supply some functions.
Writing and distributing such drivers independently from the kernel will save
much trouble...


3. Frame Buffer Resolution Maintenance
--------------------------------------

Frame buffer resolutions are maintained using the utility `fbset'. It can
change the video mode properties of a frame buffer device. Its main usage is
to change the current video mode, e.g. during boot up in one of your /etc/rc.*
or /etc/init.d/* files.

Fbset uses a video mode database stored in a configuration file, so you can
easily add your own modes and refer to them with a simple identifier.
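
For example, a hypothetical entry in that database (see fb.modes(5) for the
exact syntax; the numbers below are common VESA-style 640x480 values, meant
as an illustration rather than as tuned settings) could look like:

    mode "640x480-60"
        geometry 640 480 640 480 8
        timings 39722 48 16 33 10 96 2
    endmode

Here `geometry' lists xres, yres, virtual xres, virtual yres and depth, and
`timings' lists pixclock (in ps), left margin, right margin, upper margin,
lower margin, hsync length and vsync length; `fbset 640x480-60' would then
switch to that mode.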


4. The X Server
---------------

The X server (XF68_FBDev) is the most notable application program for the frame
buffer device. Starting with XFree86 release 3.2, the X server is part of
XFree86 and has 2 modes:

  - If the `Display' subsection for the `fbdev' driver in the /etc/XF86Config
    file contains a

    Modes "default"

    line, the X server will use the scheme discussed above, i.e. it will start
    up in the resolution determined by /dev/fb0 (or $FRAMEBUFFER, if set). You
    still have to specify the color depth (using the Depth keyword) and virtual
    resolution (using the Virtual keyword) though. This is the default for the
    configuration file supplied with XFree86. It's the simplest configuration,
    but it has some limitations (a sample `Display' subsection is sketched
    after this list).

  - Therefore it's also possible to specify resolutions in the /etc/XF86Config
    file. This allows for on-the-fly resolution switching while retaining the
    same virtual desktop size. The frame buffer device that's used is still
    /dev/fb0current (or $FRAMEBUFFER), but the available resolutions are
    defined by /etc/XF86Config now. The disadvantage is that you have to
    specify the timings in a different format (but `fbset -x' may help).
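
As an illustration, the relevant part of an /etc/XF86Config for the first
(Modes "default") case might look roughly like this; the Device and Monitor
names and the Virtual size are placeholders, so adapt them to your setup:

    Section "Screen"
        Driver      "fbdev"
        Device      "Frame Buffer Card"
        Monitor     "My Monitor"
        SubSection "Display"
            Depth       8
            Modes       "default"
            Virtual     1024 768
        EndSubSection
    EndSection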

To tune a video mode, you can use fbset or xvidtune. Note that xvidtune doesn't
work 100% with XF68_FBDev: the reported clock values are always incorrect.


5. Video Mode Timings
---------------------

A monitor draws an image on the screen by using an electron beam (3 electron
beams for color models, 1 electron beam for monochrome monitors). The front of
the screen is covered by a pattern of colored phosphors (pixels). If a phosphor
is hit by an electron, it emits a photon and thus becomes visible.

The electron beam draws horizontal lines (scanlines) from left to right, and
from the top to the bottom of the screen. By modifying the intensity of the
electron beam, pixels with various colors and intensities can be shown.

After each scanline the electron beam has to move back to the left side of the
screen and to the next line: this is called the horizontal retrace. After the
whole screen (frame) was painted, the beam moves back to the upper left corner:
this is called the vertical retrace. During both the horizontal and vertical
retrace, the electron beam is turned off (blanked).

The speed at which the electron beam paints the pixels is determined by the
dotclock in the graphics board. For a dotclock of e.g. 28.37516 MHz (millions
of cycles per second), each pixel is 35242 ps (picoseconds) long:

    1/(28.37516E6 Hz) = 35.242E-9 s

If the screen resolution is 640x480, it will take

    640*35.242E-9 s = 22.555E-6 s

to paint the 640 (xres) pixels on one scanline. But the horizontal retrace
also takes time (e.g. 272 `pixels'), so a full scanline takes

    (640+272)*35.242E-9 s = 32.141E-6 s

We'll say that the horizontal scanrate is about 31 kHz:

    1/(32.141E-6 s) = 31.113E3 Hz

A full screen counts 480 (yres) lines, but we have to consider the vertical
retrace too (e.g. 49 `lines'). So a full screen will take

    (480+49)*32.141E-6 s = 17.002E-3 s

The vertical scanrate is about 59 Hz:

    1/(17.002E-3 s) = 58.815 Hz

This means the screen data is refreshed about 59 times per second. To have a
stable picture without visible flicker, VESA recommends a vertical scanrate of
at least 72 Hz. But the perceived flicker is very human dependent: some people
can use 50 Hz without any trouble, while I'll notice if it's less than 80 Hz.

Since the monitor doesn't know when a new scanline starts, the graphics board
will supply a synchronization pulse (horizontal sync or hsync) for each
scanline.  Similarly it supplies a synchronization pulse (vertical sync or
vsync) for each new frame. The position of the image on the screen is
influenced by the moments at which the synchronization pulses occur.

The following picture summarizes all timings. The horizontal retrace time is
the sum of the left margin, the right margin and the hsync length, while the
vertical retrace time is the sum of the upper margin, the lower margin and the
vsync length.

  +----------+---------------------------------------------+----------+-------+
  |          |                ↑                            |          |       |
  |          |                |upper_margin                |          |       |
  |          |                ↓                            |          |       |
  +----------###############################################----------+-------+
  |          #                ↑                            #          |       |
  |          #                |                            #          |       |
  |          #                |                            #          |       |
  |          #                |                            #          |       |
  |   left   #                |                            #  right   | hsync |
  |  margin  #                |       xres                 #  margin  |  len  |
  |<-------->#<---------------+--------------------------->#<-------->|<----->|
  |          #                |                            #          |       |
  |          #                |                            #          |       |
  |          #                |                            #          |       |
  |          #                |yres                        #          |       |
  |          #                |                            #          |       |
  |          #                |                            #          |       |
  |          #                |                            #          |       |
  |          #                |                            #          |       |
  |          #                |                            #          |       |
  |          #                |                            #          |       |
  |          #                |                            #          |       |
  |          #                |                            #          |       |
  |          #                ↓                            #          |       |
  +----------###############################################----------+-------+
  |          |                ↑                            |          |       |
  |          |                |lower_margin                |          |       |
  |          |                ↓                            |          |       |
  +----------+---------------------------------------------+----------+-------+
  |          |                ↑                            |          |       |
  |          |                |vsync_len                   |          |       |
  |          |                ↓                            |          |       |
  +----------+---------------------------------------------+----------+-------+

The frame buffer device expects all horizontal timings in number of dotclocks
(in picoseconds, 1E-12 s), and vertical timings in number of scanlines.
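
To make that bookkeeping concrete, the fragment below (plain C, using the
field names of struct fb_var_screeninfo from <linux/fb.h>) recomputes the
horizontal and vertical scan rates from a set of frame buffer timings,
following exactly the arithmetic of the example above:

  #include <linux/fb.h>

  /* Scan rates in Hz. pixclock is in picoseconds, the horizontal values in
   * pixels and the vertical values in scanlines, as described above. */
  static void scan_rates(const struct fb_var_screeninfo *v,
                         double *hfreq, double *vfreq)
  {
      double pixel  = v->pixclock * 1e-12;   /* seconds per pixel */
      double htotal = v->left_margin + v->xres + v->right_margin + v->hsync_len;
      double vtotal = v->upper_margin + v->yres + v->lower_margin + v->vsync_len;

      *hfreq = 1.0 / (htotal * pixel);   /* ~31.1 kHz in the example above */
      *vfreq = *hfreq / vtotal;          /* ~58.8 Hz in the example above */
  }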


6. Converting XFree86 timing values into frame buffer device timings
--------------------------------------------------------------------

An XFree86 mode line consists of the following fields:
 "800x600"     50      800  856  976 1040    600  637  643  666
 < name >     DCF       HR  SH1  SH2  HFL     VR  SV1  SV2  VFL

The frame buffer device uses the following fields:

  - pixclock: pixel clock in ps (pico seconds)
  - left_margin: time from sync to picture (in pixel clocks; increasing it by
    one moves the picture one pixel to the right)
  - right_margin: time from picture to sync (increasing it by one moves the
    picture one pixel to the left)
  - upper_margin: time from sync to picture (in scanlines; increasing it by
    one moves the picture one line down)
  - lower_margin: time from picture to sync (increasing it by one moves the
    picture one line up)
  - hsync_len: length of horizontal sync
  - vsync_len: length of vertical sync

1) Pixelclock:
   xfree: in MHz
   fb: in picoseconds (ps)

   pixclock = 1000000 / DCF

2) horizontal timings:
   left_margin = HFL - SH2
   right_margin = SH1 - HR
   hsync_len = SH2 - SH1

3) vertical timings:
   upper_margin = VFL - SV2
   lower_margin = SV1 - VR
   vsync_len = SV2 - SV1
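
Putting the three steps together, a small hedged C sketch of the conversion
(using the example "800x600" mode line above; the variable names simply
mirror the fields of the mode line):

  #include <stdio.h>

  int main(void)
  {
      /* "800x600"  50  800 856 976 1040  600 637 643 666 */
      double DCF = 50.0;                          /* dot clock in MHz */
      int HR = 800, SH1 = 856, SH2 = 976, HFL = 1040;
      int VR = 600, SV1 = 637, SV2 = 643, VFL = 666;

      int pixclock     = (int)(1000000.0 / DCF);  /* 20000 ps */
      int left_margin  = HFL - SH2;               /* 64 */
      int right_margin = SH1 - HR;                /* 56 */
      int hsync_len    = SH2 - SH1;               /* 120 */
      int upper_margin = VFL - SV2;               /* 23 */
      int lower_margin = SV1 - VR;                /* 37 */
      int vsync_len    = SV2 - SV1;               /* 6 */

      printf("pixclock %d ps, margins %d/%d/%d/%d, sync %d/%d\n",
             pixclock, left_margin, right_margin, upper_margin,
             lower_margin, hsync_len, vsync_len);
      return 0;
  }

The resulting values can be put into an fb.modes entry or into the
corresponding fields of struct fb_var_screeninfo.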

Good examples for VESA timings can be found in the XFree86 source tree,
under "xc/programs/Xserver/hw/xfree86/doc/modeDB.txt".


7. References
-------------

For more specific information about the frame buffer device and its
applications, please refer to the Linux-fbdev website:

    http://linux-fbdev.sourceforge.net/

and to the following documentation:

  - The manual pages for fbset: fbset(8), fb.modes(5)
  - The manual pages for XFree86: XF68_FBDev(1), XF86Config(4/5)
  - The mighty kernel sources:
      o linux/drivers/video/
      o linux/include/linux/fb.h
      o linux/include/video/



8. Mailing list
---------------

There are several frame buffer device related mailing lists at SourceForge:
  - linux-fbdev-announce@lists.sourceforge.net, for announcements,
  - linux-fbdev-user@lists.sourceforge.net, for generic user support,
  - linux-fbdev-devel@lists.sourceforge.net, for project developers.

Point your web browser to http://sourceforge.net/projects/linux-fbdev/ for
subscription information and archive browsing.


9. Downloading
--------------

All necessary files can be found at

    ftp://ftp.uni-erlangen.de/pub/Linux/LOCAL/680x0/

and on its mirrors.

The latest version of fbset can be found at

    http://home.tvd.be/cr26864/Linux/fbdev/

 
10. Credits
-----------

This readme was written by Geert Uytterhoeven, partly based on the original
`X-framebuffer.README' by Roman Hodek and Martin Schaller. Section 6 was
provided by Frank Neumann.

The frame buffer device abstraction was designed by Martin Schaller.