Types of eye movements

The aim of this page is to give a brief description of the main types of eye movements and their functions.

The spatial and temporal sampling ability of the human eye limits the manner in which we extract visual information from events in the world. Because visual acuity decreases rapidly when we move away from the center of our visual field, we possess a repertoire of eye movements that allow us to point our eyes at target locations of interest.

Fixations are the feature of looking that eye tracking researchers most commonly analyze to make inferences about the cognitive processes or states they are interested in probing. Fixations are those times when our eyes essentially stop scanning the scene, holding central foveal vision in place so that the visual system can take in detailed information about what is being looked at.

But how do we get fixations? Let's start with the raw material from which fixations are built: gaze points. Gaze points are the instantaneous locations where the visual axis lands on the stimulus. As such, each has an (x, y) coordinate and a timestamp corresponding to its measurement. Think of gaze points as the output of the eye tracking hardware. If the device operates at 60 Hz, a gaze point is reported every 16.7 milliseconds; at 300 Hz, gaze points are spaced roughly 3.3 milliseconds apart. The number of gaze points is therefore partly an artifact of the eye tracker and bears little direct relation to the things researchers are interested in.

Fixations have two characteristics that distinguish them from gaze points. First, because they are made up of multiple gaze points, fixations have a duration in addition to a spatial (x, y) location and start and end timestamps. Second, fixations are not real in the sense of being directly measurable: they are constructions, the output of a mathematical algorithm that translates the sequence of raw gaze points into an associated sequence of fixations. Paradoxically, fixations are real in the sense that they are meaningful episodes of looking generated by our visual system. These episodes have specific dynamic characteristics that the gaze point-to-fixation conversion algorithm, or fixation filter, is designed to model, so passing the tracker's raw gaze stream through the fixation filter is an attempt to reconstruct these meaningful eye movements as faithfully as possible.

As noted earlier, fixations have characteristics that can reveal useful information about attention, visibility, mental processing, and understanding. For example, an increase in the time taken to make a first fixation on a target suggests a decrease in the salience or attention-grabbing power of that feature, while an increase in average fixation duration on a target or area could signal greater effort required to make sense of it, or could suggest that what is looked at is more engaging. The underlying goal of eye tracking researchers is to identify which ingredients (eye tracking metrics) to use in building a recipe for studying the attentional or cognitive states and processes they are interested in.
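
To make the gaze point-to-fixation conversion concrete, here is a minimal sketch of one common fixation filter, the dispersion-threshold (I-DT) algorithm. The dispersion and duration thresholds below are illustrative assumptions rather than values from this text, and commercial trackers typically ship their own, more elaborate filters.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Fixation:
    x: float        # mean x of the gaze points in the fixation
    y: float        # mean y of the gaze points in the fixation
    start: float    # timestamp of the first gaze point (seconds)
    end: float      # timestamp of the last gaze point (seconds)

    @property
    def duration(self) -> float:
        return self.end - self.start


def dispersion(window: List[Tuple[float, float, float]]) -> float:
    """Spread of a window of (t, x, y) gaze points: (max x - min x) + (max y - min y)."""
    xs = [p[1] for p in window]
    ys = [p[2] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))


def idt_fixation_filter(samples: List[Tuple[float, float, float]],
                        max_dispersion: float = 25.0,  # spatial units, e.g. pixels (assumed)
                        min_duration: float = 0.100    # seconds, i.e. 100 ms (assumed)
                        ) -> List[Fixation]:
    """Dispersion-threshold (I-DT) filter: grow a window of consecutive gaze
    points while they stay close together, and report it as a fixation once
    it also spans the minimum duration."""
    fixations: List[Fixation] = []
    i, n = 0, len(samples)
    while i < n:
        # Smallest window starting at i that covers the minimum duration.
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= n:
            break  # not enough remaining samples to form a fixation
        if dispersion(samples[i:j + 1]) <= max_dispersion:
            # Expand the window until the points spread out too much.
            while j + 1 < n and dispersion(samples[i:j + 2]) <= max_dispersion:
                j += 1
            window = samples[i:j + 1]
            fixations.append(Fixation(
                x=sum(p[1] for p in window) / len(window),
                y=sum(p[2] for p in window) / len(window),
                start=window[0][0],
                end=window[-1][0],
            ))
            i = j + 1
        else:
            # Points are too spread out to start a fixation here; move on.
            i += 1
    return fixations

With a 60 Hz tracker, the samples would be (timestamp, x, y) tuples spaced roughly 16.7 ms apart; the filter collapses runs of nearby samples into fixations that have a position, start and end timestamps, and therefore a duration, which is what most downstream fixation metrics are computed from.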

Keep in mind that this recipe could include multiple eye tracking measures and potentially additional triangulating data streams as well. For example, to bolster judgments about cognitive effort, one might simultaneously measure pupil dilation or electrodermal activity (i.e., skin conductance or sweat response). Questionnaire instruments, self-reports, or aided/unaided recall tasks could all serve as useful ingredients in a research recipe that yields strong, defensible conclusions.

Saccades are the type of eye movement used to move the fovea rapidly from one point of interest to another, while a fixation is the period during which the eye is held aligned with the target, allowing image details to be processed. Our perception is guided by alternating sequences of fixations and saccades. Because the eye moves so fast during a saccade, the image on the retina is smeared and of poor quality, so information intake happens mostly during fixations.

Saccade facts:

  • can be triggered voluntarily or involuntarily
  • both eyes move in the same direction
  • the time to “plan” a saccade (latency) is task dependent and varies between 100-1000 ms
  • the average duration of a saccade is 20-40 ms
  • the duration of a saccade and its amplitude are linearly correlated, i.e. larger jumps produce longer durations (an approximate expression follows this list)
  • the end point of a saccade cannot be changed once the eye is moving
  • saccades do not always have simple linear trajectories
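
As a rough illustration of the duration-amplitude relationship noted in the list above, a commonly quoted approximation from the saccade literature (an assumed rule of thumb, not a figure from this text) is duration ≈ D0 + d × amplitude, with an intercept D0 on the order of 20-30 ms and a slope d of roughly 2-3 ms per degree of amplitude, for saccades up to about 20 degrees.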

Fixation facts:

  • a fixation is composed of slower, minute movements (microsaccades, tremor, and drift), known as fixational eye movements, that help the eye stay aligned with the target and avoid perceptual fading
  • the duration varies between 50-600 ms (however longer fixations have been reported)
  • the minimum duration required for information intake depends on the task and stimulus

Dynamic stimuli and "real world" recordings

When we look at a static object with our head relatively still, we mainly perform saccades and fixational eye movements. However, in more dynamic situations where either we or the object itself is moving, other eye movements are triggered to keep the fovea aligned with the point of interest. Vergence movements are recruited to help us focus on objects at different distances, smooth pursuit keeps the fovea aligned with moving objects, and the vestibulo-ocular reflex keeps the fovea pointed at a point of interest while our head and body are moving.

Vergence facts:

  • the left and right eye move in opposite directions
  • can be classified into two types of movement: far-to-near shifts in focus trigger convergent movements, and near-to-far shifts trigger divergent movements
  • are generally slower than saccades

Smooth pursuit facts:

  • cannot be triggered voluntarily in the absence of a moving target
  • eye velocity is most often below 30 deg/sec (although some individuals can pursue smoothly at velocities as high as 100 deg/sec)
  • when the target moves faster than about 30 deg/sec, we start to employ catch-up saccades to keep up with it (a rough velocity-based classification is sketched after this list)
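
To make the velocity figures above concrete, here is a minimal sketch of labelling gaze samples by angular eye velocity. It treats gaze as a single visual angle for simplicity; the 30 deg/sec boundary echoes the list above, while the 100 deg/sec saccade threshold and the function name are illustrative assumptions rather than values from this text.

from typing import List


def label_gaze_intervals(angles_deg: List[float],
                         timestamps_s: List[float],
                         pursuit_max: float = 30.0,     # deg/sec, from the list above
                         saccade_min: float = 100.0     # deg/sec, assumed for illustration
                         ) -> List[str]:
    """Label each interval between consecutive gaze samples by its angular velocity."""
    labels = []
    for i in range(1, len(angles_deg)):
        dt = timestamps_s[i] - timestamps_s[i - 1]
        velocity = abs(angles_deg[i] - angles_deg[i - 1]) / dt
        if velocity <= pursuit_max:
            labels.append("fixation or smooth pursuit")
        elif velocity < saccade_min:
            labels.append("fast pursuit, possibly with catch-up saccades")
        else:
            labels.append("saccade")
    return labels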

Vestibulo-ocular reflex facts:

  • the eyes move in the opposite direction of the head
  • normally the speed of the eye equals the speed of the head
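
In other words, the ratio of eye velocity to head velocity, often called the VOR gain, is close to 1 when the reflex is stabilizing gaze effectively; a gain much below 1 means the image of the point of interest slips across the retina during head movement.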

Recommended reading

  • Rayner, K. (2009). Eye Movements and Attention in Reading, Scene Perception, and Visual Search. Quarterly Journal of Experimental Psychology, 62, 1457-1506. http://dx.doi.org/10.1080/17470210902816461
  • Land, M., & Tatler, B. (2009). Looking and Acting: Vision and Eye Movements in Natural Behaviour. Oxford University Press. http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780198570943.001.0001/acprof-9780198570943 (accessed March 6, 2018)