Making Sense of Multitouch

Original English post:

    http://android-developers.blogspot.sg/2010/06/making-sense-of-multitouch.html

The word “multitouch” gets thrown around quite a bit and it’s not always clear what people are referring to. For some it’s about hardware capability, for others it refers to specific gesture support in software. Whatever you decide to call it, today we’re going to look at how to make your apps and views behave nicely with multiple fingers on the screen.

This post is going to be heavy on code examples. It will cover creating a custom View that responds to touch events and allows the user to manipulate an object drawn within it. To get the most out of the examples you should be familiar with setting up an Activity and the basics of the Android UI system. Full project source will be linked at the end.

We’ll begin with a new View class that draws an object (our application icon) at a given position:

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.drawable.Drawable;
import android.util.AttributeSet;
import android.view.MotionEvent;
import android.view.View;

public class TouchExampleView extends View {
    private Drawable mIcon;
    private float mPosX;
    private float mPosY;
    
    private float mLastTouchX;
    private float mLastTouchY;
    
    public TouchExampleView(Context context) {
        this(context, null, 0);
    }
    
    public TouchExampleView(Context context, AttributeSet attrs) {
        this(context, attrs, 0);
    }
    
    public TouchExampleView(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
        mIcon = context.getResources().getDrawable(R.drawable.icon);
        mIcon.setBounds(0, 0, mIcon.getIntrinsicWidth(), mIcon.getIntrinsicHeight());
    }

    @Override
    public void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        
        canvas.save();
        canvas.translate(mPosX, mPosY);
        mIcon.draw(canvas);
        canvas.restore();
    }

    @Override
    public boolean onTouchEvent(MotionEvent ev) {
        // More to come here later...
        return true;
    }
}

MotionEvent

The Android framework’s primary point of access for touch data is the android.view.MotionEvent class. Passed to your views through the onTouchEvent and onInterceptTouchEvent methods, MotionEvent contains data about “pointers,” or active touch points on the device’s screen. Through a MotionEvent you can obtain X/Y coordinates as well as size and pressure for each pointer. MotionEvent.getAction() returns a value describing what kind of motion event occurred.

One of the more common uses of touch input is letting the user drag an object around the screen. We can accomplish this in our View class from above by implementing onTouchEvent as follows:

@Override
public boolean onTouchEvent(MotionEvent ev) {
    final int action = ev.getAction();
    switch (action) {
    case MotionEvent.ACTION_DOWN: {
        final float x = ev.getX();
        final float y = ev.getY();
        
        // Remember where we started
        mLastTouchX = x;
        mLastTouchY = y;
        break;
    }
        
    case MotionEvent.ACTION_MOVE: {
        final float x = ev.getX();
        final float y = ev.getY();
        
        // Calculate the distance moved
        final float dx = x - mLastTouchX;
        final float dy = y - mLastTouchY;
        
        // Move the object
        mPosX += dx;
        mPosY += dy;
        
        // Remember this touch position for the next move event
        mLastTouchX = x;
        mLastTouchY = y;
        
        // Invalidate to request a redraw
        invalidate();
        break;
    }
    }
    
    return true;
}

The code above has a bug on devices that support multiple pointers. While dragging the image around the screen, place a second finger on the touchscreen then lift the first finger. The image jumps! What’s happening? We’re calculating the distance to move the object based on the last known position of the default pointer. When the first finger is lifted, the second finger becomes the default pointer and we have a large delta between pointer positions which our code dutifully applies to the object’s location.

If all you want is info about a single pointer’s location, the methods MotionEvent.getX() and MotionEvent.getY() are all you need. MotionEvent was extended in Android 2.0 (Eclair) to report data about multiple pointers and new actions were added to describe multitouch events. MotionEvent.getPointerCount() returns the number of active pointers. getX and getY now accept an index to specify which pointer’s data to retrieve.

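For example, here is a minimal sketch (not from the original post; the log tag is arbitrary and android.util.Log is assumed to be imported) that walks every pointer in an event using the index-based accessors:

@Override
public boolean onTouchEvent(MotionEvent ev) {
    // Log the current position of every active pointer in this event.
    final int pointerCount = ev.getPointerCount();
    for (int i = 0; i < pointerCount; i++) {
        Log.d("TouchExample", "pointer " + i + " at ("
                + ev.getX(i) + ", " + ev.getY(i) + ")");
    }
    return true;
}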

Index vs. ID

At a higher level, touchscreen data from a snapshot in time may not be immediately useful, since touch gestures involve motion over time spanning many motion events. A pointer index does not necessarily match up across motion events; it only indicates the data's position within the MotionEvent. However, this is not work that your app has to do itself. Each pointer also has an ID mapping that stays persistent across touch events. You can retrieve this ID for each pointer using MotionEvent.getPointerId(index) and find an index for a pointer ID using MotionEvent.findPointerIndex(id).

Feeling Better?

Let’s fix the example above by taking pointer IDs into account.

private static final int INVALID_POINTER_ID = -1;

// The 'active pointer' is the one currently moving our object.
private int mActivePointerId = INVALID_POINTER_ID;

// Existing code ...

@Override
public boolean onTouchEvent(MotionEvent ev) {
    final int action = ev.getAction();
    switch (action & MotionEvent.ACTION_MASK) {
    case MotionEvent.ACTION_DOWN: {
        final float x = ev.getX();
        final float y = ev.getY();
        
        mLastTouchX = x;
        mLastTouchY = y;

        // Save the ID of this pointer
        mActivePointerId = ev.getPointerId(0);
        break;
    }
        
    case MotionEvent.ACTION_MOVE: {
        // Find the index of the active pointer and fetch its position
        final int pointerIndex = ev.findPointerIndex(mActivePointerId);
        final float x = ev.getX(pointerIndex);
        final float y = ev.getY(pointerIndex);
        
        final float dx = x - mLastTouchX;
        final float dy = y - mLastTouchY;
        
        mPosX += dx;
        mPosY += dy;
        
        mLastTouchX = x;
        mLastTouchY = y;
        
        invalidate();
        break;
    }
        
    case MotionEvent.ACTION_UP: {
        mActivePointerId = INVALID_POINTER_ID;
        break;
    }
        
    case MotionEvent.ACTION_CANCEL: {
        mActivePointerId = INVALID_POINTER_ID;
        break;
    }
    
    case MotionEvent.ACTION_POINTER_UP: {
        // Extract the index of the pointer that left the touch sensor
        final int pointerIndex = (action & MotionEvent.ACTION_POINTER_INDEX_MASK) 
                >> MotionEvent.ACTION_POINTER_INDEX_SHIFT;
        final int pointerId = ev.getPointerId(pointerIndex);
        if (pointerId == mActivePointerId) {
            // This was our active pointer going up. Choose a new
            // active pointer and adjust accordingly.
            final int newPointerIndex = pointerIndex == 0 ? 1 : 0;
            mLastTouchX = ev.getX(newPointerIndex);
            mLastTouchY = ev.getY(newPointerIndex);
            mActivePointerId = ev.getPointerId(newPointerIndex);
        }
        break;
    }
    }
    
    return true;
}

There are a few new elements at work here. We’re switching on action & MotionEvent.ACTION_MASK now rather than just action itself, and we’re using a new MotionEvent action constant, MotionEvent.ACTION_POINTER_UP. ACTION_POINTER_DOWN and ACTION_POINTER_UP are fired whenever a secondary pointer goes down or up. If there is already a pointer on the screen and a new one goes down, you will receive ACTION_POINTER_DOWN instead of ACTION_DOWN. If a pointer goes up but there is still at least one touching the screen, you will receive ACTION_POINTER_UP instead of ACTION_UP.

The ACTION_POINTER_DOWN and ACTION_POINTER_UP events encode extra information in the action value. ANDing it with MotionEvent.ACTION_MASK gives us the action constant while ANDing it with ACTION_POINTER_INDEX_MASK gives us the index of the pointer that went up or down. In the ACTION_POINTER_UP case our example extracts this index and ensures that our active pointer ID is not referring to a pointer that is no longer touching the screen. If it was, we select a different pointer to be active and save its current X and Y position. Since this saved position is used in the ACTION_MOVE case to calculate the distance to move the onscreen object, we will always calculate the distance to move using data from the correct pointer.

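As a side note beyond the original post: API level 8 also added MotionEvent.getActionMasked() and MotionEvent.getActionIndex() helpers, which perform the same masking and shifting for you:

// Equivalent to the mask-and-shift arithmetic above (API level 8+).
final int actionMasked = ev.getActionMasked(); // action & MotionEvent.ACTION_MASK
final int pointerIndex = ev.getActionIndex();  // index of the pointer going up or down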

This is all the data that you need to process any sort of gesture your app may require. However dealing with this low-level data can be cumbersome when working with more complex gestures. Enter GestureDetectors.

GestureDetectors

Since apps can have vastly different needs, Android does not spend time cooking touch data into higher level events unless you specifically request it. GestureDetectors are small filter objects that consume MotionEvents and dispatch higher level gesture events to listeners specified during their construction. The Android framework provides two GestureDetectors out of the box, but you should also feel free to use them as examples for implementing your own if needed. GestureDetectors are a pattern, not a prepackaged solution. They’re not just for complex gestures such as drawing a star while standing on your head; they can also make simple gestures like fling or double tap easier to work with.

android.view.GestureDetector generates gesture events for several common single-pointer gestures used by Android including scrolling, flinging, and long press. For Android 2.2 (Froyo) we’ve also added android.view.ScaleGestureDetector for processing the most commonly requested two-finger gesture: pinch zooming.

Gesture detectors follow the pattern of providing a method public boolean onTouchEvent(MotionEvent). This method, like its namesake in android.view.View, returns true if it handles the event and false if it does not. In the context of a gesture detector, a return value of true implies that there is an appropriate gesture currently in progress. GestureDetector and ScaleGestureDetector can be used together when you want a view to recognize multiple gestures.
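
For example, here is a hypothetical sketch (not part of the original example) of a view feeding the same event stream to both detectors, assuming the mScaleDetector field introduced in the full example below; the double-tap handler is an arbitrary stand-in for whatever single-pointer gestures your app needs:

private GestureDetector mGestureDetector;

// In the constructor, alongside the ScaleGestureDetector created below:
mGestureDetector = new GestureDetector(context, new GestureDetector.SimpleOnGestureListener() {
    @Override
    public boolean onDoubleTap(MotionEvent e) {
        // Hypothetical single-pointer gesture handling goes here.
        return true;
    }
});

@Override
public boolean onTouchEvent(MotionEvent ev) {
    // Feed every event to both detectors; each reports only its own gestures.
    boolean handled = mScaleDetector.onTouchEvent(ev);
    handled = mGestureDetector.onTouchEvent(ev) || handled;
    return handled;
}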

To report detected gesture events, gesture detectors use listener objects passed to their constructors. ScaleGestureDetector uses ScaleGestureDetector.OnScaleGestureListener. ScaleGestureDetector.SimpleOnScaleGestureListener is offered as a helper class that you can extend if you don’t care about all of the reported events.

Since we are already supporting dragging in our example, let’s add support for scaling. The updated example code is shown below:

private ScaleGestureDetector mScaleDetector;
private float mScaleFactor = 1.f;

// Existing code ...

public TouchExampleView(Context context, AttributeSet attrs, int defStyle) {
    super(context, attrs, defStyle);
    mIcon = context.getResources().getDrawable(R.drawable.icon);
    mIcon.setBounds(0, 0, mIcon.getIntrinsicWidth(), mIcon.getIntrinsicHeight());
    
    // Create our ScaleGestureDetector
    mScaleDetector = new ScaleGestureDetector(context, new ScaleListener());
}

@Override
public boolean onTouchEvent(MotionEvent ev) {
    // Let the ScaleGestureDetector inspect all events.
    mScaleDetector.onTouchEvent(ev);
    
    final int action = ev.getAction();
    switch (action & MotionEvent.ACTION_MASK) {
    case MotionEvent.ACTION_DOWN: {
        final float x = ev.getX();
        final float y = ev.getY();
        
        mLastTouchX = x;
        mLastTouchY = y;
        mActivePointerId = ev.getPointerId(0);
        break;
    }
        
    case MotionEvent.ACTION_MOVE: {
        final int pointerIndex = ev.findPointerIndex(mActivePointerId);
        final float x = ev.getX(pointerIndex);
        final float y = ev.getY(pointerIndex);

        // Only move if the ScaleGestureDetector isn't processing a gesture.
        if (!mScaleDetector.isInProgress()) {
            final float dx = x - mLastTouchX;
            final float dy = y - mLastTouchY;

            mPosX += dx;
            mPosY += dy;

            invalidate();
        }

        mLastTouchX = x;
        mLastTouchY = y;

        break;
    }
        
    case MotionEvent.ACTION_UP: {
        mActivePointerId = INVALID_POINTER_ID;
        break;
    }
        
    case MotionEvent.ACTION_CANCEL: {
        mActivePointerId = INVALID_POINTER_ID;
        break;
    }
    
    case MotionEvent.ACTION_POINTER_UP: {
        final int pointerIndex = (ev.getAction() & MotionEvent.ACTION_POINTER_INDEX_MASK) 
                >> MotionEvent.ACTION_POINTER_INDEX_SHIFT;
        final int pointerId = ev.getPointerId(pointerIndex);
        if (pointerId == mActivePointerId) {
            // This was our active pointer going up. Choose a new
            // active pointer and adjust accordingly.
            final int newPointerIndex = pointerIndex == 0 ? 1 : 0;
            mLastTouchX = ev.getX(newPointerIndex);
            mLastTouchY = ev.getY(newPointerIndex);
            mActivePointerId = ev.getPointerId(newPointerIndex);
        }
        break;
    }
    }
    
    return true;
}

@Override
public void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    
    canvas.save();
    canvas.translate(mPosX, mPosY);
    canvas.scale(mScaleFactor, mScaleFactor);
    mIcon.draw(canvas);
    canvas.restore();
}

private class ScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
    @Override
    public boolean onScale(ScaleGestureDetector detector) {
        mScaleFactor *= detector.getScaleFactor();
        
        // Don't let the object get too small or too large.
        mScaleFactor = Math.max(0.1f, Math.min(mScaleFactor, 5.0f));

        invalidate();
        return true;
    }
}

This example merely scratches the surface of what ScaleGestureDetector offers. The listener methods receive a reference to the detector itself as a parameter that can be queried for extended information about the gesture in progress. See the ScaleGestureDetector API documentation for more details. The full example requires the Android 2.2 SDK (API level 8) to build and a 2.2 (Froyo) powered device to run.

From Example to Application

In a real app you would want to tweak the details about how zooming behaves. When zooming, users will expect content to zoom about the focal point of the gesture as reported by ScaleGestureDetector.getFocusX() and getFocusY(). The specifics of this will vary depending on how your app represents and draws its content.

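One way to achieve this, sketched here under the translate-then-scale drawing used above (this is not code from the original post), is to pan the content inside onScale so that the point under the gesture focus stays put:

@Override
public boolean onScale(ScaleGestureDetector detector) {
    // Clamp first so the pan adjustment matches the scale actually applied.
    final float newScaleFactor =
            Math.max(0.1f, Math.min(mScaleFactor * detector.getScaleFactor(), 5.0f));
    final float appliedScale = newScaleFactor / mScaleFactor;

    // A content point p draws at mPos + mScaleFactor * p, so solve for the
    // new mPos that keeps the point under the focus at the same screen spot.
    final float focusX = detector.getFocusX();
    final float focusY = detector.getFocusY();
    mPosX = focusX - (focusX - mPosX) * appliedScale;
    mPosY = focusY - (focusY - mPosY) * appliedScale;

    mScaleFactor = newScaleFactor;
    invalidate();
    return true;
}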

Different touchscreen hardware may have different capabilities; some panels may only support a single pointer, others may support two pointers but with position data unsuitable for complex gestures, and others may support precise positioning data for two pointers and beyond. You can query what type of touchscreen a device has at runtime using PackageManager.hasSystemFeature().

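A minimal sketch of such a runtime check (the feature constants are real framework constants; how your app reacts to them is up to you):

PackageManager pm = context.getPackageManager();

// Two or more pointers with basic tracking.
boolean multitouch =
        pm.hasSystemFeature(PackageManager.FEATURE_TOUCHSCREEN_MULTITOUCH);

// Two or more pointers tracked fully independently, which the examples above rely on.
boolean distinctMultitouch =
        pm.hasSystemFeature(PackageManager.FEATURE_TOUCHSCREEN_MULTITOUCH_DISTINCT);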

As you design your user interface keep in mind that people use their mobile devices in many different ways and not all Android devices are created equal. Some apps might be used one-handed, making multiple-finger gestures awkward. Some users prefer using directional pads or trackballs to navigate. Well-designed gesture support can put complex functionality at your users’ fingertips, but also consider designing alternate means of accessing application functionality that can coexist with gestures.
