Summary
This article is a complete study note on processing gesture input events. It covers how input events (InputEvent) are responded to, the concept and use of the touch event class MotionEvent, the action classification of touch events, and multi-touch. Based on examples and the API, it analyzes the general process of recognizing touch gestures, introduces the related classes GestureDetector, Scroller, and VelocityTracker, and finally analyzes the recognition of gestures such as drag and scale.
Input source classification
Although Android is a complete system in itself, the fact that it mainly runs on mobile devices means that the vast majority of apps we open on it are client programs whose main job is to display an interface and handle interaction, much like applications on the web front end and the desktop.
As "client programs", most of the code we write handles user interaction, and different systems (that is, different devices) support different kinds of user interaction.
Android runs on many kinds of devices. From the perspective of interactive input, the InputDevice.SOURCE_CLASS_xxx constants identify the different input source classes supported by the SDK: touch screen, physical/virtual keys, joystick, mouse, and so on. The following discussion targets the most widespread form of interaction - the touch screen (SOURCE_TOUCHSCREEN).
From the perspective of interaction design, touch screen interaction is all about gestures, including interactions such as click, double-click, swipe, drag, and zoom. In essence they are all combinations of different patterns of basic touch events.
Android's touch screen system supports single-point and multi-point touch (a point is usually a finger), and each point goes through press, move, and lift.
Handling touch screen interaction means distinguishing different touch operations - recognizing gestures - and then branching into the corresponding business logic. To respond to different gestures, they first have to be recognized. Recognition means tracking the "basic events" that the system collects in real time to reflect the user's actions on the screen, and then determining the various high-level "actions" from this event data.
The framework class GestureDetector provides callbacks for several of the most common actions, such as onScroll, onLongPress, and onFling. Your app can also recognize interactions like drag and scale by implementing its own detection logic as needed.
Gesture recognition is the mainstream interaction/input method on touch screen devices such as smartphones and tablets, unlike the keyboard and mouse on PCs.
Input events
The input events generated by user interaction are ultimately represented by subclasses of InputEvent, currently KeyEvent (used to report key and button events) and MotionEvent (used to report movement events from a mouse, pen, finger, or trackball).
An InputEvent can be received in many places. In the framework, the propagation path of an event runs through the Activity, the Window, and the Views (a path through the ViewTree: the view stack).
In most cases, these input events are received and processed in the specific view of the user interaction.
There are two ways to handle events in a View. One is to add an event listener; the other is to override a handler method (event handler). The former is more convenient, while the latter is overridden as needed when customizing a View; a custom View can also define its own handler methods or expose its own listener interfaces.
Event listening
Event listener interfaces each contain only one method, for example:

```java
// Defined in android.view.View
public interface OnTouchListener {
    boolean onTouch(View v, MotionEvent event);
}
public interface OnLongClickListener {
    boolean onLongClick(View v);
}
public interface OnClickListener {
    void onClick(View v);
}
public interface OnKeyListener {
    boolean onKey(View v, int keyCode, KeyEvent event);
}
```
Register the listener in a place such as the Activity, or have it implement the corresponding interface directly (saving the allocation of a new type and object), and then call the matching set...Listener() method.
According to Android's delivery mechanism for UI events (input events), a listener's callback method executes before the corresponding handler methods. For callbacks that return a boolean, the return value indicates whether the event continues to propagate. Design the return value carefully as needed, otherwise the execution of other processing will be blocked.
For example, when an OnTouchListener is set on a View and its onTouch callback returns true, then inside the View's boolean dispatchTouchEvent(MotionEvent event), after the listener callback executes, the View's own handler method boolean onTouchEvent(MotionEvent event) is no longer called.
Event handler
An event handler is the default method called when event delivery passes through the current View. It usually implements the behavior logic of that specific view (note that a listener is optional and may not even be defined, while every View provides handlers for the events it cares about).
Event delivery deserves a whole article of its own. Here it is enough to know that an input event travels down the ViewTree through many "related" views, each of which either handles the event or passes it on. Before reaching the ViewTree, the event passes through the Activity and the Window. The ultimate origin is of course the hardware events collected by the system, which are dispatched from the event manager to an interface-related class involved in the interaction, where propagation begins.
The View class includes the following event handling methods:
- onKeyDown(int, KeyEvent): Called when a new key event occurs.
- onKeyUp(int, KeyEvent): Called when a key up event occurs.
- onTrackballEvent(MotionEvent): Called when a trackball motion event occurs.
- onTouchEvent(MotionEvent): Called when a touch screen motion event occurs.
- onFocusChanged(boolean, int, Rect): Called when the view gains or loses focus.
These handler methods process the event at the current node of the propagation pipeline; the processing only needs to consider the functional logic of the current View and tell the caller whether handling is finished or the event should continue to be passed on. A ViewGroup additionally takes on the task of passing events to its child Views. The following methods are closely related to event delivery:
- dispatchTouchEvent(MotionEvent): This allows your Activity to intercept all touch events before they are dispatched to the window.
- onInterceptTouchEvent(MotionEvent): This allows a ViewGroup to watch events as they are dispatched to child Views.
- requestDisallowInterceptTouchEvent(boolean): Call this upon a parent View to indicate that it should not intercept touch events with onInterceptTouchEvent(MotionEvent).
Understanding where events can be received and when to handle and consume them is an important aspect of UI programming, but the full delivery process of input events is a large and complex topic in its own right. This article focuses on recognizing touch screen gestures; delivery details appear only as far as a complete and organized understanding requires.
TouchMode
On touch screen devices, the interface is in the interactive state called TouchMode from the moment the user touches the screen until the finger leaves it (press -> lift). Broadly speaking, at any moment the Views are responding either to touch events or to KeyEvent-style input (keys, buttons, etc.); the two modes of interaction are completely different. Touch mode state is maintained across the whole system, including all Window and Activity objects (mainly to control the distribution of touch events). The View method public boolean isInTouchMode() checks whether the device is currently in touch mode.
Gestures
The whole process from fingers (one or more) pressing down to all of them leaving the screen is one touch operation. Each operation can be classified into a touch pattern and is ultimately defined as a gesture (gesture and pattern definitions are a design matter; users learn the common gestures after using any touch screen device). The main gestures Android supports are:
- Touch
- Long press
- Swipe or drag
- Long press drag
- Double touch
- Double touch drag
- Pinch open
- Pinch close
The app needs to respond to these gestures based on the API provided by the system.
Gesture recognition process
In order to realize the response processing of gestures, it is necessary to understand the representation of touch events. The specific process of identifying gestures includes:
- Get touch event data.
- Analyze whether a supported gesture matches.
MotionEvent
The input events triggered by touch actions are represented by MotionEvent, which implements the Parcelable interface - an IPC requirement.
Almost all current devices support multi-touch, and each finger in a touch is treated as a pointer. A MotionEvent records all currently touching pointers, including each one's X and Y coordinates, pressure, contact area, and so on.
Each finger press, move, and lift produces an event object. Each event corresponds to an "action", which is represented by the constant of MotionEvent.ACTION_xxx:
- ACTION_DOWN is triggered when the first finger presses down
- ACTION_POINTER_DOWN is triggered when any subsequent finger presses down
- ACTION_MOVE is triggered when any finger moves
- ACTION_POINTER_UP is triggered when a finger that is not the last one lifts
- ACTION_UP is triggered when the last finger leaves the screen
- ACTION_CANCEL is triggered when the touch event sequence is interrupted, typically when a parent of the View intercepts it, for example because the touch moved outside the area
Every finger's down, move, and up generates events. Because the movement process produces a large number of ACTION_MOVE events, for performance reasons they are delivered in batches: a single MotionEvent can contain the data of several actual ACTION_MOVE events. Only MOVE actions are batched this way, and only while the pointer count stays the same - any pointer being added or removed triggers a DOWN or UP event, breaking the run of continuous MOVE events.
Within one MotionEvent, the newest data is the current data, and the batched older samples form a time-ordered array. The getHistorical* family of methods gives access to the data of these "historical events".
Here is the standard pattern for obtaining the coordinates of every pointer for all events contained in the current MotionEvent:
```java
void printSamples(MotionEvent ev) {
    final int historySize = ev.getHistorySize();
    final int pointerCount = ev.getPointerCount();
    for (int h = 0; h < historySize; h++) {
        System.out.printf("At time %d:", ev.getHistoricalEventTime(h));
        for (int p = 0; p < pointerCount; p++) {
            System.out.printf("  pointer %d: (%f,%f)",
                    ev.getPointerId(p), ev.getHistoricalX(p, h), ev.getHistoricalY(p, h));
        }
    }
    System.out.printf("At time %d:", ev.getEventTime());
    for (int p = 0; p < pointerCount; p++) {
        System.out.printf("  pointer %d: (%f,%f)",
                ev.getPointerId(p), ev.getX(p), ev.getY(p));
    }
}
```
As mentioned earlier, events are classified by action, and each event object carries all the pointer data. The action is obtained like this:

```java
int action = event.getAction() & MotionEvent.ACTION_MASK;
```
getAction and getActionMasked
The int returned by getAction() may also encode pointerIndex information (bit packing, presumably for performance): for the ACTION_POINTER_DOWN and ACTION_POINTER_UP actions, the return value contains the index of the pointer that triggered the DOWN or UP, which can then be used as the pointerIndex parameter of getPointerId(int), getX(int), getY(int), getPressure(int), and getSize(int). The method getActionIndex() extracts that pointerIndex, while getActionMasked() matches the statement above: it returns the action constant with the pointerIndex bits stripped. With only one finger down, getAction() and getActionMasked() obviously return the same value, because no extra pointerIndex data is packed in. To read an event's action, prefer getActionMasked() - it is more accurate.
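The bit packing described above can be illustrated with a plain-Java sketch. This is not the Android class itself; the constant values mirror those documented for MotionEvent, and the class name is made up for illustration:

```java
// Sketch of how getActionMasked()/getActionIndex() decode the packed int
// returned by getAction(). Constants mirror android.view.MotionEvent's
// documented values; the class itself is a standalone illustration.
public class ActionDecoding {
    static final int ACTION_MASK = 0xff;                 // low 8 bits: action code
    static final int ACTION_POINTER_INDEX_MASK = 0xff00; // next 8 bits: pointer index
    static final int ACTION_POINTER_INDEX_SHIFT = 8;
    static final int ACTION_POINTER_DOWN = 5;

    static int actionMasked(int action) {
        return action & ACTION_MASK;
    }

    static int actionIndex(int action) {
        return (action & ACTION_POINTER_INDEX_MASK) >> ACTION_POINTER_INDEX_SHIFT;
    }

    public static void main(String[] args) {
        // A second finger (pointer index 1) going down is encoded as
        // (1 << 8) | ACTION_POINTER_DOWN = 0x0105.
        int packed = (1 << ACTION_POINTER_INDEX_SHIFT) | ACTION_POINTER_DOWN;
        System.out.println(actionMasked(packed)); // 5 (ACTION_POINTER_DOWN)
        System.out.println(actionIndex(packed));  // 1
    }
}
```

With one finger, the index bits are zero, which is why getAction() and getActionMasked() agree in the single-touch case.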
Reading per-pointer data is also a little special; for example, obtaining the X coordinate of each pointer:

```java
final int pointerCount = ev.getPointerCount();
// p is the pointerIndex
for (int p = 0; p < pointerCount; p++) {
    System.out.printf("  pointer %d: (%f,%f)", ev.getPointerId(p), ev.getX(p), ev.getY(p));
}
```
During a gesture the number of pointers may change. Each pointer is assigned an id during its DOWN event, which remains its valid identifier until its UP or CANCEL (when pointerCount changes).
Within a single MotionEvent, getPointerCount() returns the total number of pointers in the touch, and the values 0 to getPointerCount()-1 are the pointerIndex values of the current pointers. The method float getX(int pointerIndex) takes an index and returns the X coordinate of the corresponding pointer. The other methods that take a pointerIndex parameter likewise return other properties of that pointer.
If you need to follow the continuous movement of one particular finger, say the first one pressed, use int getPointerId(int pointerIndex) to get its id and record it; then, for each incoming MotionEvent, use int findPointerIndex(int pointerId) to get the pointerIndex corresponding to that id in the current event, through which you can access that pointer's attributes across the whole event sequence.
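Why track the id rather than the index? A toy model (plain Java, invented class, no Android types) shows how an index shifts when another pointer lifts while the id stays stable; the real methods are MotionEvent.getPointerId(int) and findPointerIndex(int):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of pointer id vs. pointer index. Ids identify a finger for the
// whole touch; indices are just positions in the current event's pointer list.
public class PointerTracking {
    // Returns the current index of the pointer with the given id,
    // or -1 when that pointer has lifted (mirrors findPointerIndex()).
    static int findPointerIndex(List<Integer> pointerIds, int id) {
        return pointerIds.indexOf(id);
    }

    public static void main(String[] args) {
        // Two fingers down: ids 0 and 1 sit at indices 0 and 1.
        List<Integer> ids = new ArrayList<>(List.of(0, 1));
        int trackedId = 1;
        System.out.println(findPointerIndex(ids, trackedId)); // 1

        // The first finger (id 0) lifts: id 1 shifts to index 0 but keeps its id.
        ids.remove(Integer.valueOf(0));
        System.out.println(findPointerIndex(ids, trackedId)); // 0
    }
}
```

Code that caches an index instead of an id would silently start reading a different finger after any ACTION_POINTER_UP.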
Finally, the following methods of MotionEvent are often used:
- long getEventTime(): the time when this event occurred.
- long getDownTime(): the time when the first press of the event sequence (ACTION_DOWN) occurred.
- int getAction(), int getActionMasked(), int getActionIndex(), int getPointerCount(), int getPointerId(int pointerIndex), float getX(), float getX(int pointerIndex), etc.
Receive event data
The series of MotionEvent objects generated by a gesture is dispatched in order and passes through a number of UI-related objects. In general they travel through the corresponding Activity and the Views related to the current touch that make up the interface - every parent along the ViewTree above the View where the event lands.
In the current interface's Activity you can override boolean onTouchEvent(MotionEvent event) to receive touch events; more often, because the View is where UI interaction is implemented, events are received in the View's boolean onTouchEvent(MotionEvent event) method.
A touch operation will send a series of events, so the onTouchEvent will be called "many times".
```java
@Override
public boolean onTouchEvent(MotionEvent event) {
    int action = event.getAction() & MotionEvent.ACTION_MASK;
    switch (action) {
        case MotionEvent.ACTION_DOWN:
            Log.d(TAG, "ACTION_DOWN");
            return true;
        case MotionEvent.ACTION_POINTER_DOWN:
            Log.d(TAG, "ACTION_POINTER_DOWN");
            return true;
        case MotionEvent.ACTION_MOVE:
            Log.d(TAG, "ACTION_MOVE");
            return true;
        case MotionEvent.ACTION_UP:
            Log.d(TAG, "ACTION_UP");
            return true;
        case MotionEvent.ACTION_POINTER_UP:
            Log.d(TAG, "ACTION_POINTER_UP");
            return true;
        case MotionEvent.ACTION_CANCEL:
            Log.d(TAG, "ACTION_CANCEL");
            return true;
        default:
            Log.d(TAG, "default: action = " + action);
            return super.onTouchEvent(event);
    }
}
```
You can also receive touch events by setting up a listener, which is for specific View objects:
```java
view.setOnTouchListener(new OnTouchListener() {
    public boolean onTouch(View v, MotionEvent event) {
        // ... Respond to touch events
        return true;
    }
});
```
Note that whichever gesture you want to recognize, true must be returned for the ACTION_DOWN action. Otherwise, by the calling convention, the current handler is considered to be ignoring this touch operation's event sequence, and the subsequent events will not be received.
Detection gestures
Various gestures can be determined from the received event sequence in the overridden onTouchEvent method. For example, an ACTION_DOWN followed by a series of ACTION_MOVEs and then an ACTION_UP is usually a scroll/drag gesture. Implementing recognition logic yourself requires well-designed code: how much offset counts as an effective slide, how small a gap between down and up counts as a tap, and so on. The framework provides recognition of the most common gestures; the key related classes are described below.
GestureDetector
Its function is to recognize operations such as onScroll, onFling, onDown, and onLongPress. Pass the received MotionEvent sequence to the GestureDetector, and it triggers the callback corresponding to the recognized gesture.
The use process is:
1. Prepare the GestureDetector object and provide a listener that responds to various gesture callback methods. OnGestureListener is a callback interface for different gestures, which is easy to understand.
```java
// public GestureDetector(Context context, OnGestureListener listener)
mDetector = new GestureDetector(this, mGestureListener);
```
2. Pass the received events to the GestureDetector in the onTouchEvent method.

```java
@Override
public boolean onTouchEvent(MotionEvent event) {
    boolean handled = mDetector.onTouchEvent(event);
    return handled || super.onTouchEvent(event);
}
```
If you are only interested in some of the GestureDetector callbacks, the listener can extend GestureDetector.SimpleOnGestureListener and override just those. True must be returned from onDown, otherwise the subsequent events are ignored.
Gesture movement
Gestures can be divided into moving and non-moving ones. For example, a tap involves no movement, while a scroll requires the finger to move a certain distance. There is a threshold for deciding whether the finger has moved: the touch slop, obtainable via ViewConfiguration#getScaledTouchSlop(), which is the minimum distance a touch must travel before it is considered a slide.
For non-moving gestures, such as the tap family, the recognition logic mainly detects time gaps. Moving gestures are a little more complicated; depending on actual needs, movement can be judged from:
- The start and end positions of the pointer.
- The movement direction computed from the x and y coordinates of the touch.
- The historical samples obtained through the getHistorical* methods.
- The velocity of the pointer while moving.
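As a minimal sketch of the first kind of judgment, here is a plain-Java distance check against a touch slop. The slop constant here is an arbitrary stand-in; real code reads it from ViewConfiguration.get(context).getScaledTouchSlop():

```java
// Minimal sketch of the "touch slop" test: movement only counts as a scroll
// once the pointer has travelled at least the slop distance from its DOWN
// position. TOUCH_SLOP is a hypothetical value; it is device-dependent
// in practice.
public class TouchSlopCheck {
    static final int TOUCH_SLOP = 8; // hypothetical, in pixels

    static boolean isScroll(float downX, float downY, float x, float y) {
        float dx = x - downX;
        float dy = y - downY;
        // Euclidean distance from the press position
        return Math.hypot(dx, dy) >= TOUCH_SLOP;
    }

    public static void main(String[] args) {
        System.out.println(isScroll(100f, 100f, 103f, 104f)); // false: moved 5 px
        System.out.println(isScroll(100f, 100f, 106f, 108f)); // true: moved 10 px
    }
}
```

Without such a threshold, natural finger jitter during a tap would be misread as the start of a scroll.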
VelocityTracker
Sometimes you are interested in the speed of the gesture's movement; it can be computed from the collected event data:
```java
public class MainActivity extends Activity {
    private static final String DEBUG_TAG = "Velocity";
    ...
    private VelocityTracker mVelocityTracker = null;

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        int index = event.getActionIndex();
        int action = event.getActionMasked();
        int pointerId = event.getPointerId(index);

        switch (action) {
            case MotionEvent.ACTION_DOWN:
                if (mVelocityTracker == null) {
                    // Retrieve a new VelocityTracker object to watch the velocity of a motion.
                    mVelocityTracker = VelocityTracker.obtain();
                } else {
                    // Reset the velocity tracker back to its initial state.
                    mVelocityTracker.clear();
                }
                // Add a user's movement to the tracker.
                mVelocityTracker.addMovement(event);
                break;
            case MotionEvent.ACTION_MOVE:
                mVelocityTracker.addMovement(event);
                // When you want to determine the velocity, call
                // computeCurrentVelocity(). Then call getXVelocity()
                // and getYVelocity() to retrieve the velocity for each pointer ID.
                mVelocityTracker.computeCurrentVelocity(1000);
                // Log velocity of pixels per second
                // Best practice to use VelocityTrackerCompat where possible.
                Log.d(DEBUG_TAG, "X velocity: "
                        + VelocityTrackerCompat.getXVelocity(mVelocityTracker, pointerId));
                Log.d(DEBUG_TAG, "Y velocity: "
                        + VelocityTrackerCompat.getYVelocity(mVelocityTracker, pointerId));
                break;
            case MotionEvent.ACTION_UP:
            case MotionEvent.ACTION_CANCEL:
                // Return a VelocityTracker object back to be re-used by others.
                mVelocityTracker.recycle();
                break;
        }
        return true;
    }
}
```
Scroller
Roughly distinguished, scrolling can be divided into dragging and the decelerating fling that continues after the finger swipes across the screen.
Often you need to respond to gesture movement, for example making the content translate as the finger moves. The simple implementation applies the corresponding x and y offsets immediately in ACTION_MOVE, where the "response timing" is obvious. In other cases a smooth sliding effect is needed, and the timing and increment of each step must be calculated: for example, the scrolling page-turn effect after tapping previous/next page buttons - similar to ViewPager's animation. Another case is the fling effect: after the finger swipes quickly across the screen, the content keeps sliding and gradually stops. In all these cases the display must keep being adjusted in subsequent frames to achieve a scroll animation - the timing and offset of each step must be computed. Scroller can be used to complete such "smooth movement" animation effects.
OverScroller is the recommended variant; it has good compatibility and supports edge effects. Like VelocityTracker, Scroller is a "computing tool". It supports two sliding effects, startScroll and fling, corresponding to the examples above. By design it is independent of how the scrolling is actually performed; it only provides the calculations and state of the scroll animation.
Scroller usage process:
Prepare the Scroller object.
```java
// In an appropriate initialization place such as the constructor or onCreate
mScroller = new OverScroller(context);
```
Start the scroll animation at the right moment. The fling effect is generally combined with GestureDetector: after recognizing the finger's fling gesture, start the scroll animation by calling mScroller.fling() inside OnGestureListener's onFling. The "smooth sliding effect" is started with mScroller.startScroll() wherever sliding is required.

```java
mScroller.fling(startX, startY, velocityX, velocityY, minX, maxX, minY, maxY, overX, overY);
mScroller.startScroll(startX, startY, dx, dy, duration);
```
At each animation frame, calculate the scroll increment and apply it to the specific View. In a custom View you can rely on View#postOnAnimation() or #postInvalidateOnAnimation() to simply schedule the next animation frame, or use Animation and similar mechanisms to obtain the frame timing. The View itself has a computeScroll() method in which subclasses can perform the scroll-animation logic, combined with postInvalidateOnAnimation():
```java
boolean animEnd = false;
if (mScroller.computeScrollOffset()) {
    int currX = mScroller.getCurrX();
    int currY = mScroller.getCurrY();
    // Update the View's x and y position, e.g. with the View's scroll methods
} else {
    animEnd = true; // the scroll animation has finished
}
if (!animEnd) {
    postInvalidateOnAnimation();
}
```
ScrollView and HorizontalScrollView provide scrolling themselves, and ViewPager also uses a Scroller to implement its smooth sliding. Generally, Scroller is used when building custom controls with sliding behavior. Several framework controls use EdgeEffect to draw edge effects.
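To make the division of labor concrete, here is a toy stand-in (plain Java, invented class name) for what startScroll()/computeScrollOffset() compute: an interpolated position between start and start+dx over a fixed duration. The linear interpolation is a simplification; real scrollers apply easing curves and fling physics:

```java
// Toy model of a scroller as a pure calculator: it holds the animation
// parameters and answers "where should the content be at time t?".
// The View layer is responsible for applying the value and scheduling frames.
public class SimpleScroll {
    final int startX, dx, duration;

    SimpleScroll(int startX, int dx, int durationMs) {
        this.startX = startX;
        this.dx = dx;
        this.duration = durationMs;
    }

    // Current x position, clamped to the final value once the duration elapses.
    int currX(int elapsedMs) {
        if (elapsedMs >= duration) return startX + dx;
        return startX + dx * elapsedMs / duration; // linear; real scrollers interpolate
    }

    public static void main(String[] args) {
        SimpleScroll s = new SimpleScroll(0, 100, 250);
        System.out.println(s.currX(125)); // 50: halfway through the animation
        System.out.println(s.currX(300)); // 100: past the duration, finished
    }
}
```

This separation is exactly why Scroller pairs with computeScroll()/postInvalidateOnAnimation(): the calculator never touches the View.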
Multi-Touch
As the MotionEvent introduction above showed, each finger in a touch is treated as a pointer. Most current mobile devices support up to 10 touch points.
Whether to handle multi-touch is determined by the View's function. For example, scrolling is usually done with one finger, while scaling requires at least two.
The getPointerId and findPointerIndex methods of MotionEvent provide the identification of each pointer of the current event data. According to pointerIndex, other methods with it as parameters can be called to obtain the values of different aspects of the corresponding pointer. The pointerId can be used as a unique identifier during a pointer touch screen.
```java
private int mActivePointerId;

public boolean onTouchEvent(MotionEvent event) {
    ....
    // Get the pointer ID
    mActivePointerId = event.getPointerId(0);

    // ... Many touch events later...

    // Use the pointer ID to find the index of the active pointer
    // and fetch its position
    int pointerIndex = event.findPointerIndex(mActivePointerId);
    // Get the pointer's current position
    float x = event.getX(pointerIndex);
    float y = event.getY(pointerIndex);
}
```
For single touch, the action can usually be determined via getAction in the onTouchEvent method. With multi-touch you need getActionMasked - the difference discussed earlier. The following snippet shows the general multi-touch API (using the MotionEventCompat helpers):

```java
int action = MotionEventCompat.getActionMasked(event);
// Get the index of the pointer associated with the action.
int index = MotionEventCompat.getActionIndex(event);
int xPos = -1;
int yPos = -1;

Log.d(DEBUG_TAG, "The action is " + actionToString(action));

if (event.getPointerCount() > 1) {
    Log.d(DEBUG_TAG, "Multitouch event");
    // The coordinates of the current screen contact, relative to
    // the responding View or Activity.
    xPos = (int) MotionEventCompat.getX(event, index);
    yPos = (int) MotionEventCompat.getY(event, index);
} else {
    // Single touch event
    Log.d(DEBUG_TAG, "Single touch event");
    xPos = (int) MotionEventCompat.getX(event, index);
    yPos = (int) MotionEventCompat.getY(event, index);
}
...

// Given an action int, returns a string description
public static String actionToString(int action) {
    switch (action) {
        case MotionEvent.ACTION_DOWN: return "Down";
        case MotionEvent.ACTION_MOVE: return "Move";
        case MotionEvent.ACTION_POINTER_DOWN: return "Pointer Down";
        case MotionEvent.ACTION_UP: return "Up";
        case MotionEvent.ACTION_POINTER_UP: return "Pointer Up";
        case MotionEvent.ACTION_OUTSIDE: return "Outside";
        case MotionEvent.ACTION_CANCEL: return "Cancel";
    }
    return "";
}
```
The MotionEventCompat class provides some multi-touch-related auxiliary methods, compatible versions.
ViewConfiguration
This class provides some UI-related constants, about timeout time, size, and distance, etc. It will provide unified standard reference values based on the system version and the device environment running, such as resolution, size, etc., to provide a consistent interactive experience for UI elements.
- Touch Slop: the minimum distance a pointer must move before the movement is treated as a scroll gesture.
- Fling Velocity: the threshold velocity at which finger movement is considered to trigger a fling.
ViewGroup Management TouchEvent
Event Interception
The "responsibilities" of a non-ViewGroup View when responding to touch events are relatively simple: recognize and execute the interaction logic the current View needs. That is, it only has to process the touch event sequence in #onTouchEvent.
ViewGroup inherits View, so it too can handle events in onTouchEvent() as needed. On the other hand, as the parent of other Views it performs their layout, and it has a method onInterceptTouchEvent() that controls whether MotionEvents are passed on to the target child View. Note that a ViewGroup can handle events itself, since it is a full-fledged View subclass; the behavior depends on the class's function. For example, ViewPager handles horizontal swipes itself but passes vertical swipes to its children. Also remember that a ViewGroup may or may not contain child Views, so some events belong to a child View while others land in the ViewGroup's own area.
Related Methods
The mechanism for event distribution is simply mentioned here. ViewGroup can manage the delivery of MotionEvent. The following methods are involved:
boolean onInterceptTouchEvent(MotionEvent ev)
This method intercepts MotionEvents on their way to the target child View (which may itself be a ViewGroup: not necessarily the event's final target view, just the next view on the delivery path after the current ViewGroup). You can perform extra operations here, or even stop the event from being passed on and take it yourself. If the ViewGroup wants its own onTouchEvent() to handle the gesture, override this method accordingly and implement the expected gesture handling in onTouchEvent().
(1) The order of events passing through a ViewGroup
- The DOWN event is received in onInterceptTouchEvent() first, as the starting point for the subsequent events.
- The DOWN event can be handled by a child View, or by the current ViewGroup's own onTouchEvent(). To handle it itself, onInterceptTouchEvent() returns true and the corresponding onTouchEvent() should also return true, so that the ViewGroup keeps receiving the subsequent events. Otherwise - onInterceptTouchEvent() returning true but onTouchEvent() returning false - the subsequent events are handed to the ViewGroup's parent. Once both methods return true, subsequent events go straight to the ViewGroup's onTouchEvent(), and onInterceptTouchEvent() no longer receives them.
- If this method returns false for the DOWN event, each subsequent event is passed first to onInterceptTouchEvent() and then on to the onTouchEvent() of the corresponding target child View, where the same event consumption rules apply as for the ViewGroup itself.
- Once the method returns true, the target view receives one last event whose action is changed to CANCEL; the subsequent events are processed by the ViewGroup's onTouchEvent(), and onInterceptTouchEvent() no longer receives them.
(2) Return value
Return true to steal motion events from the children and have them dispatched to this ViewGroup through onTouchEvent(). The current target will receive an ACTION_CANCEL event, and no further messages will be delivered here.
Note: The impact of the cooperation between onInterceptTouchEvent() and onTouchEvent() in ViewGroup on event delivery is mainly reflected in the processing of down events, and subsequent events are affected by this.
boolean onTouchEvent(MotionEvent event)
ViewGroup inherits View's onTouchEvent() without any change.
The return value is as follows:
True means that the event is processed (consuming), so that the subsequent event delivery is terminated.
false means the event is not handled; it is handed back up along the delivery path, with each parent's onTouchEvent() executing in turn until one of them returns true.
void requestDisallowInterceptTouchEvent(boolean disallowIntercept)
This method is called by a child View. When the child calls it with true, every parent up the ViewTree is notified to set a touch-related flag, FLAG_DISALLOW_INTERCEPT, which is cleared when the touch operation ends.
While a ViewGroup carries this flag, its default behavior in boolean dispatchTouchEvent(MotionEvent ev) is to skip calling onInterceptTouchEvent() when distributing events, so it does not intercept them.
Expansion: Dragging and Scaling
Drag operation
Android 3.0 and above provides dedicated API support for drag and drop. Here, instead, we handle the onTouchEvent() method ourselves to respond to a drag operation and move the target View.
The key is detecting the movement distance. By design, the gesture starts from the first finger touching the target view (the DOWN action); as long as a finger is still touching, its movement moves the view. The distance to move is computed from the x and y coordinates of that pointer's MOVE actions. Note that the same pointer must be tracked throughout: since multi-touch is allowed, one pointer serving as the movement reference must be recorded - call it the activePointer. The rule: when the first finger goes down (ACTION_DOWN), record its pointerId as the activePointer; if that finger leaves, pick one of the remaining pointers as the new activePointer.
Compare the new x, y against the last recorded coordinates (every time the activePointer is set, record its x, y as the last coordinates) to compute the distance moved.
```java
// The 'active pointer' is the one currently moving our object.
private int mActivePointerId = INVALID_POINTER_ID;

@Override
public boolean onTouchEvent(MotionEvent ev) {
    // Let the ScaleGestureDetector inspect all events.
    mScaleDetector.onTouchEvent(ev);

    final int action = MotionEventCompat.getActionMasked(ev);

    switch (action) {
        case MotionEvent.ACTION_DOWN: {
            final int pointerIndex = MotionEventCompat.getActionIndex(ev);
            final float x = MotionEventCompat.getX(ev, pointerIndex);
            final float y = MotionEventCompat.getY(ev, pointerIndex);

            // Remember where we started (for dragging)
            mLastTouchX = x;
            mLastTouchY = y;
            // Save the ID of this pointer (for dragging)
            mActivePointerId = MotionEventCompat.getPointerId(ev, 0);
            break;
        }

        case MotionEvent.ACTION_MOVE: {
            // Find the index of the active pointer and fetch its position
            final int pointerIndex = MotionEventCompat.findPointerIndex(ev, mActivePointerId);
            final float x = MotionEventCompat.getX(ev, pointerIndex);
            final float y = MotionEventCompat.getY(ev, pointerIndex);

            // Calculate the distance moved
            final float dx = x - mLastTouchX;
            final float dy = y - mLastTouchY;

            mPosX += dx;
            mPosY += dy;

            invalidate();

            // Remember this touch position for the next move event
            mLastTouchX = x;
            mLastTouchY = y;
            break;
        }

        case MotionEvent.ACTION_UP: {
            mActivePointerId = INVALID_POINTER_ID;
            break;
        }

        case MotionEvent.ACTION_CANCEL: {
            mActivePointerId = INVALID_POINTER_ID;
            break;
        }

        case MotionEvent.ACTION_POINTER_UP: {
            final int pointerIndex = MotionEventCompat.getActionIndex(ev);
            final int pointerId = MotionEventCompat.getPointerId(ev, pointerIndex);

            if (pointerId == mActivePointerId) {
                // This was our active pointer going up. Choose a new
                // active pointer and adjust accordingly.
                final int newPointerIndex = pointerIndex == 0 ? 1 : 0;
                mLastTouchX = MotionEventCompat.getX(ev, newPointerIndex);
                mLastTouchY = MotionEventCompat.getY(ev, newPointerIndex);
                mActivePointerId = MotionEventCompat.getPointerId(ev, newPointerIndex);
            }
            break;
        }
    }
    return true;
}
```
The method above sets mActivePointerId (along with the last touch position) in ACTION_DOWN and ACTION_POINTER_UP, records the new position in ACTION_MOVE while updating the last touch position, and finally clears the recorded pointerId in ACTION_UP and ACTION_CANCEL.
As can be seen, the key to recognizing a drag gesture is recording a pointerId as the movement reference, and that reference must remain continuous across the event stream.
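Stripped of Android specifics, the activePointer bookkeeping described above can be simulated in plain Java to see how the reference pointer is handed off. This is an illustrative sketch only; the class and method names (`ActivePointerTracker`, `down`, `move`, etc.) are hypothetical, and a real View reads coordinates from the MotionEvent rather than taking them as parameters:

```java
// A minimal, platform-free sketch of the "active pointer" bookkeeping
// described above. In a real custom View this state lives in the View
// and the coordinates come from MotionEvent.
class ActivePointerTracker {
    static final int INVALID_POINTER_ID = -1;

    private int activePointerId = INVALID_POINTER_ID;
    private float lastX, lastY;
    float posX, posY; // accumulated drag offset of the view

    // ACTION_DOWN: the first finger becomes the active pointer.
    void down(int pointerId, float x, float y) {
        activePointerId = pointerId;
        lastX = x;
        lastY = y;
    }

    // ACTION_MOVE: only the active pointer contributes to the drag delta.
    void move(int pointerId, float x, float y) {
        if (pointerId != activePointerId) return; // ignore other fingers
        posX += x - lastX;
        posY += y - lastY;
        lastX = x;
        lastY = y;
    }

    // ACTION_POINTER_UP: if the active finger lifted, promote another
    // finger and reset the "last" coordinates to its position.
    void pointerUp(int pointerId, int newPointerId, float newX, float newY) {
        if (pointerId == activePointerId) {
            activePointerId = newPointerId;
            lastX = newX;
            lastY = newY;
        }
    }

    // ACTION_UP / ACTION_CANCEL: no active pointer remains.
    void upOrCancel() {
        activePointerId = INVALID_POINTER_ID;
    }
}
```

Note that `pointerUp` resets `lastX`/`lastY` to the new finger's position; without that reset, the next MOVE event would produce a large spurious jump equal to the distance between the two fingers.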
For recognizing and responding to drag operations, you can also simply use a GestureDetector and respond in its onScroll() callback.
In this context, scroll, drag and pan all refer to the same gesture/operation.
Scale
ScaleGestureDetector can be used to detect scaling (pinch) gestures. The following example recognizes drag and scale together; pay attention to the order in which the detectors consume events:
```java
private ScaleGestureDetector mScaleDetector;
private GestureDetector mGestureDetector;
private float mScaleFactor = 1.f;

public MyCustomView(Context context) {
    ...
    mScaleDetector = new ScaleGestureDetector(context, new ScaleListener());
}
...
@Override
public boolean onTouchEvent(MotionEvent event) {
    boolean retVal = mScaleDetector.onTouchEvent(event);
    retVal = mGestureDetector.onTouchEvent(event) || retVal;
    return retVal || super.onTouchEvent(event);
}

private class ScaleListener
        extends ScaleGestureDetector.SimpleOnScaleGestureListener {
    @Override
    public boolean onScale(ScaleGestureDetector detector) {
        mScaleFactor *= detector.getScaleFactor();

        // Constrain the scale factor to a reasonable range
        mScaleFactor = Math.max(0.1f, Math.min(mScaleFactor, 5.0f));

        // Notify the View to repaint after the scale factor changes
        invalidate();
        return true;
    }
}
```
The usage of GestureDetector was shown earlier; the snippet above only demonstrates the general usage of ScaleGestureDetector.
Note that onTouchEvent() first lets the ScaleGestureDetector inspect the event and then the GestureDetector; the parent class's default behavior is invoked only when neither detector handles the event.
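The clamping inside onScale() above is plain arithmetic and can be checked in isolation. Below is a hypothetical helper (the name `ScaleClamp` is not part of any Android API) applying successive scale factors with the same 0.1–5.0 bounds:

```java
// Plain-Java sketch of the cumulative scale-factor clamping used in
// onScale() above: each detected factor multiplies the current scale,
// and the result is kept within [0.1, 5.0].
class ScaleClamp {
    static float apply(float current, float detectedFactor) {
        float scaled = current * detectedFactor;
        return Math.max(0.1f, Math.min(scaled, 5.0f));
    }
}
```

Because the factor is applied cumulatively, clamping after each event (rather than once at the end) is what keeps the view from drifting past the limits during a long pinch.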
summary
The goal of this article has been to understand the overall process of gesture recognition: matching different patterns against the MotionEvent sequence in onTouchEvent(). Framework classes such as GestureDetector and ScaleGestureDetector exist to make gesture recognition convenient when customizing Views, but once you grasp the underlying idea, you can recognize any desired touch-event pattern yourself. Studying the source code of GestureDetector, as well as how some open-source controls handle gesture operations, is a good place to start.
material
Official Documentation
The main content of this article refers to the development documentation as of API 22.
Using Touch Gestures
File path: /docs/training/gestures/
Input Events
File path: /docs/guide/topics/ui/
Case: PhotoView
When customizing a View, you may need to listen for special gestures. In that case you can define your own detector type, and studying the system's GestureDetector implementation is very helpful. If multiple gestures must be recognized, multiple detector types can be designed for the different gestures, but pay attention to the order in which they consume events, such as the ordering of drag and scale detection above.
The open source project PhotoView displays pictures and supports gestures such as zoom and pan. It contains several gesture-recognition classes, and reading its code is recommended as practice in the "implementation details" of gesture recognition.
Source code download: Project download
That is all the content of this article. I hope it is helpful to your study, and thank you for your support.