Introduction
In "Input system: InputReader handles touch events", the process by which InputReader handles touch events was analyzed: the end result is that touch events are packaged into NotifyMotionArgs and distributed to the next stage. According to "Input system: Creation and startup of InputManagerService", the next stage is the InputClassifier. However, the system does not currently enable the InputClassifier's functionality, so events are sent directly to the InputDispatcher.
"Input system: key event distribution" analyzed the distribution process of key events. Although that analysis focused on key events, it also depicted the overall framework of event distribution. This article uses the same framework to analyze the distribution of touch events, so the shared content is not repeated here.
1. InputDispatcher receives a touch event
```cpp
void InputDispatcher::notifyMotion(const NotifyMotionArgs* args) {
    if (!validateMotionEvent(args->action, args->actionButton, args->pointerCount,
                             args->pointerProperties)) {
        return;
    }

    uint32_t policyFlags = args->policyFlags;
    // Motion events from InputReader/InputClassifier are trusted
    policyFlags |= POLICY_FLAG_TRUSTED;

    android::base::Timer t;
    // 1. Execute the truncation policy for the touch event.
    // Before enqueueing the touch event, query the truncation policy and save the
    // query result into policyFlags.
    mPolicy->interceptMotionBeforeQueueing(args->displayId, args->eventTime,
                                           /*byref*/ policyFlags);
    if (t.duration() > SLOW_INTERCEPTION_THRESHOLD) {
        ALOGW("Excessive delay in interceptMotionBeforeQueueing; took %s ms",
              std::to_string(t.duration().count()).c_str());
    }

    bool needWake;
    { // acquire lock
        mLock.lock();
        if (shouldSendMotionToInputFilterLocked(args)) {
            // ...
        }

        // Package into a MotionEntry
        // Just enqueue a new motion event.
        std::unique_ptr<MotionEntry> newEntry =
                std::make_unique<MotionEntry>(args->id, args->eventTime, args->deviceId,
                                              args->source, args->displayId, policyFlags,
                                              args->action, args->actionButton, args->flags,
                                              args->metaState, args->buttonState,
                                              args->classification, args->edgeFlags,
                                              args->xPrecision, args->yPrecision,
                                              args->xCursorPosition, args->yCursorPosition,
                                              args->downTime, args->pointerCount,
                                              args->pointerProperties, args->pointerCoords, 0, 0);

        // 2. Add the touch event to the inbound queue
        needWake = enqueueInboundEventLocked(std::move(newEntry));
        mLock.unlock();
    } // release lock

    // 3. If necessary, wake up the dispatcher thread to handle the touch event
    if (needWake) {
        mLooper->wake();
    }
}
```
The flow after receiving a touch event is very similar to the flow for a key event:

- Perform a truncation policy query on the touch event. See [1.1 Truncating policy query].
- Add the touch event to the InputDispatcher's inbound queue and wake up the dispatcher thread to process it.
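The enqueue-and-wake step follows a common producer pattern: append the event under a lock, and wake the dispatcher thread only when the queue transitions from empty to non-empty. Below is a minimal standalone sketch of this pattern, not AOSP code; `InboundQueue` and its members are invented for illustration:

```cpp
#include <cassert>
#include <deque>
#include <mutex>
#include <string>

// Minimal model of the "enqueue + conditional wake" pattern used by
// InputDispatcher::notifyMotion(). All names here are invented for illustration.
struct InboundQueue {
    std::deque<std::string> events;
    std::mutex lock;
    int wakeups = 0; // stands in for mLooper->wake()

    // Returns true if the dispatcher thread needed waking, i.e. the queue was
    // empty before this event arrived.
    bool enqueue(const std::string& event) {
        bool needWake;
        { // acquire lock
            std::scoped_lock _l(lock);
            needWake = events.empty();
            events.push_back(event);
        } // release lock
        if (needWake) {
            ++wakeups; // InputDispatcher calls mLooper->wake() here
        }
        return needWake;
    }
};
```

Only the first event of a burst wakes the thread; subsequent events are drained by the already-awake loop.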
1.1 Truncating policy query
```cpp
void NativeInputManager::interceptMotionBeforeQueueing(const int32_t displayId, nsecs_t when,
                                                       uint32_t& policyFlags) {
    bool interactive = mInteractive.load();
    if (interactive) {
        policyFlags |= POLICY_FLAG_INTERACTIVE;
    }
    // Trusted and non-injected events
    if ((policyFlags & POLICY_FLAG_TRUSTED) && !(policyFlags & POLICY_FLAG_INJECTED)) {
        if (policyFlags & POLICY_FLAG_INTERACTIVE) {
            // The device is interactive: trusted, non-injected events are sent directly
            // to the user without going through the truncation policy
            policyFlags |= POLICY_FLAG_PASS_TO_USER;
        } else {
            // Only when the device is non-interactive is the truncation policy executed
            // for touch events
            JNIEnv* env = jniEnv();
            jint wmActions =
                    env->CallIntMethod(mServiceObj,
                                       gServiceClassInfo.interceptMotionBeforeQueueingNonInteractive,
                                       displayId, when, policyFlags);
            if (checkAndClearExceptionFromCallback(env,
                    "interceptMotionBeforeQueueingNonInteractive")) {
                wmActions = 0;
            }
            handleInterceptActions(wmActions, when, /*byref*/ policyFlags);
        }
    } else {
        // Injected or untrusted event:
        // it is passed to the user only in the interactive state.
        // Note the implication: in the non-interactive state it is NOT sent to the user.
        if (interactive) {
            policyFlags |= POLICY_FLAG_PASS_TO_USER;
        }
    }
}

void NativeInputManager::handleInterceptActions(jint wmActions, nsecs_t when,
                                                uint32_t& policyFlags) {
    if (wmActions & WM_ACTION_PASS_TO_USER) {
        policyFlags |= POLICY_FLAG_PASS_TO_USER;
    }
}
```
A touch event must meet all three of the following conditions before the truncation policy is executed:

- The touch event is trusted. Touch events from input devices are trusted.
- The touch event is not injected. Monkey works by injecting touch events, so its events are not processed by the truncation policy.
- The device is in a non-interactive state. Generally, non-interactive means the display is off.
It is also important to note when events skip the truncation policy. There are two situations:

- For trusted, non-injected touch events, if the device is in the interactive state, the event is sent directly to the user. That is, if the display is lit, a touch event generated by an input device is always sent to the window.
- For untrusted or injected touch events, if the device is in the interactive state, the event is also sent directly to the user. That is, if the display is lit, a touch event injected by monkey is also sent directly to the window.

Finally, note that if a touch event is untrusted or injected while the device is non-interactive (usually meaning the screen is off), it is not sent to the user and does not go through the truncation policy; in other words, it is discarded.

Touch events handled in practice usually come from input devices, so they are trusted and not injected. Such an event goes through the truncation policy only when the device is non-interactive (usually meaning the screen is off); if the device is interactive (screen lit), it is dispatched directly to the window.
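The rules above can be condensed into a small decision function. The following is a hedged sketch of the queueing-stage logic only; the enum and function names are mine, not AOSP's:

```cpp
#include <cassert>

// Sketch of the decision made at queueing time, per the summary above.
enum class QueueingDecision { PassToUser, RunTruncationPolicy, Drop };

// trusted:     event comes from a real input device (via InputReader)
// injected:    event was injected (e.g. by monkey)
// interactive: the display is lit
QueueingDecision decideAtQueueing(bool trusted, bool injected, bool interactive) {
    if (trusted && !injected) {
        // Trusted, non-injected events bypass the policy while interactive;
        // otherwise the truncation policy decides their fate.
        return interactive ? QueueingDecision::PassToUser
                           : QueueingDecision::RunTruncationPolicy;
    }
    // Injected or untrusted events: pass only while interactive, otherwise drop.
    return interactive ? QueueingDecision::PassToUser : QueueingDecision::Drop;
}
```

Note the asymmetry: a non-interactive device runs the policy for device events but silently drops injected ones.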
Now let's look at the concrete implementation of the truncation policy:
```java
// PhoneWindowManager.java
public int interceptMotionBeforeQueueingNonInteractive(int displayId, long whenNanos,
        int policyFlags) {
    // 1. If the policy requires the screen to be woken up, truncate this touch event.
    // Whether a motion wakes the screen generally depends on the device's configuration file.
    if ((policyFlags & FLAG_WAKE) != 0) {
        if (wakeUp(whenNanos / 1000000, mAllowTheaterModeWakeFromMotion,
                PowerManager.WAKE_REASON_WAKE_MOTION, "android.policy:MOTION")) {
            // Return 0, indicating that the touch event is truncated
            return 0;
        }
    }

    // 2. Decide whether to truncate the event in the non-interactive state
    if (shouldDispatchInputWhenNonInteractive(displayId, KEYCODE_UNKNOWN)) {
        // This return value means the event is not truncated, i.e. it is
        // dispatched to the user
        return ACTION_PASS_TO_USER;
    }

    // Theater mode: still wake the screen if FLAG_WAKE is set
    if (isTheaterModeEnabled() && (policyFlags & FLAG_WAKE) != 0) {
        wakeUp(whenNanos / 1000000, mAllowTheaterModeWakeFromMotionWhenNotDreaming,
                PowerManager.WAKE_REASON_WAKE_MOTION, "android.policy:MOTION");
    }

    // 3. Truncate the touch event by default (return 0)
    return 0;
}

private boolean shouldDispatchInputWhenNonInteractive(int displayId, int keyCode) {
    // Apply the default display policy to unknown displays as well.
    final boolean isDefaultDisplay = displayId == DEFAULT_DISPLAY
            || displayId == INVALID_DISPLAY;
    final Display display = isDefaultDisplay
            ? mDefaultDisplay
            : mDisplayManager.getDisplay(displayId);
    final boolean displayOff = (display == null || display.getState() == STATE_OFF);

    if (displayOff && !mHasFeatureWatch) {
        return false;
    }

    // displayOff means the display is in the STATE_OFF state; "not off" does not
    // necessarily mean the screen is lit. In the doze state the display is on,
    // but the screen may still look black.
    // So as long as the display is not off and the keyguard is showing,
    // the touch event is not truncated.
    if (isKeyguardShowingAndNotOccluded() && !displayOff) {
        return true;
    }

    // For touch events, the value of keyCode is KEYCODE_UNKNOWN
    if (mHasFeatureWatch && (keyCode == KeyEvent.KEYCODE_BACK
            || keyCode == KeyEvent.KEYCODE_STEM_PRIMARY
            || keyCode == KeyEvent.KEYCODE_STEM_1
            || keyCode == KeyEvent.KEYCODE_STEM_2
            || keyCode == KeyEvent.KEYCODE_STEM_3)) {
        return false;
    }

    // For the default display, if the device is dreaming, the touch event is not
    // truncated, because the doze component needs to receive touch events and may
    // wake the screen.
    if (isDefaultDisplay) {
        IDreamManager dreamManager = getDreamManager();
        try {
            if (dreamManager != null && dreamManager.isDreaming()) {
                return true;
            }
        } catch (RemoteException e) {
            Slog.e(TAG, "RemoteException when checking if dreaming", e);
        }
    }

    // Otherwise, consume events since the user can't see what is being
    // interacted with.
    return false;
}
```
Whether the truncation policy truncates the touch event depends on its return value. There are two cases:

- Returns 0: the touch event is truncated.
- Returns ACTION_PASS_TO_USER: the touch event is not truncated, i.e. it is dispatched to the user/window.

The following lists the outcomes for touch events, under one premise: the device is in a non-interactive state (usually meaning the screen is off).
- The event is passed to the user, i.e. not truncated, when:
  - The keyguard is showing and the display is not off. Note that "not off" does not necessarily mean the screen is lit; it may be in the doze state (a low-power state), and in doze the screen also looks black.
  - The device is dreaming, because the doze component runs in the dream state.
- The event is truncated when:
  - The policy flags contain FLAG_WAKE, which causes the screen to be woken up, so the touch event itself is truncated. FLAG_WAKE generally comes from the input device's configuration file.
  - There is no keyguard, no dream, and no FLAG_WAKE: the event is truncated by default.
Two conclusions can be drawn from the above analysis:

- If a system component such as the keyguard or the doze component is running, the touch event needs to be delivered to it and is therefore not truncated.
- If no such component is running, the touch event is truncated. Truncating the event because it needs to wake the screen is just a special case of this.
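The decision order of interceptMotionBeforeQueueingNonInteractive can be sketched as a simplified function. This is not AOSP code; the boolean inputs stand in for the checks described above, and theater mode is omitted:

```cpp
#include <cassert>

constexpr int ACTION_PASS_TO_USER = 1; // same meaning as the policy's constant

// Simplified model of the non-interactive truncation decision:
// wakeFlagSet + wokeScreen: FLAG_WAKE was set and wakeUp() succeeded
// keyguardOnNonOffDisplay:  keyguard showing and display not STATE_OFF
// dreaming:                 the doze/dream component is running
int interceptNonInteractive(bool wakeFlagSet, bool wokeScreen,
                            bool keyguardOnNonOffDisplay, bool dreaming) {
    if (wakeFlagSet && wokeScreen) {
        return 0; // truncated: the event's job was only to wake the screen
    }
    if (keyguardOnNonOffDisplay || dreaming) {
        return ACTION_PASS_TO_USER; // a running component consumes the event
    }
    return 0; // truncated by default: nobody can see what is being touched
}
```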
2. InputDispatcher distributes touch events
From "Input system: Creation and startup of InputManagerService", we know that InputDispatcher processes events in its inbound queue through a thread loop and handles only one event at a time.
```cpp
void InputDispatcher::dispatchOnce() {
    nsecs_t nextWakeupTime = LONG_LONG_MAX;
    { // acquire lock
        std::scoped_lock _l(mLock);
        mDispatcherIsAlive.notify_all();

        if (!haveCommandsLocked()) {
            // 1. Dispatch one touch event
            dispatchOnceInnerLocked(&nextWakeupTime);
        }

        // Touch event dispatch does not generate commands here
        if (runCommandsLockedInterruptible()) {
            nextWakeupTime = LONG_LONG_MIN;
        }

        // 2. Compute the next time the thread must wake up to handle ANR
        const nsecs_t nextAnrCheck = processAnrsLocked();
        nextWakeupTime = std::min(nextWakeupTime, nextAnrCheck);

        if (nextWakeupTime == LONG_LONG_MAX) {
            mDispatcherEnteredIdle.notify_all();
        }
    } // release lock

    // 3. Sleep for the computed duration
    nsecs_t currentTime = now();
    int timeoutMillis = toMillisecondTimeoutDelay(currentTime, nextWakeupTime);
    mLooper->pollOnce(timeoutMillis);
}
```
The thread loop processes touch events as follows:

- Dispatch one touch event.
- When the event is dispatched to a window, a response timeout is computed for that window, and this timeout feeds into the next thread wakeup time.
- Using the wakeup time computed in the previous step, the thread computes how long it finally needs to sleep. When the thread wakes up, it checks whether the window that received the touch event has responded within the timeout; if not, an ANR is triggered.
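The last step turns a wakeup deadline into a poll timeout. The sketch below captures the spirit of toMillisecondTimeoutDelay() without claiming to match AOSP's exact rounding: LONG_LONG_MAX means "sleep until explicitly woken" (-1), and a deadline at or before the current time means "wake immediately" (0):

```cpp
#include <cassert>
#include <climits>
#include <cstdint>

// currentTime and nextWakeupTime are in nanoseconds, as in InputDispatcher.
// Returns the argument for pollOnce(): -1 blocks indefinitely, 0 returns at once.
int millisecondTimeoutDelay(int64_t currentTime, int64_t nextWakeupTime) {
    if (nextWakeupTime == LLONG_MAX) {
        return -1; // nothing scheduled: block until mLooper->wake()
    }
    if (nextWakeupTime <= currentTime) {
        return 0;  // deadline already passed (or LONG_LONG_MIN): wake immediately
    }
    int64_t deltaMs = (nextWakeupTime - currentTime) / 1000000; // ns -> ms
    return static_cast<int>(deltaMs > INT_MAX ? INT_MAX : deltaMs);
}
```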
Now let's see how a single touch event is dispatched:
```cpp
void InputDispatcher::dispatchOnceInnerLocked(nsecs_t* nextWakeupTime) {
    nsecs_t currentTime = now();

    if (!mDispatchEnabled) {
        resetKeyRepeatLocked();
    }
    if (mDispatchFrozen) {
        return;
    }

    // Optimization for app-switch latency:
    // mAppSwitchDueTime is the deadline for the app switch. If it is earlier than
    // the current time, the app switch has timed out, and all unprocessed events
    // queued before the app-switch key event will be dropped.
    bool isAppSwitchDue = mAppSwitchDueTime <= currentTime;
    if (mAppSwitchDueTime < *nextWakeupTime) {
        *nextWakeupTime = mAppSwitchDueTime;
    }

    // mPendingEvent is the event currently being processed
    if (!mPendingEvent) {
        if (mInboundQueue.empty()) {
            // ...
        } else {
            // 1. Take an event from the inbound queue
            mPendingEvent = mInboundQueue.front();
            mInboundQueue.pop_front();
            traceInboundQueueLengthLocked();
        }

        // If this event will be passed to the user, tell PowerManagerService that
        // there is user activity, which extends the screen-on time.
        if (mPendingEvent->policyFlags & POLICY_FLAG_PASS_TO_USER) {
            pokeUserActivityLocked(*mPendingEvent);
        }
    }

    ALOG_ASSERT(mPendingEvent != nullptr);
    bool done = false;
    // Determine the reason for dropping the event
    DropReason dropReason = DropReason::NOT_DROPPED;
    if (!(mPendingEvent->policyFlags & POLICY_FLAG_PASS_TO_USER)) {
        // Truncated by the truncation policy
        dropReason = DropReason::POLICY;
    } else if (!mDispatchEnabled) {
        // Usually because the system is shutting down
        dropReason = DropReason::DISABLED;
    }

    if (mNextUnblockedEvent == mPendingEvent) {
        mNextUnblockedEvent = nullptr;
    }

    switch (mPendingEvent->type) {
        // ...
        case EventEntry::Type::MOTION: {
            std::shared_ptr<MotionEntry> motionEntry =
                    std::static_pointer_cast<MotionEntry>(mPendingEvent);
            if (dropReason == DropReason::NOT_DROPPED && isAppSwitchDue) {
                // The app switch timed out, so this touch event is dropped
                dropReason = DropReason::APP_SWITCH;
            }
            if (dropReason == DropReason::NOT_DROPPED &&
                isStaleEvent(currentTime, *motionEntry)) {
                // Events older than 10s are stale
                dropReason = DropReason::STALE;
            }
            // Optimization for unresponsive applications: drop all touch events
            // queued before mNextUnblockedEvent
            if (dropReason == DropReason::NOT_DROPPED && mNextUnblockedEvent) {
                dropReason = DropReason::BLOCKED;
            }
            // 2. Dispatch the touch event
            done = dispatchMotionLocked(currentTime, motionEntry, &dropReason, nextWakeupTime);
            break;
        }
        // ...
    }

    // 3. If the event has been handled, reset some state such as mPendingEvent.
    // done == true means the event was handled (either dropped or dispatched).
    // done == false means we don't yet know how to handle the event, so the thread
    // sleeps and will retry this event when it is woken up again.
    if (done) {
        if (dropReason != DropReason::NOT_DROPPED) {
            dropInboundEventLocked(*mPendingEvent, dropReason);
        }
        mLastDropReason = dropReason;

        // Reset mPendingEvent
        releasePendingEventLocked();
        // Wake up immediately to handle the next event
        *nextWakeupTime = LONG_LONG_MIN; // force next poll to wake up immediately
    }
}
```
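The drop-reason checks above form a fixed precedence: the first matching reason wins. A hedged standalone sketch of that precedence (names mirror the AOSP enum, but the function itself is invented for illustration):

```cpp
#include <cassert>

enum class DropReason { NotDropped, Policy, Disabled, AppSwitch, Stale, Blocked };

// First matching reason wins; NotDropped means the event continues to dispatch.
DropReason computeDropReason(bool passToUser, bool dispatchEnabled,
                             bool appSwitchDue, bool stale, bool blocked) {
    if (!passToUser) return DropReason::Policy;        // truncated by the policy
    if (!dispatchEnabled) return DropReason::Disabled; // system shutting down
    if (appSwitchDue) return DropReason::AppSwitch;    // app switch timed out
    if (stale) return DropReason::Stale;               // event older than ~10s
    if (blocked) return DropReason::Blocked;           // queued behind an unresponsive window
    return DropReason::NotDropped;
}
```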
"Input system: key event distribution" already analyzed InputDispatcher's thread loop. Touch events are distributed through InputDispatcher::dispatchMotionLocked():
```cpp
bool InputDispatcher::dispatchMotionLocked(nsecs_t currentTime, std::shared_ptr<MotionEntry> entry,
                                           DropReason* dropReason, nsecs_t* nextWakeupTime) {
    if (!entry->dispatchInProgress) {
        entry->dispatchInProgress = true;
    }

    // 1. If there is a reason to drop the touch event, skip the rest of the
    // dispatch process.
    if (*dropReason != DropReason::NOT_DROPPED) {
        setInjectionResult(*entry,
                           *dropReason == DropReason::POLICY
                                   ? InputEventInjectionResult::SUCCEEDED
                                   : InputEventInjectionResult::FAILED);
        return true;
    }

    bool isPointerEvent = entry->source & AINPUT_SOURCE_CLASS_POINTER;

    std::vector<InputTarget> inputTargets;
    bool conflictingPointerActions = false;
    InputEventInjectionResult injectionResult;
    if (isPointerEvent) {
        // 2. For touch events, find the touched window.
        // The touched window is saved into inputTargets.
        injectionResult =
                findTouchedWindowTargetsLocked(currentTime, *entry, inputTargets, nextWakeupTime,
                                               &conflictingPointerActions);
    } else {
        // ...
    }
    if (injectionResult == InputEventInjectionResult::PENDING) {
        // Returning false means we don't yet know how to handle this event,
        // which makes the thread sleep; the event is retried when the thread
        // is woken up next time.
        return false;
    }

    // Reaching here means the touch event has been processed, so save the result.
    // Anything other than InputEventInjectionResult::PENDING (permission denied,
    // failed, or succeeded) means the event is done.
    setInjectionResult(*entry, injectionResult);
    if (injectionResult == InputEventInjectionResult::PERMISSION_DENIED) {
        ALOGW("Permission denied, dropping the motion (isPointer=%s)", toString(isPointerEvent));
        return true;
    }
    if (injectionResult != InputEventInjectionResult::SUCCEEDED) {
        CancelationOptions::Mode mode(isPointerEvent
                                              ? CancelationOptions::CANCEL_POINTER_EVENTS
                                              : CancelationOptions::CANCEL_NON_POINTER_EVENTS);
        CancelationOptions options(mode, "input event injection failed");
        synthesizeCancelationEventsForMonitorsLocked(options);
        return true;
    }

    // Reaching here means a touched window was successfully found.
    // Add monitor channels from event's or focused display.
    // 3. Before dispatching to the window, save the global monitors into inputTargets.
    // "Show taps" and "Pointer location" in the developer options use global monitors.
    addGlobalMonitoringTargetsLocked(inputTargets, getTargetDisplayId(*entry));

    if (isPointerEvent) {
        // ... portal window handling omitted
    }
    if (conflictingPointerActions) {
        // ...
    }

    // 4. Dispatch the event to all windows in inputTargets
    dispatchEventLocked(currentTime, entry, inputTargets);
    return true;
}
```
The dispatch of a touch event can be roughly summarized as follows:

- If there is a reason to discard the touch event, it skips the rest of the dispatch process, i.e. it is dropped.
- Touch events are normally sent to windows, so a touched window must be found for the event. The window is ultimately saved into inputTargets. See [2.1 Finding a touch window].
- After the touched window is saved, the global monitor windows are also saved into inputTargets. For example, "Show taps" and "Pointer location" in the developer options are implemented with such windows.
- Start the dispatch loop and dispatch the touch event to the windows saved in inputTargets. Since "Input system: key event distribution" has already analyzed this process, it is not repeated in this article.
2.1 Finding a touch window
```cpp
InputEventInjectionResult InputDispatcher::findTouchedWindowTargetsLocked(
        nsecs_t currentTime, const MotionEntry& entry, std::vector<InputTarget>& inputTargets,
        nsecs_t* nextWakeupTime, bool* outConflictingPointerActions) {
    // ...

    // 6. For non-DOWN events, get the TouchState saved by the DOWN event.
    // TouchState stores the windows that received the DOWN event.
    const TouchState* oldState = nullptr;
    TouchState tempTouchState;
    std::unordered_map<int32_t, TouchState>::iterator oldStateIt =
            mTouchStatesByDisplay.find(displayId);
    if (oldStateIt != mTouchStatesByDisplay.end()) {
        oldState = &(oldStateIt->second);
        tempTouchState.copyFrom(*oldState);
    }

    // ...

    // newGesture means the first finger went down; the second condition means the
    // current window supports split touch and another finger went down.
    if (newGesture || (isSplit && maskedAction == AMOTION_EVENT_ACTION_POINTER_DOWN)) {
        /* Case 1: New splittable pointer going down, or need target for hover or scroll. */
        // Get the x, y coordinates of the touch point
        int32_t x;
        int32_t y;
        int32_t pointerIndex = getMotionEventActionPointerIndex(action);
        if (isFromMouse) {
            // ...
        } else {
            x = int32_t(entry.pointerCoords[pointerIndex].getAxisValue(AMOTION_EVENT_AXIS_X));
            y = int32_t(entry.pointerCoords[pointerIndex].getAxisValue(AMOTION_EVENT_AXIS_Y));
        }
        // Whether this is the first finger going down
        bool isDown = maskedAction == AMOTION_EVENT_ACTION_DOWN;
        // 1. For the DOWN event, find the touched window from the x,y coordinates.
        // addOutsideTargets means: only when the first finger goes down and no touched
        // window is found, windows that accept OUTSIDE events are saved into tempTouchState.
        newTouchedWindowHandle = findTouchedWindowAtLocked(displayId, x, y, &tempTouchState,
                                                           isDown /*addOutsideTargets*/,
                                                           true /*addPortalWindows*/);

        // ... window exception handling omitted ...

        // 2. Get all gesture monitors
        const std::vector<TouchedMonitor> newGestureMonitors = isDown
                ? selectResponsiveMonitorsLocked(
                          findTouchedGestureMonitorsLocked(displayId,
                                                           tempTouchState.portalWindows))
                : std::vector<TouchedMonitor>{};

        // Neither a touched window nor a gesture monitor was found: finding the
        // touched window has failed.
        if (newTouchedWindowHandle == nullptr && newGestureMonitors.empty()) {
            ALOGI("Dropping event because there is no touchable window or gesture monitor at "
                  "(%d, %d) in display %" PRId32 ".",
                  x, y, displayId);
            injectionResult = InputEventInjectionResult::FAILED;
            goto Failed;
        }

        // Reaching here means a touched window or a gesture monitor was found.
        if (newTouchedWindowHandle != nullptr) {
            // The window is about to be saved; compute its target flags first.
            int32_t targetFlags = InputTarget::FLAG_FOREGROUND | InputTarget::FLAG_DISPATCH_AS_IS;
            if (isSplit) {
                targetFlags |= InputTarget::FLAG_SPLIT;
            }
            if (isWindowObscuredAtPointLocked(newTouchedWindowHandle, x, y)) {
                targetFlags |= InputTarget::FLAG_WINDOW_IS_OBSCURED;
            } else if (isWindowObscuredLocked(newTouchedWindowHandle)) {
                targetFlags |= InputTarget::FLAG_WINDOW_IS_PARTIALLY_OBSCURED;
            }

            // Update hover state.
            if (maskedAction == AMOTION_EVENT_ACTION_HOVER_EXIT) {
                newHoverWindowHandle = nullptr;
            } else if (isHoverAction) {
                newHoverWindowHandle = newTouchedWindowHandle;
            }

            // Update the temporary touch state.
            // If the window supports split touch, record the specific pointer id.
            BitSet32 pointerIds;
            if (isSplit) {
                uint32_t pointerId = entry.pointerProperties[pointerIndex].id;
                pointerIds.markBit(pointerId);
            }
            // 3. tempTouchState saves the found touched window.
            // If a real touched window was found, it is saved here; if a window that
            // accepts OUTSIDE events was saved earlier, it is updated here.
            tempTouchState.addOrUpdateWindow(newTouchedWindowHandle, targetFlags, pointerIds);
        } else if (tempTouchState.windows.empty()) {
            // If no window is touched, set split to true. This will allow the next
            // pointer down to be delivered to a new window which supports split touch.
            tempTouchState.split = true;
        }
        if (isDown) {
            // 4. When the first finger goes down, tempTouchState saves the
            // gesture monitors
            tempTouchState.addGestureMonitors(newGestureMonitors);
        }
    } else {
        // ...
    }

    if (newHoverWindowHandle != mLastHoverWindowHandle) {
        // ...
    }

    {
        // Permission checks ...
    }

    // Save windows that receive AMOTION_EVENT_ACTION_OUTSIDE
    if (maskedAction == AMOTION_EVENT_ACTION_DOWN) {
        // ...
    }

    // Save the wallpaper window when the first finger goes down
    if (maskedAction == AMOTION_EVENT_ACTION_DOWN) {
        // ...
    }

    // Reaching here means nothing went wrong
    injectionResult = InputEventInjectionResult::SUCCEEDED;

    // 5. Save the touched windows and gesture monitors from tempTouchState
    // into inputTargets
    for (const TouchedWindow& touchedWindow : tempTouchState.windows) {
        addWindowTargetLocked(touchedWindow.windowHandle, touchedWindow.targetFlags,
                              touchedWindow.pointerIds, inputTargets);
    }
    for (const TouchedMonitor& touchedMonitor : tempTouchState.gestureMonitors) {
        addMonitoringTargetLocked(touchedMonitor.monitor, touchedMonitor.xOffset,
                                  touchedMonitor.yOffset, inputTargets);
    }

    // Drop the outside or hover touch windows since we will not care about them
    // in the next iteration.
    tempTouchState.filterNonAsIsTouchWindows();

Failed:
    // ...

    // 6. Cache tempTouchState
    if (maskedAction != AMOTION_EVENT_ACTION_SCROLL) {
        if (tempTouchState.displayId >= 0) {
            mTouchStatesByDisplay[displayId] = tempTouchState;
        } else {
            mTouchStatesByDisplay.erase(displayId);
        }
    }

    return injectionResult;
}
```
The process of finding a touched window for a touch event is extremely complicated. Even though much of the code is omitted here, readers may still find it dizzying.
For DOWN events:

- Find the touched window from the x,y coordinates. See [2.1.1 Find the touch window according to the coordinates].
- Get all the gesture monitor windows.
- Save the touched window into tempTouchState.
- Save all the gesture monitor windows into tempTouchState.
- For every window in tempTouchState, create an InputTarget and save it into the parameter inputTargets. See [2.1.2 Save the window].
- Cache tempTouchState in mTouchStatesByDisplay.
A gesture monitor is a window added to implement gesture features. What is a gesture feature? For example, swiping from the left/right edge of the screen toward the center triggers a back gesture; this replaces the navigation keys. The next article will analyze how this gesture feature works.
For non-DOWN events (generally MOVE and UP events):

- Get the tempTouchState cached by the DOWN event. Because tempTouchState stores the touched windows and gesture monitors that handled the DOWN event, non-DOWN events are sent to the same windows.
- Repeat step 5 of the DOWN-event flow.
With this much code, we need a holistic view: finding a touched window for a touch event ultimately means saving the found windows into the parameter inputTargets, and the event is then dispatched to the windows saved in inputTargets.
2.1.1 Find the touch window according to the coordinates
```cpp
// addOutsideTargets is true when the first finger goes down
// addPortalWindows is true
// ignoreDragWindow defaults to false
sp<InputWindowHandle> InputDispatcher::findTouchedWindowAtLocked(int32_t displayId, int32_t x,
                                                                 int32_t y, TouchState* touchState,
                                                                 bool addOutsideTargets,
                                                                 bool addPortalWindows,
                                                                 bool ignoreDragWindow) {
    if ((addPortalWindows || addOutsideTargets) && touchState == nullptr) {
        LOG_ALWAYS_FATAL(
                "Must provide a valid touch state if adding portal windows or outside targets");
    }
    // Traverse windows from front to back to find touched window.
    const std::vector<sp<InputWindowHandle>>& windowHandles = getWindowHandlesLocked(displayId);
    for (const sp<InputWindowHandle>& windowHandle : windowHandles) {
        // ignoreDragWindow defaults to false
        if (ignoreDragWindow && haveSameToken(windowHandle, mDragState->dragWindow)) {
            continue;
        }
        // Get window info
        const InputWindowInfo* windowInfo = windowHandle->getInfo();
        // Match windows belonging to the specified display
        if (windowInfo->displayId == displayId) {
            auto flags = windowInfo->flags;
            // The window must be visible
            if (windowInfo->visible) {
                // The window must be touchable
                if (!flags.test(InputWindowInfo::Flag::NOT_TOUCHABLE)) {
                    // Touch-modal: the window can gain focus, and touches outside the
                    // window are not allowed to pass to windows behind it
                    bool isTouchModal = !flags.test(InputWindowInfo::Flag::NOT_FOCUSABLE) &&
                            !flags.test(InputWindowInfo::Flag::NOT_TOUCH_MODAL);
                    // The window is touch-modal, or the touched point falls inside it
                    if (isTouchModal || windowInfo->touchableRegionContainsPoint(x, y)) {
                        int32_t portalToDisplayId = windowInfo->portalToDisplayId;
                        // If it is a portal window
                        if (portalToDisplayId != ADISPLAY_ID_NONE &&
                            portalToDisplayId != displayId) {
                            if (addPortalWindows) {
                                // For the monitoring channels of the display:
                                // touchState saves the portal window
                                touchState->addPortalWindow(windowHandle);
                            }
                            // Recurse to find the touched window on the portal display
                            return findTouchedWindowAtLocked(portalToDisplayId, x, y, touchState,
                                                             addOutsideTargets, addPortalWindows);
                        }
                        // Not a portal window: return the found window directly
                        return windowHandle;
                    }
                }

                // Reaching here means this window is not the touched window: it is
                // neither touch-modal nor does it contain the touch point.
                // When the first finger goes down, addOutsideTargets is true.
                // NOT_TOUCH_MODAL combined with WATCH_OUTSIDE_TOUCH: if the first
                // finger lands outside the window, the window will receive the
                // MotionEvent.ACTION_OUTSIDE event.
                if (addOutsideTargets &&
                    flags.test(InputWindowInfo::Flag::WATCH_OUTSIDE_TOUCH)) {
                    touchState->addOrUpdateWindow(windowHandle,
                                                  InputTarget::FLAG_DISPATCH_AS_OUTSIDE,
                                                  BitSet32(0));
                }
            }
        }
    }
    return nullptr;
}
```
This involves the concept of a portal window. Since I haven't found where it is actually used, my rough guess is that when the device is connected to an external display, a window on the main screen is used to operate the external display. The following analysis skips the portal window part. Of course, once you have mastered the touch event distribution process, analyzing portal windows later should pose no problem.
Finding the window at the touch point really amounts to traversing all windows from front to back and picking the first window that meets the conditions.
The window must first meet these preconditions:

- The window is on the specified display.
- The window is visible.
- The window is touchable.

Once the preconditions are met, the window is considered the touched window if either of the following holds:

- The window is touch-modal: it can gain focus, and touches outside the window are not allowed to pass to windows behind it.
- The x,y coordinates of the touch point fall inside the window's touchable region.
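The traversal just described can be modeled with a short standalone sketch, not AOSP code: `Window` and `findTouchedWindowAt` are invented for illustration, with rectangles standing in for the real touchable region:

```cpp
#include <cassert>
#include <string>
#include <vector>

struct Window {
    std::string name;
    bool visible = true;
    bool touchable = true;   // models !NOT_TOUCHABLE
    bool touchModal = false; // models focusable && !NOT_TOUCH_MODAL
    int left = 0, top = 0, right = 0, bottom = 0;

    bool contains(int x, int y) const {
        return x >= left && x < right && y >= top && y < bottom;
    }
};

// Windows are ordered front (index 0) to back, as in getWindowHandlesLocked().
// Returns the first visible, touchable window that is touch-modal or contains
// the point; nullptr if none qualifies.
const Window* findTouchedWindowAt(const std::vector<Window>& windows, int x, int y) {
    for (const Window& w : windows) {
        if (!w.visible || !w.touchable) continue;
        if (w.touchModal || w.contains(x, y)) {
            return &w;
        }
    }
    return nullptr;
}
```

A touch-modal window further back (e.g. a full-screen activity) catches any point that floating windows in front of it do not claim.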
2.1.2 Save the window
```cpp
// InputDispatcher saves the touched window
void InputDispatcher::addWindowTargetLocked(const sp<InputWindowHandle>& windowHandle,
                                            int32_t targetFlags, BitSet32 pointerIds,
                                            std::vector<InputTarget>& inputTargets) {
    std::vector<InputTarget>::iterator it =
            std::find_if(inputTargets.begin(), inputTargets.end(),
                         [&windowHandle](const InputTarget& inputTarget) {
                             return inputTarget.inputChannel->getConnectionToken() ==
                                     windowHandle->getToken();
                         });

    const InputWindowInfo* windowInfo = windowHandle->getInfo();

    // Create an InputTarget and save it into the parameter inputTargets
    if (it == inputTargets.end()) {
        InputTarget inputTarget;
        std::shared_ptr<InputChannel> inputChannel =
                getInputChannelLocked(windowHandle->getToken());
        if (inputChannel == nullptr) {
            ALOGW("Window %s already unregistered input channel",
                  windowHandle->getName().c_str());
            return;
        }
        inputTarget.inputChannel = inputChannel;
        inputTarget.flags = targetFlags;
        inputTarget.globalScaleFactor = windowInfo->globalScaleFactor;
        inputTarget.displaySize = int2(windowHandle->getInfo()->displayWidth,
                                       windowHandle->getInfo()->displayHeight);
        inputTargets.push_back(inputTarget);
        it = inputTargets.end() - 1;
    }

    ALOG_ASSERT(it->flags == targetFlags);
    ALOG_ASSERT(it->globalScaleFactor == windowInfo->globalScaleFactor);

    // After saving the InputTarget, save the window's coordinate transform.
    // This transform converts display coordinates into window coordinates.
    it->addPointers(pointerIds, windowInfo->transform);
}

// InputDispatcher saves a gesture monitor
void InputDispatcher::addMonitoringTargetLocked(const Monitor& monitor, float xOffset,
                                                float yOffset,
                                                std::vector<InputTarget>& inputTargets) {
    InputTarget target;
    target.inputChannel = monitor.inputChannel;
    target.flags = InputTarget::FLAG_DISPATCH_AS_IS;
    ui::Transform t;
    t.set(xOffset, yOffset);
    target.setDefaultPointerTransform(t);
    inputTargets.push_back(target);
}
```
For touch events, both touched windows and gesture monitors are converted into InputTargets and saved into the parameter inputTargets. When the dispatch loop starts later, the touch event is sent to the windows saved in inputTargets.
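The transform stored with each InputTarget maps display coordinates into window-local coordinates. For a plain unscaled window (or a gesture monitor, where ui::Transform::set(xOffset, yOffset) records just an offset) this is a simple translation. The sketch below is an illustrative stand-in, not the real ui::Transform, which also supports scale and rotation:

```cpp
#include <cassert>
#include <utility>

// Toy stand-in for the translation-only case of ui::Transform.
struct Transform2D {
    float tx = 0, ty = 0;
    void set(float x, float y) { tx = x; ty = y; }
    // Map a display-space point into window-local space.
    std::pair<float, float> transform(float x, float y) const {
        return {x + tx, y + ty};
    }
};
```

For example, a window whose top-left corner is at display position (100, 200) would carry the offset (-100, -200), so a tap at display (150, 260) lands at (50, 60) inside the window.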
Conclusion
This article analyzed the overall distribution process of touch events; many details, such as how event dispatch is optimized when a window is unresponsive, were not analyzed in depth. However, once you master the basic process, you can analyze those details yourself.

Some parts of this article may jump around, because that knowledge was covered in previous articles. If this article feels difficult to read, please read the earlier articles first to build a solid foundation.

Theory articles are always a bit dry, but that does not stop us from moving forward. The next article will build on this one to analyze how the gesture features that replace the system navigation bar are implemented, which will also conclude the Input system series.