JavaScript touch events and mobile-specific considerations

Anton Ioffe - September 25th 2023 - 19 minutes read

In the evolving landscape of web development, one aspect of JavaScript that is garnering increasing attention is touch events. Essential to modern mobile web applications, understanding touch events and harnessing their functionality is no longer optional for developers. This comprehensive guide addresses the topic head-on, offering not just theoretical insights but also practical applications and best practices for optimizing touch events in your applications.

Throughout the article, we will explore the fundamental touch event interfaces, delve into the core touch event types, and show how they function in real-world scenarios. More than just defining terms, we will provide hands-on code examples and discuss common mistakes and how to avoid them. By the end, you will be able to handle not only single touch points but also multiple-touch scenarios, an essential skill in today's multi-touch environment.

So, whether you are aiming to enhance the user experience of your web application, or you are simply curious to find out more about JavaScript touch and pointer events, this deep-dive is guaranteed to deliver value. This is more than just an exploration; it is an opportunity to stay ahead in the fast-paced world of modern web development. Let's dive deep into the fascinating world of JavaScript touch events together!

Understanding Touch Events and Their Importance

What are Touch Events?

Touch events are low-level DOM events that report finger (or stylus) activity on touch screens or trackpads. They have become vital in making web content work well with touch-based user interfaces. Touch events are similar to mouse events, with the added advantage of supporting simultaneous touches at multiple locations on the touch surface. This built-in support for multi-touch interaction underscores the importance of touch events in advancing UI experiences, especially on portable devices with touchscreens.

Due to the abundance of touch-enabled devices, many modern web applications directly process touch-based input using touch events instead of relying on interpreted mouse events which do not support concurrent user input. This shift has greatly enhanced user experiences, particularly in applications that require touch-based gestures or other touch-driven interfaces.

Understanding Interfaces

To truly grasp the mechanics of touch events, it's important to understand the three types of touch event interfaces: Touch, TouchEvent, and TouchList.

  • Touch: This interface encapsulates the properties of a single touch point. A touch point represents a point of contact on the touch-sensitive surface, typically referred to as the "surface". This includes information such as the position of the touch point relative to the viewport of the browser.

  • TouchEvent: This interface represents an event fired when the state of touches on the surface changes. In essence, an application receives touch events during the start (touchstart), move (touchmove), and end (touchend) phases of a touch interaction. According to the standard, a touch interaction starts when a finger (or stylus) first touches the surface; other fingers may subsequently touch or move across the surface, and the interaction ends when the fingers lift away from the surface.

  • TouchList: This interface represents a list of the touch points currently active on the surface. It provides features to access the properties of individual touches within the list.
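
To see how these interfaces surface in practice, here is a minimal sketch (the element id is hypothetical) that logs the three TouchList views every TouchEvent exposes:

const demo = document.getElementById('demo'); // hypothetical element

demo.addEventListener('touchstart', (e) => {
    console.log(e.touches.length);        // all touches currently on the screen
    console.log(e.targetTouches.length);  // touches that started on this element
    console.log(e.changedTouches.length); // touches that changed in this event
});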

Why are Touch Events Critical?

In the context of modern web development, touch events play a pivotal role, especially for mobile applications. Given the widespread adoption of touchscreen devices, creating web content that supports touch-based interactions has become non-negotiable.

Touch events open up a range of possibilities for web applications, such as gesture recognition, multi-touch interaction, and drag-and-drop, among other touch-specific behaviors. They are integral to making applications intuitive, responsive, and user-friendly.

Every developer must understand how these touch events work; they are the backbone of many user interaction features we take for granted on mobile devices today. Therefore, a deep knowledge of touch events is essential for developing high-quality applications with satisfying, smooth, and accessible user experiences – a standard that all modern web applications should strive to achieve.

The following question is good food for thought: how can developers leverage the potential of touch events to further enhance the interactive capabilities of their web applications?

Deciphering Touch Event Types and Their Uses

Let's plunge into the crux of touch events in JavaScript: touchstart, touchmove, touchend, and touchcancel. We will explore what each event accomplishes, how to put it to use in real code, and how to avert the issues associated with it.

Touchstart Event

The touchstart event is akin to the initial handshake of a touch interaction, fired when a touch point is first registered on the touch surface. Think of it as the moment of first contact, when a user's finger first touches the device's screen.

Here's a concrete example of deploying a touchstart event handler on a chosen DOM element:

someElement.addEventListener('touchstart', (e) => {
    console.log(e.touches); // Logs all current touches on the screen
}, false);

A common mistake is writing touchstart handlers that account for only a single touch. While single touches are straightforward, it is essential to verify that your code behaves correctly with multiple concurrent touch points.
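
As a minimal sketch, a multi-touch-aware handler can iterate over changedTouches, which contains only the touches added by this particular event:

someElement.addEventListener('touchstart', (e) => {
    // changedTouches holds only the touches that began with this event,
    // so every newly placed finger is handled, not just the first
    for (let i = 0; i < e.changedTouches.length; i++) {
        const touch = e.changedTouches[i];
        console.log('New touch at', touch.pageX, touch.pageY);
    }
}, false);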

Touchmove Event

The touchmove event is next in line, fired whenever an active touch point moves in any direction. It is the touch counterpart of click-and-drag with a mouse, enabling interactive dragging behavior.

Here is an example appropriately illustrating a touchmove event:

someElement.addEventListener('touchmove', (e) => {
    let touchLocation = e.targetTouches[0]; // Get the first touch point
    console.log(touchLocation.pageX, touchLocation.pageY); // Log the position of the touch
}, false);

Take note: the touchmove event fires at a very high rate, so a handler that performs heavy work can quickly become a performance bottleneck. Keep your touchmove handlers lean and optimized.
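
One common mitigation, sketched below, is to record the latest coordinates in the handler and defer the heavy work to requestAnimationFrame, so it runs at most once per frame:

let latestTouch = null;
let frameRequested = false;

someElement.addEventListener('touchmove', (e) => {
    const touch = e.targetTouches[0];
    latestTouch = { x: touch.pageX, y: touch.pageY };
    if (!frameRequested) {
        frameRequested = true;
        requestAnimationFrame(() => {
            frameRequested = false;
            // Do the expensive work (layout, drawing) at most once per frame
            console.log('Processing touch at', latestTouch.x, latestTouch.y);
        });
    }
}, false);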

Touchend Event

The touchend event takes center stage when a touch point is lifted, i.e., when the user lifts their finger off the touch surface. This can be equated to releasing the mouse button in a traditional user interface.

Here is an example showing how to register a touchend event handler:

someElement.addEventListener('touchend', (e) => {
    console.log('Touch ended!');
}, false);

A widespread misconception is that a touchend event will always follow every touchstart event. However, in scenarios where the touch is abruptly terminated, a touchcancel event occurs instead. Always ensure your code accommodates this possibility.

Touchcancel Event

Last but not least is the touchcancel event. It fires when an ongoing touch is disrupted in an implementation-specific way, perhaps because the touch point drifts outside the window or because too many touch points are initiated.

Below is a handling mechanism for a touchcancel event:

// Assume an application where your user can draw on a canvas
const canvas = document.getElementById('drawing-canvas'); // hypothetical canvas element
const context = canvas.getContext('2d');

canvas.addEventListener('touchcancel', (e) => {
    // Once the touch gets canceled, clear the canvas
    context.clearRect(0, 0, canvas.width, canvas.height);
    console.log('Touch was canceled, canvas cleared');
}, false);

Frequently, developers overlook the touchcancel event handlers, considering them non-essential. However, it's paramount to anticipate any ongoing touch operation being interrupted unexpectedly and design your code to handle such scenarios to preclude unforeseen behavior.

Touch events introduce a unique array of interactive possibilities for your users, a multitude of touch traces they can create as they interact with your site. Are you employing all these touch events efficiently, guaranteeing an optimal mobile browsing experience? Do you see opportunities to refine your touch event handlers, making them more seamless or more engaging for your users? Dive into the fine details, and keep those grey cells coding!

Practical Application: From Touch Events to Gestures

Defining gestures through touch events is a key aspect for creating interactive and user-friendly mobile web applications. By leveraging the data provided by touch events, developers can design a broad range of intuitive user gestures.

JavaScript does not natively classify touch events as taps, double taps, swipes, or long presses. These interpretations are achieved using attributes and timing associated with the touch events.

Let's dive deeper into how we can utilize touch events data to interpret varying user gestures.

Analyzing Touch Start and End Position

A crucial step for creating gesture-based interactions involves monitoring the start and end points of a touch. This becomes particularly important while implementing swipe gestures within your application.

Consider the following code snippet, where element is the touch-enabled target node (the id is hypothetical):

const element = document.getElementById('swipe-area'); // hypothetical target element

let initialPoint;

function handleTouchStart(e) {
    // Storing the touch starting point
    initialPoint = e.touches[0].pageX;
}

function handleTouchEnd(e) {
    // Getting the touch ending point
    let finalPoint = e.changedTouches[0].pageX;
    // Calculating the difference between start and end points
    let difference = initialPoint - finalPoint;
    // If the touch moved more than 100 pixels, we consider it as a swipe
    if (Math.abs(difference) > 100) {
        if (difference > 0) {
            console.log('Swipe Left');
        } else {
            console.log('Swipe Right');
        }
    }
}

element.addEventListener('touchstart', handleTouchStart, false);
element.addEventListener('touchend', handleTouchEnd, false);

In this example, the touchstart event is used to store the initial touch point. During the touchend event, the ending touch point is obtained, and the distance traveled is computed. A movement exceeding a certain threshold (100 pixels, in this case) is interpreted as a swipe, with the direction determined accordingly.

Time-Based Touch Interactions

Another critical aspect in interpreting gestures is the duration of the touch. Distinguishing between a 'tap' and a 'long press' is possible by recording the time between touchstart and touchend.

Here's an illustrative example, again assuming element is the touch target:

const element = document.getElementById('tap-target'); // hypothetical target element

let touchStartTime;

function handleTouchStart(e) {
    // Remembering when the touch started
    touchStartTime = Date.now();
}

function handleTouchEnd(e) {
    // Calculating the duration of the touch
    let touchTimeLength = Date.now() - touchStartTime;
    // If touch lasted less than 200 milliseconds, it's a 'Short tap'; Else it's a 'Long press'
    if (touchTimeLength < 200) {
        console.log('Short tap');
    } else {
        console.log('Long press');
    }
}

element.addEventListener('touchstart', handleTouchStart, false);
element.addEventListener('touchend', handleTouchEnd, false);

In the above code, we note the instant the touch starts and ends. A 'short tap' is interpreted if the touch's duration is less than a certain threshold (200 milliseconds, in this case). A longer duration than this threshold is interpreted as a 'long press'.

Multi-Touch Gestures and TouchLists

Interactions involving multiple fingers, such as a pinch or a two-finger swipe, require several touch points to be tracked simultaneously. For this, the touches attribute of touch events, a TouchList, is quite useful.

Despite the power that multi-touch gesture detection and interpretation provides, implementing it can be computationally intense and complicated, requiring advanced logic considerations.
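
As a starting point, here is a minimal sketch of pinch detection, assuming element is a touch target as before: it tracks the distance between the first two touch points across touchmove events and compares it with the previous reading.

let previousDistance = null;

function distanceBetween(t1, t2) {
    return Math.hypot(t2.pageX - t1.pageX, t2.pageY - t1.pageY);
}

element.addEventListener('touchmove', (e) => {
    if (e.touches.length === 2) {
        const distance = distanceBetween(e.touches[0], e.touches[1]);
        if (previousDistance !== null) {
            console.log(distance > previousDistance ? 'Pinch out (zoom in)' : 'Pinch in (zoom out)');
        }
        previousDistance = distance;
    }
});

element.addEventListener('touchend', () => {
    previousDistance = null; // Reset once fingers start lifting
});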

Optimal touch event detection and gesture interpretation strategies are unique for each application and heavily dependent on your specific user expectations and needs.

Imagine the far-reaching impact of proficient gesture detection in shaping a smoother, more engaging user experience! How could effective application of multi-touch gestures take your application's usability to newer heights in today's competitive mobile market?

Using Touch Event Attributes for Gesture Interpretation

Different attributes from a TouchEvent object can come into play for more complex gestures. For example, the targetTouches list can be handy to track all touches within a specific target element, allowing for more localized gesture detection.

To illustrate this, consider an application that allows users to add annotations on an image with a two-finger tap. Here, the targetTouches list could be valuable to distinguish whether the two simultaneous touches were indeed on the image.
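
A minimal sketch of that idea, assuming a hypothetical image element with the id annotated-image (a production version would also check the touch duration before treating it as a tap):

const image = document.getElementById('annotated-image'); // hypothetical element

image.addEventListener('touchstart', (e) => {
    // targetTouches lists only the touches that started on the image itself
    if (e.targetTouches.length === 2) {
        console.log('Two-finger tap detected on the image');
        // Annotation logic would go here
    }
});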

Similarly, the changedTouches list provides details about the touches that are new or have changed from the previous touch event. This list can be instrumental in determining whether a new touch point was added or an existing one was moved or removed, enabling us to design nuanced multi-touch gestures.

By creatively using these attributes and their combinations with timing-related aspects, we can unlock a wide array of gesture recognition possibilities and elevate our application's interaction design.

Touch Events Interaction with Browsers

When discussing touch events in JavaScript, how browsers translate these events plays a critical role in their handling. When there is a single active touch point, browsers typically dispatch emulated mouse and click events in addition to the touch events. While this emulation works in many applications, it can introduce problems when the synthetic behavior conflicts with your event handling strategy.

A clear understanding of this concept is crucial, especially when considering multi-touch interactions. In case there are two or more active touch points, the browser will usually generate touch events alone, skipping over the emulated mouse events. Processing this distinction properly ensures your applications are well-prepared to manage multiple concurrent inputs, thereby enhancing user experiences.

The key instrument for managing this confluence of touch and mouse events is the preventDefault() method. When called in a touch event handler, it prevents the browser from triggering its default action, such as sending the emulated mouse events.

Consider the following use case:

function onTouch(evt) {
    evt.preventDefault(); // Suppress the emulated mouse events for this touch
    if (evt.touches.length > 1 || (evt.type === 'touchend' && evt.touches.length > 0)) {
        return; // More than one touch is still active; skip single-touch handling
    }
    // Single-touch event handling logic goes here.
}

In this example, invoking preventDefault() during touchstart or on the first touchmove in a series will prevent the subsequent corresponding mouse events from firing. But it's important to understand this might break expected behaviors, like link navigation, because calling preventDefault() on the touchstart will shut off mouse events entirely.

It's a common misconception that preventDefault() should be called preemptively on touchstart; doing so can stifle the behavior of links and other elements that rely on mouse events. An often better option is to call the method on touchmove or touchend instead, which preserves those habitual behaviors.

Illustrating the concept clearer, let's rectify the mistake in the following touch event handling example:

Incorrect use:

function onTouch(evt) {
    evt.preventDefault(); // Directly called on touchstart
    // Other event handling logic occurs here.
}

Corrected use:

function onTouch(evt) {
    if (evt.type === 'touchmove' || evt.type === 'touchend') {
        evt.preventDefault(); // Now, it's correctly called on touchmove or touchend
    }
    // Other event handling logic occurs here.
}

By moving the preventDefault() call to occur on 'touchmove' or 'touchend', general behaviors, such as link navigation, will function correctly.

It's also worth including a fallback mechanism for devices where touch events are not supported. A simple feature-detection check in JavaScript serves this purpose:

if ('ontouchstart' in window) {
    // This means the browser supports touch events.
} else {
    // This specifies the browser does not support touch events.
}

Finally, remember two best practices in this area: attach touch handlers directly to their specific target elements, and minimize the work done inside those handlers.

However, here's a thought-provoking question: how would you mitigate the scenario where preventDefault() does not consistently prevent the default behavior across browsers? One feasible strategy is feature detection per browser, handling touch events accordingly. The solution lies in structuring your code for such edge cases and managing multi-touch dynamics gracefully, so your application remains operational across all devices.

Handling Multiple Touch Points: A Tracker's Guide

The topic at hand is managing multi-touch events in JavaScript. The process follows four principal steps, each of which requires careful, nuanced understanding. Before we dive in, remember that performance is paramount when working with touch events: any sluggishness in response can significantly degrade the user experience. Usability-wise, the touch target should be large enough to accommodate a finger, to avoid misfires and undesired triggers of adjacent elements.

Our approach will involve an ongoingTouches array. The array is initially empty and attached to the window object as a property for global access. We'll add, update, and remove touches from this array as the user interacts with the screen.

Touch Start Phase: touchstart

In the touch start phase, we listen for the touchstart event and add all new touches to the ongoingTouches array. When a touch point is placed on the touch surface, the touchstart event is fired. In the handler for this event, we want to do two main things: add the new touch point to ongoingTouches and visually represent the touch point in our user interface.

Here's what the code might look like:

window.ongoingTouches = [];

function handleStart(evt) {
    evt.preventDefault();
    var touches = evt.changedTouches;

    for (var i = 0; i < touches.length; i++) {
        ongoingTouches.push(copyTouch(touches[i]));
    }
}

function copyTouch(touch) {
    return { identifier: touch.identifier, pageX: touch.pageX, pageY: touch.pageY };
}

In copyTouch(), we're extracting only the properties we need. This is a good practice as it avoids keeping unnecessary references and consuming memory.

Touch Move Phase: touchmove

The next phase is the touch move phase. During this phase, the touchmove event is fired whenever a touch point is moved along the touch surface. Here, we have the responsibility to both update our cached touch information and visually represent the movement of the touch point.

function handleMove(evt) {
    evt.preventDefault();
    var touches = evt.changedTouches;

    for (var i = 0; i < touches.length; i++) {
        var idx = ongoingTouchIndexById(touches[i].identifier);

        if(idx >= 0) {
            ongoingTouches.splice(idx, 1, copyTouch(touches[i])); // swap in the new touch record
        } else {
            console.log("can't figure out which touch to continue");
        }
    }
}

Notice that we only update the touch if it's already in the ongoingTouches array. This is crucial to keep track of the same touch across events.
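
Both this handler and the touchend and touchcancel handlers below rely on a helper, ongoingTouchIndexById(), which finds a cached touch by its identifier. It is not shown above, so here is a straightforward implementation:

function ongoingTouchIndexById(idToFind) {
    for (var i = 0; i < ongoingTouches.length; i++) {
        if (ongoingTouches[i].identifier === idToFind) {
            return i;
        }
    }
    return -1; // touch not found
}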

Touch End Phase: touchend

The touchend event is fired when a touch point is removed from the touch surface. In this phase, we want to remove the touch record from ongoingTouches and draw a line to represent the final movement of the touch in our user interface.

function handleEnd(evt) {
    evt.preventDefault();
    var touches = evt.changedTouches;

    for (var i = 0; i < touches.length; i++) {
        var idx = ongoingTouchIndexById(touches[i].identifier);

        if(idx >= 0) {
            ongoingTouches.splice(idx, 1);  // remove it; we're done
        } else {
            console.log("can't figure out which touch to end");
        }
    }
}

Touch Cancel Phase: touchcancel

Lastly, the touchcancel event is fired when the touch point has been disrupted in an implementation-specific manner such as creation of too many touch points. The primary task in handling touch cancellation is similar to the touch end phase – we'll remove the cancelled touch from ongoingTouches.

function handleCancel(evt) {
    evt.preventDefault();
    var touches = evt.changedTouches;

    for (var i = 0; i < touches.length; i++) {
        var idx = ongoingTouchIndexById(touches[i].identifier);

        if (idx >= 0) {
            ongoingTouches.splice(idx, 1);  // remove the cancelled touch; we're done
        }
    }
}

Forgetting to handle touchcancel events is one of the common errors developers make. It's easy to forget, since there's no visible difference to the user, but handling cancellations properly ensures consistent touch tracking under the surface.

Through this method, we maintain a tight grip on the various touch events that occur throughout the lifespan of a user's touch, effectively enabling us to manage multi-touch scenarios. There is plenty of room for customization in such an approach, tailored to the needs of the project at hand. However, always remember that there is a fine balance between customization and performance, and it is our responsibility as developers to find it.

Touch vs Pointer Events: Which One Triumphs?

As web development continues to evolve, the need for diverse input event handling becomes evident. Two prominent approaches in this regard are Touch Events and Pointer Events. Understanding the pros and cons of each provides a crucial insight into how they can be effectively integrated into modern web development workflows.

Touch Events were designed primarily for devices with touch screens. These events interpret finger or stylus activity on touch screens or trackpads, providing robust support for touch-based user interfaces. When a touch interaction begins, say from the moment a finger first contacts a surface, a series of touch events (touchstart, touchmove, touchend) are triggered. This allows for complex, application-specific multi-touch interactions. The main pro of using Touch Events is that they support multiple simultaneous inputs at different locations on the touch surface, making it favorable for touch-enabled devices.

Despite their utility, the dependence on touch input leads to limitations. Touch Events are modeled on physical touch, so they fire only for touch input; supporting mice, trackpads, or pens on non-touch hardware means maintaining a parallel set of mouse handlers. This is where Pointer Events shine.

On the other hand, Pointer Events were developed as a newer input model to reduce the application complexity caused by varied input types. They define an abstract pointer that unifies input from any device, be it a mouse, a pen/stylus, or a touchscreen. This unified event model simplifies application input, since pointer-handling code closely resembles mouse-handling code, and it ensures compatibility across input mechanisms. Pointer Events are also well supported across the major browser platforms, including Chrome, Firefox, Safari, and Edge.
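
To make the contrast concrete, here is a minimal sketch of a single pointerdown handler serving mouse, pen, and touch alike (the element id is hypothetical):

const surface = document.getElementById('surface'); // hypothetical element

surface.addEventListener('pointerdown', (e) => {
    // pointerType reports 'mouse', 'pen', or 'touch' when you need to distinguish
    console.log('Pointer down via ' + e.pointerType + ' at ' + e.clientX + ', ' + e.clientY);
});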

Although Pointer Events seem like the ideal solution, there are trade-offs. Each active pointer arrives as an independent event, without the grouped TouchList view that Touch Events provide, so coordinating complex multi-touch interactions can require more bookkeeping and may not feel as seamless as with Touch Events.

In conclusion, the choice between Touch and Pointer Events depends on the requirements of your web application. If you aim for simple, device-agnostic input handling and the need for complex touch gestures is minimal, Pointer Events would be a good choice. Conversely, if your application relies heavily on multi-touch activities, then Touch Events might be the preferred approach.

A good practice in real-world scenarios is to consider implementing fallbacks for both models. For Touch Events, when they are not supported, traditional click events can be used. For Pointer Events, consider handling regular mouse events as a fallback, should the need arise.
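
A sketch of such a layered fallback, assuming setup functions you would supply yourself:

if (window.PointerEvent) {
    // Pointer Events unify mouse, pen, and touch input
    setupPointerEventHandlers(); // hypothetical setup function
} else if ('ontouchstart' in window) {
    setupTouchEventHandlers();   // hypothetical setup function
} else {
    setupMouseEventHandlers();   // hypothetical setup function
}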

Finally, remember that even if a user agent supports Touch Events, that does not restrict users to touch alone. Many people use a touchscreen in conjunction with a trackpad, mouse, or keyboard. Developers should therefore avoid treating touch and mouse/keyboard as mutually exclusive; binding listeners only to touch events can make an application touch-exclusive and lock out users of other input types.

Do you feel the benefits of Pointer Events outweigh their potential complexities in your development environment? Or does the distinct advantage of Touch Events in supporting complex gestures make them irreplaceable? Picking a side in the Touch Events vs. Pointer Events debate is largely a question of understanding and catering to your user base, an insight that holds relevance far beyond this discussion.

Enhancing Touch Events: Best Practices and Optimizations

One of the foundational aspects of enhancing the efficiency and usability of touch-driven applications is minimizing the workload inside touch handlers. Performing a significant volume of computation within a handler can create performance bottlenecks that degrade the overall experience. The pattern below demonstrates a complementary optimization: registering handlers directly on the specific target element rather than on a broad ancestor:

var touchElement = document.getElementById('touch-element');

touchElement.addEventListener('touchstart', function(e) {
    // Touch start event code
}, false);

touchElement.addEventListener('touchend', function(e) {
    // Touch end event code
}, false);

touchElement.addEventListener('touchmove', function(e) {
    // Touch move event code
}, false);

touchElement.addEventListener('touchcancel', function(e) {
    // Touch cancel event code
}, false);

In this example, touch handlers are added onto a single, specific element, identified by the id touch-element. This focused approach minimizes memory usage and optimizes performance by avoiding the inclusion of redundant nodes in the touch handling pipeline.

Another aspect of paramount concern is the target element's size. In an environment dictated by finger-based interaction, the touch target must comfortably accommodate a user's finger. Apple's Human Interface Guidelines recommend a touch target of at least 44x44 points. For more comfort and fewer errors, a size of around 80x80 CSS pixels can be more accommodating.

<div id='touch-element' style='width: 80px; height: 80px;'>
    <!-- Content goes here -->
</div>

Incorporating a larger touch target ensures robust usability, minimizing the likelihood of accidental activations of nearby elements.

Despite the value of simulators and emulators, they remain second-best to testing on actual devices. In code, a simple feature-detection check lets us register either touch event handlers or mouse event handlers, providing a compatibility blanket across device classes:

if ('ontouchstart' in window) {
    // The browser supports touch events
    setupTouchEventHandlers();
} else {
    // The browser does not support touch events
    setupMouseEventHandlers();
}

One common mistake when working with touch interfaces is forgetting how touch handlers affect default browser behaviors such as scrolling and zooming. Ensure that the touch-based interactions in your web application do not fight these conventional behaviors; the sketch below shows two tools that help.
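
The CSS touch-action property declares which default gestures the browser should keep handling, and passive listeners promise the browser that a handler will never call preventDefault(), so scrolling is not blocked waiting on it. A brief sketch, reusing the touchElement from above:

// Let the browser keep handling vertical scrolling; we handle horizontal gestures
touchElement.style.touchAction = 'pan-y';

// A passive listener never calls preventDefault(), so the browser
// can start scrolling immediately without waiting for the handler
touchElement.addEventListener('touchmove', (e) => {
    // Read-only work here; calling e.preventDefault() would be ignored
}, { passive: true });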

Consider this: How can we develop touch interfaces that are mindful of the default browser behaviors and yet fluid in their operation? And more importantly, how do we spread this awareness and practice among the developer community to make the web more friendly for touch-screen users? Should browsers provide more explicit and developer-friendly ways to handle these considerations? Let's continue to explore, learn, and adapt as we navigate the rapidly evolving terrain of modern web development.

Summary

In this comprehensive guide on JavaScript touch events and mobile-specific considerations, the author emphasizes the increasing importance of understanding and optimizing touch events in modern web development. The article provides a deep dive into touch event interfaces, such as Touch, TouchEvent, and TouchList, and explains their significance in advancing user experiences on touch-enabled devices. Practical applications and best practices for handling touch events are also discussed, including code examples, common mistakes to avoid, and ways to leverage touch events for enhancing interactive capabilities.

The key takeaway from this article is that touch events are critical for creating intuitive and user-friendly mobile web applications. Developers need to have a deep understanding of touch event interfaces and how to optimize their implementation to ensure smooth and satisfying user experiences. The article challenges developers to think about how they can leverage the potential of touch events to further enhance the interactive capabilities of their web applications.

A challenging technical task related to touch events could be to create a multi-touch gesture recognition system. This could involve tracking the positions of multiple touch points, interpreting the gestures based on the touch event data, and implementing specific actions or animations in response to different gestures. This task would require a thorough understanding of touch event interfaces and a creative approach to gesture recognition algorithms. The goal would be to create a seamless and engaging user experience through touch-based interactions.
