For this post, let’s focus on section 3.5 of the requirements:

3.5 Your app must fully support touch input, and fully support keyboard and mouse input

Your app must provide visual feedback when users touch interactive elements.

Your app must not use an interaction gesture in a way that is different from how Windows uses the gesture. The Windows 8 touch language is described in Touch interaction design.

There are a number of things going on here that are worth thinking about more carefully.

The first sentence basically says that the app has to support multiple input modalities. To “fully support touch input” means that the app must be fully usable with only touch, which should be obvious given the increasing number of touch-only devices. This means that an app cannot have UI or features that are only accessible through a mouse or through a special keystroke. Usually this isn’t much of an issue, because using pointer events lets you handle touch and mouse with the same bit of code anyway.
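As a minimal sketch of that last point: with pointer events, one handler services touch, mouse, and pen alike. The function name and the shape of the handler here are illustrative, not from any particular app.

```javascript
// Hypothetical sketch: a single press handler for all input modalities.
// With pointer events, ev.pointerType is "touch", "mouse", or "pen",
// but the core logic below is identical for all three.
function onCanvasPress(ev) {
  return {
    x: ev.clientX,
    y: ev.clientY,
    type: ev.pointerType, // available if you do want to special-case
  };
}

// In a real app you'd register it once and cover every modality:
// element.addEventListener("pointerdown", onCanvasPress);
```

Because the handler never branches on the input device, a touch-only tablet and a mouse-and-keyboard desktop exercise exactly the same code path.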

The second half of that sentence is a little ambiguous to my mind. Depending on how you read it, it might say “the app must be fully usable with only the keyboard, as well as with only the mouse,” or it can be read as “the app must be fully usable with keyboard and mouse together.” From what I’ve seen, certification takes the latter interpretation.

At the same time, I strongly encourage developers to provide complete keyboard-only interfaces as well as mouse-only interfaces. In doing so, you make the app much more accessible for people with disabilities. That’s really the reason why the UI guidelines for desktop apps that have been around for many years suggest things like keyboard accelerators for menu commands that can also be invoked with a mouse.
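One simple way to honor that advice is to route keyboard accelerators to the very same command functions that buttons and menu items invoke, so no feature is mouse-only. This is a hedged sketch; the `commands` table and the particular shortcuts are made up for illustration.

```javascript
// Hypothetical sketch: keyboard accelerators and on-screen buttons
// share one command table, so every feature is reachable either way.
const commands = {
  save:   () => "saved",
  search: () => "search opened",
};

// A button's click handler would call commands.save() directly;
// this keydown handler maps accelerators to those same commands.
function onKeyDown(ev) {
  if (ev.ctrlKey && ev.key === "s") return commands.save();
  if (ev.ctrlKey && ev.key === "f") return commands.search();
  return null; // not an accelerator; let the event pass through
}
```

The design point is that the accelerator map is a thin layer over the commands, not a second implementation of them.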

In this context I often think of a classmate of mine within the Electrical/Computer Engineering department at the University of Washington when I was an undergraduate there. Sometime in our senior year, he severed his spinal cord in a skiing accident and became a quadriplegic. But he still completed his degree, often using a computer keyboard with a rod that he held in his mouth. To this day I admire his determination to carry on despite his permanent injury, and it motivates me to make a little extra effort to support full keyboard interaction rather than relying on the mouse for some of it.

Anyway, the last two sentences of section 3.5 above apply only to touch interaction, and are pretty clear. The visual feedback requirement is important because it lets the user know that the app has detected the touch. A piece of glass, in other words, doesn’t give the tactile response that a keyboard or mouse does, so the acknowledgement of input has to be visual. For this, the pointer down and pointer up animations in the animations library provide standard visual responses.
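In a WinJS app, `WinJS.UI.Animation.pointerDown(element)` and `WinJS.UI.Animation.pointerUp(element)` supply that standard press animation. As a toolkit-neutral sketch of the same idea, you can model the feedback as a “pressed” visual state that pointer events toggle; the function and class name below are illustrative, not part of any library.

```javascript
// Hypothetical sketch: track a "pressed" visual state per element so
// the UI can render immediate feedback on pointer down and clear it
// on pointer up (or cancel, e.g. when a gesture is taken over).
function pressedState(classes, eventType) {
  const set = new Set(classes);
  if (eventType === "pointerdown") {
    set.add("pressed");
  }
  if (eventType === "pointerup" || eventType === "pointercancel") {
    set.delete("pressed");
  }
  return [...set];
}

// In a real app, "pressed" would map to a CSS rule that scales or
// tints the element, mirroring what the library animations do.
```

Handling `pointercancel` matters here: without it, an element can get stuck looking pressed when the system claims the gesture (for a pan or an edge swipe, say).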

As for gestures, the point here is to avoid forcing users to retrain themselves for the sake of your app, which ultimately lowers their efficiency across the rest of the system. It’s a reminder that your app is not the only one running on the device, so be a good, cooperative player and avoid “innovating” in places like input where you’ll just cause confusion.

The last thing I’ll add regarding input is not part of Store certification, but also good to think about: sensors. As I wrote in my book, I like to think of sensors as another form of input, one that is a fresh area that’s open to all kinds of innovation. And because there’s nothing in Store certification related to how you use the inclinometer, the compass, gyros, etc., you can be very creative here!
