When working with the Windows.Storage.AccessCache.StorageApplicationPermissions.FutureAccessList, it’s important to know that it has a 1,000-item limit. This has presented a challenge to developers working with libraries that contain many more files than that, when they try to save every file in the library to this list. The problem is easily solved, of course, by saving the parent StorageFolder in the list instead, which implicitly grants access to all the files it contains. Sometimes, however, you might want to save a handful of files directly for performance reasons: files that you know you’re going to want, so you don’t have to traverse the folders each time.
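
As a quick illustration, here’s a minimal sketch (in JavaScript) of saving a folder and retrieving it in a later session. The metadata string is arbitrary, and I’m assuming folder was obtained from something like a folder picker:

```javascript
var list = Windows.Storage.AccessCache.StorageApplicationPermissions.futureAccessList;

// Saving the parent folder grants access to everything it contains;
// persist the returned token (in app data, say) for later sessions.
var token = list.add(folder, "library root");

// In a later session, recover the folder from the token.
list.getFolderAsync(token).done(function (restoredFolder) {
    // Access to the folder and its files is re-established here.
});
```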

I’ll mention in this context that I’ve seen discussions about this sort of thing where developers talk about file paths and so forth. When working with files and folders in WinRT, always remember that the StorageFile and StorageFolder objects (and the IStorageItem interface they share) are abstractions for pathnames, and they should be what you use whenever you’d otherwise think in terms of pathnames. The key reason is that file-like entities can be backed by non-local providers, where the concept of a “pathname” doesn’t even exist. The StorageFile/Folder/Item abstractions let the provider worry about the mapping details. For the consuming app, then, the FutureAccessList is the equivalent of saving pathname strings in some other local storage. (The same goes for the most recently used list that exists alongside FutureAccessList.)

Equally interesting is the question of what happens if files are moved on the file system between when you save a StorageFile to the FutureAccessList and when you later retrieve it. Generally speaking, Windows does its best to track changes to the file system and update the FutureAccessList accordingly, so the bottom line is that you should not worry about it. Of course, if you attempt to open a file obtained from the FutureAccessList, it can fail for any number of reasons, including the file having been moved without the system being able to track it. You have to handle all such exceptions anyway, so an orphaned file is just one in the mix.
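
To make that concrete, here’s a sketch of defensive retrieval, where token is assumed to be a token you saved earlier:

```javascript
var list = Windows.Storage.AccessCache.StorageApplicationPermissions.futureAccessList;

list.getFileAsync(token)
    .then(function (file) {
        return Windows.Storage.FileIO.readTextAsync(file);
    })
    .done(function (contents) {
        // Use the file contents.
    }, function (error) {
        // The file may be orphaned, deleted, or otherwise inaccessible;
        // drop the stale entry and recover gracefully.
        if (list.containsItem(token)) {
            list.remove(token);
        }
    });
```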



Apart from how you can protect your app code (in this previous post), there are two other key resources on best practices for writing secure apps, meaning apps that are themselves hardened against hacking through the app’s UI (as opposed to hacks on the file system and OS):

Developing secure apps (in the documentation, for JavaScript apps): covers whether to trust the data you’re receiving, use of local/web contexts, script filtering, use of postMessage, use of HTTPS, cross-domain requests, and the sandbox attribute.

Similar and additional tips can be found in Security best practices for building Windows Store apps on the developer blog. This covers app capabilities, using pickers, authenticating users, and validating files and protocols.



Developers have long been concerned about the different ways that malicious people can try to hack into apps, extend trials, and otherwise rip them off. It’s something that the Windows team continues to work on, but if you haven’t seen the post below, it’s good reading for Microsoft’s official stance on the matter:

http://social.msdn.microsoft.com/Forums/windowsapps/en-US/8b3cf68d-897d-4a47-ace0-2c42355bf688/protecting-your-windows-store-app-from-unauthorized-use

Speaking of obfuscation, Preemptive Solutions, for one, updated their Dotfuscator for .NET (C# and VB apps) to support Windows Store apps late last year. See http://www.preemptive.com/products/dotfuscator/overview.



At first glance, and from reading various documentation (including the first edition of my book–I’m now working on the second), it can seem like the Web Authentication Broker in WinRT is meant only for a few key OAuth identity providers like Twitter and Facebook. That was at least my initial impression, but it’s mistaken.

The Web Authentication Broker is actually meant for any service that provides authentication. Its primary purpose is to avoid collecting credentials in client apps and then transmitting those credentials over HTTP requests. Ideally, if apps always protected those credentials with appropriate encryption, always used SSL (HTTPS) for transmission, and avoided storing any credentials on the client in plain text, then the web auth broker might not be necessary. But apps usually aren’t written that securely, so it’s best to authenticate directly with the service and have that service provide the app with an access token of some kind.

This is what the Web Authentication Broker is meant for. When you invoke it, providing your service’s authentication URI, an overlay appears above your app and displays the service’s page directly. This means that any and all information the user enters at this point goes securely to the service and never to the app. The service, as described in the guidance for web auth broker providers, can present whatever series of pages it needs for its own workflow, but at the end of it all it responds with a token that the client can use in subsequent calls.
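
In code, the call looks something like this. It’s only a sketch: the start and end URIs here are hypothetical placeholders for your own service’s sign-in and completion pages:

```javascript
var wab = Windows.Security.Authentication.Web;

// Hypothetical URIs: the page that starts your service's sign-in flow
// and the redirect URI that signals completion.
var startUri = new Windows.Foundation.Uri("https://www.example.com/auth/signin");
var endUri = new Windows.Foundation.Uri("https://www.example.com/auth/complete");

wab.WebAuthenticationBroker.authenticateAsync(
    wab.WebAuthenticationOptions.none, startUri, endUri)
    .done(function (result) {
        if (result.responseStatus === wab.WebAuthenticationStatus.success) {
            // result.responseData holds the final redirect URI; parse
            // your service's token out of it for subsequent calls.
        }
    }, function (error) {
        // Handle cancellation or connectivity errors.
    });
```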

So if your app in any way needs to collect credentials to authenticate with a service, consider using the web auth broker and create the necessary pages on your service to support it. With just a little work, you can craft those pages so they look great within the broker window and integrate nicely into your app experience.


When a Windows Store app written in JavaScript is installed on a machine, its source code is basically sitting there, ready to be loaded into the apphost process at runtime. This does cause some developer angst, as that same source code is basically sitting there, also ready to be examined by an enterprising user with admin privileges. For this reason, JavaScript developers start thinking about minification or obfuscation to make it at least harder (I won't ever say impossible) for others to borrow or outright pirate their code. It's also important in the gaming sector to prevent cheating (whether real or perceived). In this post, then, I wanted to note a few things you can do to protect your app.

First, check out the post on the Windows 8 Developer Blog called Designing a simple and secure app package – APPX. Most of the article is about protecting the consumer through digital signing and so forth, but near the end it talks a little about deployed app security. In the former case, the digital signature of a package applies to its entire contents, giving the system the ability to detect whether the package–your source code that's just sitting there on the file system–has been tampered with. Thus if a, shall we say, creative user decides to hack a game to improve their high scores,* the system will refuse to run the app at all.

Next, to protect the source code itself, you can certainly use whatever minification/obfuscation techniques you already know for JavaScript. This makes it harder for someone to read your source code, though reverse-minification tools do exist.

On the subject of minification, note that the reduced size of JS/CSS/HTML files matters very little for apps. For one thing, JavaScript is automatically converted to bytecode when the app is installed, as a startup optimization. Minification could save a little memory at runtime and make your package somewhat smaller, but the effect is usually insignificant compared to that of images. Indeed, a 3-5% change in compression for PNGs and JPEGs will probably save more space than minifying all your text files, without a noticeable impact on image quality. It's also important to choose the right image format: photographs typically compress better as JPEG than PNG, and GIFs work well for line art and for graphics with only a few colors and long runs of similar colors. Similarly, small adjustments to the bitrates of audio and video resources will reduce package size far more than minification.

Back to obfuscation: at this time, further protections for JavaScript code aren't available in the platform. Clearly the platform could do more here, but we'll have to wait and see.

In the meantime, what else can you do? One option is to keep code on a server and acquire it at runtime. This can be intercepted with network sniffers, of course, and one can always attach a debugger to the app at runtime. The Windows Store certification requirements also specifically disallow executing code in the local context that's obtained from a remote source (section 3.9). You can do this in a web context, however, and pass the results to the local context, as in the sketch below. Similarly, you can execute code on the server and return the results. With Store requirement 3.9, though, you have to avoid driving the app's interactions with the Windows Runtime from those results, lest you violate the policy.
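
Here's a rough sketch of that web-to-local handoff using postMessage. It follows the usual local/web context pattern; the iframe setup and the exact pages involved are assumptions for illustration:

```javascript
// In the local context (e.g. default.html hosts an iframe whose src
// is an ms-appx-web:/// page that loads and runs the remote script).
window.addEventListener("message", function (e) {
    // Validate the sender and treat the payload as untrusted data.
    if (e.origin === "ms-appx-web://" + document.location.host) {
        var results = JSON.parse(e.data);
        // Use results as data only; per Store requirement 3.9, don't
        // let them directly drive calls into the Windows Runtime.
    }
});

// In the web context page, after the remote code has produced results:
// window.parent.postMessage(JSON.stringify(results),
//     "ms-appx://" + document.location.host);
```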

Beyond this, you can move critical or sensitive code–that is, algorithms you want to hide–into Windows Runtime components. These can be written in C# (compiled to IL, which is harder to read than JavaScript but still reversible, and which introduces more overhead when called from JavaScript) or C++ (the best option, as reverse-engineering optimized assembly code is a strong, though not perfect, barrier). Private encoding/encryption algorithms are typical candidates for such components, but other important logic can benefit as well.
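
From JavaScript, consuming such a component is trivial, which is part of the appeal. The namespace and method below are hypothetical, standing in for whatever your component actually exports:

```javascript
// MyApp.Security.Codec is a hypothetical WinRT component written in
// C# or C++; its algorithm ships only as compiled code. sensitiveData
// is assumed to be whatever value you need to protect.
var codec = new MyApp.Security.Codec();
var encoded = codec.encode(sensitiveData);
```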

In the end, it all depends on the level of security you're trying to achieve, recognizing that when the code is on a client machine, there simply isn't a foolproof way to keep it safe from the most determined individual. (Even running everything on a server isn't 100% secure, as hacker groups repeatedly demonstrate.) So it's a matter, really, of erecting roadblocks along the way that will deter different levels of threat.

If you've found other methods that work too, I'd love to hear about them in the comments.


*If you remember the Space Cadet pinball game that first came out with Windows 95, I eventually tired of playing through all the missions and levels. With a little exploration of its massive .dat file, I reverse-engineered the data structures that determined how many lights and points you got for each mission. By bumping those up (dramatically) using Visual Studio's hex editor, it became much easier to reach the later, more interesting missions. In my case I just wanted to enjoy the game without the frustration of slogging through all the earlier missions each time, but of course my scores looked pretty good too!


A somewhat frequent question from developers is how to keep some piece of data secret within an app, such as a secret app key provided by a service.

If this is something that an app acquires at runtime, then you want to use the Credential Locker API to store that bit of information securely. The Credential Locker API is found in the Windows.Security.Credentials.PasswordVault namespace. Whatever is stored here is encrypted, of course, and can only be retrieved by the app that saved it in the first place. What’s also very cool about this feature is that the contents of the Credential Locker are roamed to the user’s other devices if the PC Settings > Sync your settings > Passwords option is turned on (and the PC is trusted). So if your app is installed on one device, obtains a secret key for a web service, stores it in the locker, and the user allows those credentials to roam, then that secret key will be available to your app when that same user installs it on another device. For this reason, you’d want to check for the existence of that secret key even on first run.
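
Here’s a minimal sketch of both sides of that exchange; the resource name and user name are placeholders:

```javascript
var vault = new Windows.Security.Credentials.PasswordVault();

// Store the secret (the variable secretValue is assumed to hold the
// key acquired at runtime); the system encrypts it on your behalf.
vault.add(new Windows.Security.Credentials.PasswordCredential(
    "myservice.example.com", "appKey", secretValue));

// Later (or on another device, if the credentials have roamed):
try {
    var credential = vault.retrieve("myservice.example.com", "appKey");
    credential.retrievePassword();   // populates credential.password
    var secret = credential.password;
} catch (e) {
    // Nothing stored yet: acquire the key (e.g. from your web service)
    // and add it to the vault for next time.
}
```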

If you acquire such a key during app development and include it directly in the app package, then there is the possibility that the key can be discovered and extracted. In this case, the best approach is to store that key in a web service that you query on first run, after which you can save it in the credential locker and retrieve it from there during subsequent sessions.

In other words, assume that everything in your app package is public information and take appropriate steps.