I just finished publishing a body of content on unit testing for JavaScript in the context of Apache Cordova, including both command-line and Visual Studio interfaces. I had a lot of fun learning about the subject and finding ways to communicate a number of concepts. I also found a direct example of a slight difference between JS runtimes that can bite you, but I'll leave that for the articles themselves.

You can find it all on http://taco.visualstudio.com/, the docs site for the Visual Studio Tools for Apache Cordova, under the "Test" node. Here are the individual topics:

There are two other topics in that node that I'll be revising and/or integrating into the stuff above: Test Apache Cordova apps with Karma and Jasmine and Test Apache Cordova apps with Chutzpah.

I'd love to know what you think, as this material is easily the basis for a video course with Microsoft Virtual Academy as well.

In January I'll start diving into UI testing for mobile, which should be fun!


I had another interesting struggle today playing around with a TFS build server and unit tests. Ostensibly this was to work through TFS and ALM matters with a Xamarin app, but this particular issue isn't specific to Xamarin.

I'd set up a team project on the TFS server, created and checked in the app, and then added a unit test project to the solution. On the local machine the tests ran fine.

In the build definition for the TFS server, I had the default setting to run tests in any assembly with "test" in the name, and the Unit Test project I added to the solution fit that criterion. I also set the build definition's trigger to Continuous Integration, so it would build (and therefore test) on any checkin. I then checked in the unit test project along with a bug in the app code to fail the test, and a build was queued automatically as expected.

However, no tests were run. Hmmm. I checked the log of the build and didn't see my test assembly anywhere. I played around with settings in the build definitions and searched for answers for a while, to no avail.

So what was the problem? Turned out to be another really simple thing: I had neglected to check in the .sln file along with everything else. I'm not sure why that was, but that's what happened. As a result, the TFS build server had the test assembly source code, but because that project wasn't in the solution in its copy of the code, it didn't build it. Therefore it didn't find any test code to run, and thus the build succeeded because it wasn't blocked by a failed unit test.

Once I checked in the solution file, the TFS build included the test assembly and ran those tests as expected upon a checkin, failing the build with my intentional bug in the app code. Correcting that code and checking in again queued another build, which worked.

----

As an addendum, here are my raw notes on reproducing the problem I'd encountered while setting up the Xamarin project to work with a TFS build server and continuous integration with unit tests. I didn't bother to edit these; I'm only including them as a quick-and-dirty tutorial on a minimal setup to evaluate TFS and CI functionality, because it's not easy to find something short and simple along these lines.

I have a TFS server installed and made sure that the build agent account can get to necessary resources (important for the Xamarin and Android SDK installs, but this isn't about that; see my previous post).

 

  1. Running Visual Studio, Connect to team server and do Create Team Project…
  2. Map the team project to a local workspace (the folder that will sync)
  3. In Team Explorer – Home > Solutions > Workspace <my machine>, select New… and in the new project dialog, create a blank Xamarin app, MathTestB.
  4. Checked the code in and ran a local build to verify it works. Make sure the solution file is also checked in.
  5. Created a build definition using defaults. Team Explorer > Builds > New Build Definition, trigger build on CI, only change is sending builds to another file share.
  6. Queued a new build to test. Did build definition (right click) > View Controller Queue… to see results.
  7. Got an "Android SDK Directory could not be found" error, so I set /p:AndroidSdkDirectory=c:\android-sdk in the build definition under Process > Build > Advanced > MSBuild Arguments. Also set the MSBuild platform to x86. Saved the definition and built again.
  8. Removed Windows Phone 8 project as Silverlight 4 SDK is not on the server. Checked in .sln file which queued a rebuild on the server. This built successfully.
  9. In MathTestB (PCL) > App.cs, add a method called AddOne to return a+1. This is what we'll unit test.
  10. Added a Unit Test project to the solution: right-click the solution > Add > New Project > Other Languages > Visual C# > Test > Unit Test Project; call it AdditionTests. Add a reference to MathTestB (required).
  11. In UnitTest1.cs, add:
    1. using MathTestB;
    2. Change TestMethod1 to TestAddOne.
    3. Add code to test that AddOne(124) returns 125.
  12. Rebuilt the solution, then ran the tests; they passed.
  13. Changed MathTestB.App.AddOne to be +2. Ran tests again, they failed.
  14. OK, so we're cool now. Let me now check in AdditionTests and the change to App.cs. We're cool, right? This queues a build and should run the test, which should fail on the server.
    1. However, the build succeeds on the server. Examining the log, we find that no tests were run. Why is that? It's not because the build definition is wrong; it's because I neglected to check in the .sln file that includes AdditionTests. Thus the server isn't building the test assembly, and therefore the build definition isn't finding it.
  15. Checked in the solution file now. New build queued. And it fails as expected.
  16. Restore proper code to App.cs. Check in. Build succeeds as expected.

When I was writing the last chapter of Programming Windows Store Apps with HTML, CSS, and JavaScript, 2nd Edition, I noticed that when you use the CurrentAppSimulator object for testing the Store APIs, the default XML includes a bit for consumable in-app purchases:

<ConsumableInformation>
  <Product ProductId="2" TransactionId="00000000-0000-0000-0000-000000000000"
      Status="Active" />
</ConsumableInformation>

Unfortunately, it wasn't documented, so I had to work it out with the program manager who owned it. Put simply, the ConsumableInformation element is used to provision a default in-app offer, similar to how durable offers are included in the LicenseInformation node.

The TransactionId attribute is required and must match the value used when calling CurrentAppSimulator.ReportConsumableFulfillmentAsync(productId, transactionId).

Status is also required and can be Active, PurchaseRevoked, PurchasePending, or ServerError. This allows a developer to emulate all the possible responses to the ReportConsumableFulfillmentAsync call.

There is an optional OfferId attribute on the <Product> element, which can be used to set the same value that would normally be set at the time of purchase through the large catalog API: CurrentAppSimulator.RequestProductPurchaseAsync(productId, offerId, displayProperties).
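
To see how these attributes line up with the API from JavaScript, here's a minimal sketch; the product id "2" and the all-zeros transaction id come straight from the default XML above, and the rest is just illustrative:

var store = Windows.ApplicationModel.Store;

// Values taken from the default simulator XML shown above.
var productId = "2";
var transactionId = "00000000-0000-0000-0000-000000000000";

store.CurrentAppSimulator.reportConsumableFulfillmentAsync(productId, transactionId)
    .done(function (result) {
        // Status="Active" in the XML produces FulfillmentResult.succeeded;
        // the other Status values map to the corresponding error results.
        if (result === store.FulfillmentResult.succeeded) {
            // Grant the consumable to the user here.
        }
    });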

Fortunately, we were able to get all this into the CurrentAppSimulator documentation, which you can find here: http://msdn.microsoft.com/en-us/library/windows/apps/windows.applicationmodel.store.currentappsimulator.aspx

 


As the WinRT API is laced with async methods, having a robust approach to testing async behavior is very helpful for producing a robust app. Here are a few key suggestions.

First, test re-entering a page multiple times when that page starts async operations. That is, navigate to a page that starts those operations, navigate away, then navigate forward again. Repeat a few times. Ideally, the code cancels any unfinished async operation when you leave the page. Otherwise this test can produce redundant operations that could collide.

More generally, the suggestion above means planning and testing for cancelation. As part of this, look at behaviors in your UI that depend on async operations either starting or completing, such as enabling/disabling controls.
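
As a sketch of what cancelation can look like in a WinJS page control (the page URI and data URL here are placeholders), hold on to the promise for the operation and cancel it in the page's unload method:

WinJS.UI.Pages.define("/pages/home/home.html", {
    ready: function (element, options) {
        // Keep the promise around so unload can cancel it.
        this._dataPromise = WinJS.xhr({ url: "http://www.example.com/data" })
            .then(function (result) {
                // Process and display the results here.
            });
    },

    unload: function () {
        // Canceling any in-flight operation prevents redundant requests
        // from piling up as the user navigates in and out repeatedly.
        if (this._dataPromise) {
            this._dataPromise.cancel();
            this._dataPromise = null;
        }
    }
});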

You can also insert random delays in your code for async calls. In JavaScript this means creating a promise with WinJS.Promise.timeout(<delay>) and adding it to your chain. In C#/VB, use await Task.Delay inside an #if DEBUG block.

In JavaScript, because we don't have #if DEBUG and because a new promise inserted into a chain needs to deliver meaningful results from the previous step in the chain, a good approach is to create a function whose return value is a function that can serve as a completed handler in the chain. For example, assume you have a chain like this:

operation1().then(function (result1) {
    return operation2(result1);
}).then(function (result2) {
    return operation3(result2);
}).done();

You can create a function like this that returns a delaying completed handler, where the debugMode flag would be a global variable that determines whether to create a delay promise with the previous result in the chain, or just deliver it through WinJS.Promise.as, which adds very minimal overhead:

function insertDelay(delay) {
    return function (result) {
        if (debugMode) {
            return WinJS.Promise.timeout(delay).then(function () { return result; });
        } else {
            return WinJS.Promise.as(result);
        }
    };
}

Inserting into the chain would be like this:

operation1().then(function (result1) {
    return operation2(result1);
}).then(insertDelay(1000))
.then(function (result2) {
    return operation3(result2);
}).done();

 


When testing your app, keep a keen eye out for how and when progress indicators show up, or how they don't, for that matter. Generally speaking, progress indicators show that some long-running process is happening or that the app is waiting for some data or other results to be delivered. The latter is probably the most common, because long-running processes like image manipulation are under the app's control and are just a matter of crunching the pixels. Network latency, on the other hand, whether from connection timeouts or slow transfer speeds, isn't something the app can do much about other than wait.

To simulate a slow connection, you can, of course, create additional traffic on your connection while you're testing the app, e.g. playing videos, running big downloads (and uploads), and so forth. The faster your connection, the more you'll have to load it up. You might also be able to change settings in your modem or router to make everything run more slowly. For example, if you have a wireless-N router, change it to run only at wireless-A speeds, then load it up with extra work.

You can also add some debugging code to simulate latency. In JavaScript, for example, wrap your network calls with a setTimeout of 1-2 seconds, or longer, depending on what you want to simulate. You can wrap the initial call itself and also wrap delivery of the results. You could create a small wrapper that gets the results quickly into a separate buffer, but then delivers them as real results within a series of setTimeout calls. Having such a setup that you can control with a debug flag makes it easy to switch the simulated latency on and off.
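
For example, here's one way to wrap WinJS.xhr so that delivery of results is delayed by a configurable amount (delayedXhr and debugLatency are my own names here, not part of WinJS):

var debugLatency = 1500;    // ms of simulated latency; set to 0 to disable

function delayedXhr(options) {
    return WinJS.xhr(options).then(function (result) {
        if (!debugLatency) {
            return result;
        }

        // Hold the completed result until the timeout fires.
        return new WinJS.Promise(function (complete) {
            setTimeout(function () { complete(result); }, debugLatency);
        });
    });
}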

Whatever the case, slowing down the network speed will help show where and how progress indicators are being used. You’ll want to especially look for places where your app just appears to sit without any visual indication of the work it’s doing. This reveals that your code is making assumptions about how quickly data is coming back and that you’ll need to put up a progress indicator if the user waits for more than 1-2 seconds (the recommended timeout for showing an indicator).

On a slow connection, too, you can better evaluate whether you’re overusing progress indicators. That is, while it’s nice that we have controls to show work happening, they are something that users will get tired of looking at over and over. So think about how you might architect the app to wholly eliminate the need for those indicators. For example, if you are switching between a page with a list of items and a details page for one item, and there’s a delay in that switch, it’s a good place to simply switch the visibility of two pages that are fully constructed, rather than doing a real navigation that would tear down and rebuild the pages each time. The user won’t be able to tell the difference, except that everything is running much faster. Indeed, building a page with a list control of hundreds or thousands of items is a very expensive process, so avoiding that work is a good thing, especially when the list page itself doesn’t change in the process of navigating in and out of details.

On the flip side, also test your app on the fastest connection you can find and see if there are any places where progress indicators show up unnecessarily. This would also reveal places where you’re assuming that the data will take a long time to obtain, and where a progress indicator might just flash on the screen momentarily and create visual noise. In those cases you’ll want to make sure you again use a timeout before the indicator appears.
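
A simple way to implement that timeout in JavaScript is to start a timer when you kick off the operation and show the indicator only if the timer fires first (the progressRing element and getDataAsync are placeholders here):

var progress = document.getElementById("progressRing");

// Show the indicator only if we're still waiting after two seconds.
var showTimer = setTimeout(function () {
    progress.style.display = "block";
}, 2000);

getDataAsync().done(function (result) {    // getDataAsync returns a promise
    clearTimeout(showTimer);
    progress.style.display = "none";
    // Render the results here.
});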

 

 


Before submitting your app to the Store, take some time to review your app's manifest and the information and files it references. Concentrate mostly on the Application UI tab in Visual Studio's manifest editor. If you haven't noticed or installed recent Visual Studio updates, the VS team improved the editor by bringing all of the app's UI elements into the Application UI tab. Earlier, bits of UI like the Store logo were on the Packaging tab, which made them easy to miss. Now it's all together, plus it shows you every scaled version of every graphical asset (there's so much to scroll now that you can't get it into one screen shot):

[Screenshots: the manifest editor's Application UI tab]

Here’s what to pay special attention to:

  1. Make sure that all the logos (logo, wide logo, small logo, and store logo) are representative of your app. Avoid shipping an app with any of the default graphics from the VS templates. Note that the Store logo never shows up in the running app, so you won't notice it at runtime.
  2. Double-check how you’re using the Show Name option along with Short Name and/or Display Name. Go to the Start screen and switch your app tile between square and wide, and see if the tile appears like you want it to. In the graphics above, notice how the app’s name is included on the tile images already, so having Show Name set to “All logos” will make the tile look silly (see below). So I’d want to make sure I change that setting, in this case, to “No logos.” However, if my square tile, perhaps, did not have the app name in text, then I’d want to set it to “Standard logo only.”
    [Screenshot: a tile with redundant name text shown over the logo]
  3. If you set Short Name, its text will be used on the tile instead of Display Name.
  4. Be aware that if you're using live tiles, the XML update for a tile can specify whether the display/short name should be shown via the branding attribute (see the sketch after this list). See my post on the Windows 8 Developer Blog, Alive with Activity Part 1, for details.
  5. If you don’t have specific scaled assets, reevaluate your choices here. Remember that if you don’t provide a specific version for each given pixel density, Windows will take one of the others and stretch or shrink it as needed, meaning that the app might not looks its best.
  6. Examine the relationship between the small logo, your specified tile background color, and any live tile updates and toast notifications you might use. Live tiles and toasts can specify whether to show the small logo, and the tile background color is used in both instances. If you have a mismatch between the small logo edge colors and the background color, you’ll see an obvious edge in tiles and toasts.
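
On point 4, here's a quick sketch of setting that branding attribute in a tile update from JavaScript (the template and text are arbitrary):

var notifications = Windows.UI.Notifications;

// Build a simple tile update and suppress the name/logo branding on it.
var tileXml = notifications.TileUpdateManager.getTemplateContent(
    notifications.TileTemplateType.tileSquareText01);

tileXml.getElementsByTagName("binding")[0].setAttribute("branding", "none");
tileXml.getElementsByTagName("text")[0].appendChild(
    tileXml.createTextNode("Today's headline"));

notifications.TileUpdateManager.createTileUpdaterForApplication()
    .update(new notifications.TileNotification(tileXml));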

 


The Windows Store Certification requirements (section 4.5) stipulate that apps doing data transfers must avoid doing large transfers over metered connections (you know, the ones that charge you outrageous amounts of money if you exceed a typically-small transfer limit!). At the same time, getting a device set up on such a network can be time consuming and costly (especially if you exceed the limit).

There is certainly room here for third-party tools to provide a comprehensive solution. At present, within Windows itself, the trick is to use a Wi-Fi connection (not an Ethernet cable), then open the Settings charm, tap your network connection near the bottom (see below left, the one that says 'Nuthatch'), and in the Networks pane that appears (below right), right-click the wireless connection and select Set As Metered Connection.

[Screenshots: the network connection in the Settings charm and the Set As Metered Connection menu]

Although this option doesn't set up data usage properties in a network connection profile or other things a real metered connection might provide, it will return a networkCostType of fixed, which allows you to see how your app responds. You can also use the Show estimated data usage menu item (shown above) to watch how much traffic your app generates during its normal operation, and you can reset the counter so that you can take some accurate readings:

[Screenshot: estimated data usage for the Wi-Fi connection]
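
In code, checking for that fixed cost type looks something like this (isMetered is just an illustrative helper name):

var connectivity = Windows.Networking.Connectivity;

function isMetered() {
    var profile = connectivity.NetworkInformation.getInternetConnectionProfile();
    if (!profile) {
        return false;    // No connection at all.
    }

    var costType = profile.getConnectionCost().networkCostType;
    return costType === connectivity.NetworkCostType.fixed ||
        costType === connectivity.NetworkCostType.variable;
}

// Before starting a large transfer:
if (isMetered()) {
    // Defer the transfer or ask the user for permission.
}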

It’s also worth noting that Task Manager has a column in its detailed view, on the App history tab, for metered network usage, where you can track your app’s activities.

[Screenshot: Task Manager's App history tab]

It’s good to know this as a user, too, as you can see if there are any misbehaving apps you might want to reconfigure or uninstall.

 


When using contracts that broker interaction between two apps, it's natural to wonder whether your app will really work with all other arbitrary apps. That's the intent of contracts in the first place, but it's good to put apps in the Store only when you're reasonably confident that they'll work according to that intent.

With sharing data through the Share contract, in particular, it really boils down to finding target apps that can consume the data formats you provide. If you're using only known formats (HTML, text, bitmap, files), then the Share target sample app in the Windows SDK provides a good generic target. When you invoke it through Share, it basically shows you all the data it finds in the shared package, which is very helpful for verifying that the package contains the data you think it should.

You’ll then likely want to test with some likely common targets. The built-in Mail app is a good candidate, as is the Twitter app that you can find in the Windows Store.

For custom formats, you probably don’t share such data without having some target app in mind, so you’ll know who to test with.

Beyond that, the APIs for sharing are designed so that you don't need to worry too much about more extensive testing. The only way standard-format data gets into a shared data package is through method calls like DataPackage.setBitmap and DataPackage.setUri. That is, rather than having the source app just drop a string or some other object into the package directly, these method calls can validate the data before making it available. This guarantees that the target app will have good data to consume, meaning that the target app can just examine the available formats and use them as it will.
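
For example, a share source using the typed setters looks like this in JavaScript (the strings and URI are placeholders). If you hand setUri something that isn't a valid URI, the failure surfaces in your app rather than as bad data in the target:

var dtm = Windows.ApplicationModel.DataTransfer.DataTransferManager.getForCurrentView();

dtm.addEventListener("datarequested", function (e) {
    var data = e.request.data;
    data.properties.title = "Example share";
    data.properties.description = "Some text and a link";

    // The typed setters validate their input before it enters the package.
    data.setText("Hello from the source app");
    data.setUri(new Windows.Foundation.Uri("http://www.example.com/"));
});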


It should go without saying, but it's easy to forget. When testing your app, be sure to exercise various combinations of:

  • View states: fullscreen landscape, filled, snapped, and fullscreen portrait.
  • Screen sizes: from 1366×768 (or smaller, if you have such hardware), to high resolutions like 2560×1440
  • Resolution scaling: 100%, 140%, and 180%

You can test these things on actual hardware, of course, which is why Microsoft set up the app labs as I mentioned before.

Otherwise, get to know the features of the Visual Studio simulator as well as the Device options in Blend.

In the simulator, the two rotation buttons on the right side will help test landscape/portrait, and the resolution scaling control lets you choose different screen sizes at 100% and a few that go to 140% and 180%:

[Screenshot: the Visual Studio simulator's rotation and resolution controls]

In Blend, the Device tab (next to Live DOM) gives you options to show the app in all the view states (the buttons next to View) as well as different screen sizes and scalings (these are the same options that the simulator provides):

[Screenshot: Blend's Device tab with view state and resolution options]

 

Make it a habit, then, to regularly run through these different combinations to find layout problems with configurations that some of your customers will certainly have.


Process lifetime events, like suspend, resume, and being restarted after being terminated by the system from the suspended state, are very important to test with your app. This is because when you’re running in a debugger, these events typically don’t happen.

Fortunately, Visual Studio provides a menu at debug time to trigger these different events:

[Screenshot: process lifetime controls on the Visual Studio debug toolbar]

Selecting either of the first two will fire the appropriate events, allowing you to set breakpoints in your handlers to debug them.

The third command, Suspend and shutdown, simulates the case where the app is suspended (as if the user switched to another app), and then the system drops the app from memory due to the needs of other apps.

In the debugger, you'll see your suspend events fire, then the app exits and the debugger stops. More important, when you start the debugger again, you'll see the previousExecutionState flag in the activated handler set to terminated. This gives you an easy way to step through the startup code path where you'll be rehydrating the app from whatever session state you saved during suspend.
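
In JavaScript, that check in the activated handler looks like this (with WinJS.Application.sessionState being one place to keep the saved state):

var activation = Windows.ApplicationModel.Activation;

WinJS.Application.addEventListener("activated", function (args) {
    if (args.detail.kind === activation.ActivationKind.launch) {
        if (args.detail.previousExecutionState ===
                activation.ApplicationExecutionState.terminated) {
            // The app was terminated from the suspended state: rehydrate
            // from whatever you saved during suspend.
        } else {
            // Fresh launch: initialize defaults.
        }
    }
});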

After testing in the debugger, it's a good idea to test the app under real conditions as well. Suspend and resume are easy enough: just switch to other apps. Be sure to leave the app suspended for different lengths of time if you have any kind of timestamp checking in your resuming handler, e.g. code that determines how long it's been since suspend so that it can refresh itself from online content. If you can, try to leave the app suspended for a day or longer, even a week, to really exercise resuming.
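
A sketch of that kind of timestamp check (the one-hour threshold is arbitrary):

var suspendTime = 0;

// checkpoint fires on suspend; record when it happened.
WinJS.Application.oncheckpoint = function () {
    suspendTime = Date.now();
};

// The resuming event comes from WinRT rather than WinJS.
Windows.UI.WebUI.WebUIApplication.addEventListener("resuming", function () {
    if (suspendTime && Date.now() - suspendTime > 60 * 60 * 1000) {
        // Suspended for over an hour: refresh from online content here.
    }
});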

To test terminate/restart in a real-world scenario, you'll basically need to run a bunch of other memory-hungry apps to force Windows to dump suspended apps out of memory (large games are good for this). You'll make it easier on yourself if you have less memory in the machine to begin with; in fact, to test this condition you could shut down your machine and pull some physical memory (or use an older, less powerful machine for this purpose).

Alternatively, you can use a tool that will purposely eat memory, thus triggering terminations. A good list of tools for this purpose can be found at http://beefchunk.com/documentation/sys-programming/malloc/MallocDebug.html.