And now for something completely different, and just for fun.

My son and I have been enjoying Minecraft since last fall, working first with the Pocket Edition and then mostly with the PC version, which supports creating circuitry with the Redstone element. Between the two of us, he’s more of a builder and I’m the one who most enjoys mining.

In one of my worlds,* which I’m playing just by myself, I’ve been working a lot with enchantments, for which you need a variety of materials like Lapis Lazuli (necessary for enchanting) and the quite rare Diamonds (used to make the best armor, tools, and weapons). This has led me to do some systematic mining deep underground, where these materials are best found. Because I was specifically looking for diamonds, I concentrated on mining at levels 5-12 (see the Ore page for distribution statistics), where you’ll find lots of other good stuff too.

This kind of mining is best done systematically. Here’s how I went about it:

  1. Standing on level 11, I circumscribed a 50×50 square tunnel, 2 blocks high (you could do any dimensions), periodically dropping torches on the outer side of the walls for light.
  2. Level 11 is recommended because lava lakes are generated at this same level, so you’ll always come out on the edge of one rather than below one, where it’ll drop on you and cook you rather quickly. (Always carry a bucket of water in case you get set on fire; it’s also handy to dump on top of lava to make obsidian, which is an effective way to remove lava and mine obsidian at the same time.)
  3. I then mined tunnels in one direction all along the square, going every other block. This means that I end up with a bunch of parallel tunnels with a one-block wall in between. This takes advantage of the fact that ores nearly always come in groups of 3 or more blocks, so you don’t have to mine out every last bit to find them. You will occasionally miss a single block of something like diamond, but ores seldom generate that way.
  4. For the first tunnel near an edge, I don’t bother dropping torches but punch a hole in the wall about every 6 blocks to let light in from the adjacent tunnel. This is pretty sufficient for keeping monsters from generating, though they do show up every now and then. (But they’re in tunnels so very easy to handle.)
  5. I then light the next tunnel (the third) the same way I lit the first. In other words, odd tunnels get light, even tunnels get holes. This way I save on torches.
  6. When I encounter ore, I typically mine the whole thing out, but sometimes leave stuff on lower levels for the next set of tunnels.
  7. Once all the level 11 tunnels were done, I made a hole in each corner for ladders going down to level 8, which allows for a two-block-high tunnel with one block left over for the ceiling.
  8. I then circumscribed the same 50×50 edge tunnels and repeated the parallel tunneling. At level 8 though, you have to listen for lava sources. When you do, be careful as you mine in case lava flows out. However, if you start at level 11, you’ll know where most of these are. When I encounter a lava lake at level 11 I’ll dump water on it to turn it to obsidian and mine a bunch of it out. As lava is exposed at level 10, I’ll water it down too. This reduces the risk of getting cooked, and makes it far less likely to encounter flowing lava in the level 8 tunnels.
  9. Finally, I did the same thing again down to level 5, which is the lowest you can go without encountering bedrock obstacles.

I was quite pleased with everything I was able to extract, especially as I used a pickaxe with the Fortune III enchantment when mining lapis, diamond, coal, and redstone to increase the yield. What I found interesting, though, is that the return on mining at level 5 wasn’t nearly as good as at levels 8 and 11, as these numbers illustrate:

                Coal    Iron   Gold   Diamond   Redstone   Lapis
Levels 8-12      654     243     38        74      1,637      78
Levels 5-6       150      83      4        11        254       0
Totals           804     326     42        85      1,891      78
% Levels 8-12    81%     75%    90%       87%        87%    100%
% Levels 5-6     19%     25%    10%       13%        13%      0%


In short, the effort to mine at level 5 really wasn’t worth it. I’d have been better off extending the range of the tunnels at levels 8 and 11, or, if looking for Lapis, extending up to make tunnels at level 14.

In any case, I got quite a good haul out of this effort!

* For this world I used the seed “St Louis” because we’d just visited there. It’s a great challenge, as you spawn on a set of mesa islands with just a single tree to work with, and a little bit of grass from which to get seeds. But that was plenty, as I have a whole forest and farms going strong. There’s also an ocean monument nearby, and if you go straight east for a while (use a boat) from the southeast islands you’ll cross a couple more islands and then get to a desert village and a number of other villages nearby.

Update: with Eric’s comment, we’ve worked out how to make SQLite work properly with Xamarin without playing versioning games. The instructions can be found here, with thanks to Craig Dunn. The short of it is that you want to add the package from NuGet, then separately add a reference to the Microsoft Visual C++ 2013 Runtime.

We return you now to the original post…


The last few weeks I’ve been making significant revisions to a Xamarin project based on code review feedback. This project is part of a larger effort that we’ll be presenting in a couple MSDN Magazine articles starting in August.

One big chunk of work was cleaning up all my usage of async APIs, and properly structuring tasks+await to do background synchronization between the app’s backend and its local cache for offline use. The caching story will be a main section in Part 2 of the article, but the story of cleaning up the code is something I’ll write about here in a couple of posts.

The first bit of that story is my experience–or struggle–to find the right variant of SQLite to use in the app. As you might have experienced yourself, quite a few SQLite offerings show up when you do a search in Visual Studio’s NuGet package manager. In our project, I started with SQLite.Net-PCL, which looked pretty solid and is what one of Xamarin’s own samples used.

However, I ran into some difficulties (I don’t remember what, exactly) when I started trying to use the async SQLite APIs. On Xamarin’s recommendation I switched to the most “official” variant, sqlite-net-pcl, which also pulls in SQLitePCL.raw_basic 0.7.1. Keep this version number in mind, because it becomes important in a minute.
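For context, the kind of async usage involved looks something like the following. This is only a minimal sketch against sqlite-net-pcl’s SQLiteAsyncConnection; the TodoItem model class and LocalCache wrapper are hypothetical names for illustration, not the actual classes from our project:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using SQLite;   // sqlite-net-pcl namespace

// Hypothetical model class; sqlite-net maps it to a table.
public class TodoItem
{
    [PrimaryKey, AutoIncrement]
    public int Id { get; set; }
    public string Title { get; set; }
}

// Hypothetical wrapper around the async connection.
public class LocalCache
{
    private readonly SQLiteAsyncConnection _db;

    public LocalCache(string databasePath)
    {
        _db = new SQLiteAsyncConnection(databasePath);
    }

    // Creates the table if it doesn't already exist.
    public Task InitializeAsync()
    {
        return _db.CreateTableAsync<TodoItem>();
    }

    public Task<int> AddAsync(TodoItem item)
    {
        return _db.InsertAsync(item);
    }

    public Task<List<TodoItem>> GetAllAsync()
    {
        return _db.Table<TodoItem>().ToListAsync();
    }
}
```

These async calls run the SQLite work off the UI thread, which is exactly what you want for the background-synchronization scenario described above.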

This combination worked just fine for Android and iOS projects, but generated a warning for Windows Phone: Some NuGet packages were installed using a target framework different from the current target framework and may need to be reinstalled. Packages affected: SQLitePCL.raw_basic.

This is because SQLitePCL.raw_basic is marked for Windows Phone 8.0 but not 8.1, which is what the Windows Phone project in my solution was targeting.

OK, fine, so I went to the NuGet package manager, saw an update to version 0.8 of SQLitePCL.raw_basic, and installed that. No more warning but…damn…the app no longer ran on Windows Phone at all! Instead, the first attempt to access the database threw a System.TypeInitializationException, saying “The type initializer for ‘SQLitePCL.raw’ threw an exception.” The inner exception, System.IO.FileNotFoundException, had the message, “The specified module could not be found. (Exception from HRESULT: 0x8007007E).”

What’s confusing in this situation is that SQLitePCL.raw does not appear in the Windows Phone project’s references alongside sqlite-net, as it does in the Android and iOS projects. From what I can see, this is because the Windows Phone version of sqlite-net does some auto-gen of the raw DLL, or has pre-built versions in its own package, so a separate reference isn’t necessary. (If you know better, please comment.)

Still, those DLLs were right there in the package, and I couldn’t for the life of me figure out why they couldn’t be found, so I resorted to the tried-and-true method of reproducing the failure from scratch with a new project, where the default Windows Phone project targeted 8.0. I then added “SQLite-net PCL” to all the projects in the solution, which brought in the raw 0.7.1 dependency, tossed in a couple of API calls to create and access the database, and gave it an F5. Cool, everything worked.

Next, I retargeted the Windows Phone project to 8.1 and F5’d again. Everything still worked, but I got the warning about SQLitePCL.raw_basic once again. Apparently it’s OK to ignore that one.

I then updated SQLitePCL.raw_basic to version 0.8 and boom–got the exception again, so clearly there’s an incompatibility or bug in the 0.8 version with Windows Phone 8.1.

Clearly, then, the solution is to altogether avoid using the 0.8 version with a WP8.1 target, and if you want to suppress the warning, open packages.config in the Windows Phone project and have the SQLitePCL.raw_basic line read as follows:

<package id="SQLitePCL.raw_basic" version="0.7.1" targetFramework="wp80" requireReinstallation="False" />

I know it’s been a while since I posted much on my blog here. The coding project I was engaged in at Microsoft took up much of my time between January and March, and the focus on coding didn’t leave much time to write about said coding. Then //build came along, and I’ve been working on Visual Studio feature guides with our marketing team.

I also discovered, in the process of coming back to my blog and updating WordPress, that the MySQL database you can get through an Azure account (where I host this site) has a 20MB limit on ClearDB’s free tier. While updating WordPress, which does database updates as well, the MySQL file exceeded that limit, so the database got set to read-only. This, of course, meant that the database couldn’t be updated, which got me stuck in the WordPress database-update loop-of-death.

Having returned from //build last week (where I spent most of my time in the Visual Studio cross-platform development kiosk talking to developers about Cordova, Xamarin, the Universal Windows Platform, and cross-platform C++), I finally got this sorted out by deleting all the spam comments from the WordPress database to reduce the file size, after which the update worked and I can post again.

[Addendum: the spam comments continue to come in at a frightening pace! Fortunately, the Akismet plugin for WordPress does a good job of catching them, and will supposedly auto-delete them in 15 days. However, the sheer volume of spam (much of it 1-2 pages long), plus all the metadata Akismet saves for each comment, generates quite a few MB of garbage data in the database. So I’m having to monitor all this more closely. I hope soon to migrate the whole DB to an Azure database with a much larger quota.]

Speaking of cross-platform development, two other pieces I worked on recently are summary topics for Visual Studio Application Lifecycle Management (ALM) features as they apply to Cordova and Xamarin projects. You can find those topics on the MSDN Library:

In the meantime, our coding project team has been writing up our learnings for MSDN Magazine, so you’ll see those articles later this summer. I’m working on cleaning up my Xamarin client code, which will be the focus of Part 2 of the article.

A few folks have asked, by the way, whether I’ll be updating Programming Windows Store Apps with HTML, CSS, and JavaScript for Windows 10 and the Universal Windows Platform. Because I’m no longer with the Windows team, I won’t have working hours to focus on that project. What I’m looking at, however, is moving to something of an open-authoring model. I’m hoping first to split the Windows 8.1 book into two parts. The first would be WinJS-focused and separate from Windows, as WinJS is now its own open-source library. The second part would be Windows apps using JavaScript without the WinJS stuff.

Then I can put the files on GitHub and invite contributions, serving more in the role of an editor than a writer, though I’d probably still write some. Anyway, let me know what you think of the idea!

Similar to my previous post, but perhaps not quite as fun, is an account from Fighter: The True Story of the Battle of Britain, by Len Deighton, which I just finished.

Any comparison of the Merlin engine [as used in the RAF Spitfires and Hurricanes] and the Daimler-Benz DB 601A [as used in the Messerschmitt Bf 109s] must begin by mentioning the latter’s fuel-injection system. …

Fuel injection, which puts a measured amount of fuel into each cylinder according to temperature and engine speed, etc., was demonstrably superior to the carburetors that the Merlins used. Carburetors are, at best, subject to the changes of temperature that air combat inevitably brings. At worst they bring a risk of freezing or catching fire. And with such large, high-performance engines, the carburetor system seldom delivers exactly the same amount of fuel simultaneously to each cylinder. Worst of all, the carburetor was subject to the centrifugal effect, so that it starved, and missed a beat or two, as it went into a dive.

The RAF pilots learned how to half-roll before diving, so that fuel from the carburetor was thrown into the engine instead of out of it, but in battle this could be a dangerous time-wasting necessity.

Engineers–those on trains–in the 1800s got a good start in the art of hacking. This is from The Story of American Railroads, a thoroughly well-written and entertaining book by Stewart H. Holbrook written in the 1940s that provides many quotable passages:

Although neither the Santa Fe [railroad] or most of the other roads were in a hurry to adopt new inventions, the Santa Fe held in high esteem a gadget known as a Dutch clock. This device, perhaps the most unpopular one with railroad men of the day, was set up in the caboose and it noted and recorded on a tape the speed at which the train traveled. The rule was that freights should maintain a speed of eighteen miles an hour, no more, no less. The Dutch clock soon brought reprimands to all freight conductors who tried to make up time for the breakdowns of equipment that were forever happening.

After considerable discussion of the Dutch clock, the boys figured out a method of handling the menace. On the first sidetrack out of the terminal, the crew would uncouple the caboose, then uncouple the engine, bring it back to the rear on the main line, set it in behind the caboose, then use it to slam the caboose into the standing train at a speed of exactly 18 miles an hour. This, it had been discovered, so affected the Dutch clock’s insides that thereafter it continued to clock 18 miles an hour regardless of the speed developed. This fixing the Dutch clock was considered fine sport, and always left the train crew with a sense of immoderate satisfaction.

A reader of my recent MSDN Magazine article asked what I thought about performance in Cordova apps, and here's what I wrote in response.

Performance really depends on the underlying hardware, the individual platform, and the platform version, because it’s highly dependent upon the quality of the app host and hardware that’s running the code.

On Windows 8.1 and Windows Phone 8.1, you’re running a native app (no webviews), and because Microsoft has put tons of perf work into the IE engine on which the app host is built, JavaScript/Cordova apps run quite well. In the perf tests I did on Windows 8.1 with JavaScript apps (see chapter 18 of my free ebook), I found that the delta from JS to C# was 6-21%, JS to C++ was 25-46%, and C# to C++ was 13-22%. This was for CPU-intensive code (and a Release build outside the debugger, of course) and thus represents the most demanding situations. If you’re primarily working with UI code in the app where the system APIs are doing the bulk of the work, I’d expect the deltas to be smaller because the time spent in UI code is mostly time spent in optimized C++.

On Android, iOS, and Windows Phone 8, Cordova apps run JS inside a Webview, and thus are very much subject to Webview performance. From what I've heard–and I haven't done tests to this effect myself–performance can vary widely between platforms and platform versions. More recent versions of the platforms, e.g. iOS 7/8 and Android 4.2 or so, have apparently improved Webview performance quite a bit over previous versions. 

In short, if you're targeting the latest platforms, performance is decent and getting better, and Cordova should be suitable for many types of apps, though probably not for the most intensive ones.

It's important to note that "performance" is not really a matter of throughput: it's a matter of user experience. I like to say that performance is, ultimately, a UI issue, because even if you don't have the fastest execution engine in the world by some benchmark measure, you can still deliver a great experience. I think of all the video games I played as a kid on hardware that was far inferior to what's probably inside my washing machine today. And yet the developers delivered fabulous experiences.

That said, running JavaScript in a Webview on platforms like iOS isn't going to match a native iOS implementation, especially with signature animations that are really hard to match with CSS transitions and such. But if you're not needing that kind of experience, Cordova can deliver a great experience for users without you having to go native.

I made a diagram that tries to illustrate where Cordova falls relative to Xamarin, native, and mobile web. The vertical axis is “mobile app user experience,” which includes perf as well as the ability to provide a full-on native experience like I just mentioned.

The diagram is one way to look at the relative strengths of different cross-platform approaches. In the end, you of course have to decide what perf measures are important for your customers, do some tests, and see if Cordova will work for your project. And of course, pester the platform providers to improve Webview perf too! :)

Many of you have probably seen this already–an article that I wrote (with contributions from my teammate, Mike Jones) for MSDN Magazine.

I expect to be writing for MSDN more because my larger team at Microsoft owns the content calendar now!


I had another interesting struggle today playing around with a TFS build server and unit tests. Ostensibly this was to work through TFS and ALM matters with a Xamarin app, but this particular issue isn't specific to Xamarin.

I'd set up a team project on the TFS server, created and checked in the app, and then added a unit test project to the solution. On the local machine the tests ran fine.

In the build definition for the TFS server, I had the default setting to run tests in any assembly with "test" in the name, and the unit test project I'd added to the solution matched that criterion. I also set the build definition for Continuous Integration to build (and therefore test) on any check-in. I then checked in the unit test project along with a bug in the app code to fail the test, and a build was queued automatically as expected.

However, no tests were run. Hmmm. I checked the log of the build and didn't see my test assembly anywhere. I played around with settings in the build definitions and searched for answers for a while, to no avail.

So what was the problem? Turned out to be another really simple thing: I had neglected to check in the .sln file along with everything else. I'm not sure why that was, but that's what happened. As a result, the TFS build server had the test assembly source code, but because that project wasn't in the solution in its copy of the code, it didn't build it. Therefore it didn't find any test code to run, and thus the build succeeded because it wasn't blocked by a failed unit test.

Once I checked in the solution file, the TFS build included the test assembly and ran those tests as expected upon a checkin, failing the build with my intentional bug in the app code. Correcting that code and checking in again queued another build, which worked.


As an addendum, here are my raw notes on reproducing the problem I'd encountered, setting up the Xamarin project to work with a TFS build server, continuous integration with unit tests. I didn't bother to edit these; I'm only including them as a quick-and-dirty tutorial on a minimal setup to evaluate TFS and CI functionality, because it's not easy to find something short and simple along these lines.

I have a TFS server installed. Made sure that the build agent account can get to necessary resources (important for Xamarin and Android SDK install, but this isn't about that–see previous post).


  1. Running Visual Studio, Connect to team server and do Create Team Project…
  2. Map the team project to a local workspace (the folder that will sync)
  3. In Team Explorer – Home > Solutions > Workspace <my machine>, select New… and in the new project dialog, create a blank Xamarin app, MathTestB.
  4. Checked the code in and ran a local build to verify it works. Make sure the solution file is also checked in.
  5. Created a build definition using defaults. Team Explorer > Builds > New Build Definition, trigger build on CI, only change is sending builds to another file share.
  6. Queued a new build to test. Did build definition (right click) > View Controller Queue… to see results.
  7. Got the "Android SDK Directory could not be found" error; set /p:AndroidSdkDirectory=c:\android-sdk in the build definition under Process > Build > Advanced > MSBuild arguments. Also set the MSBuild platform to x86. Saved the definition and built again.
  8. Removed Windows Phone 8 project as Silverlight 4 SDK is not on the server. Checked in .sln file which queued a rebuild on the server. This built successfully.
  9. In MathTestB (PCL) > App.cs, add a method called AddOne to return a+1. This is what we'll unit test.
  10. Added a Unit Test project to the solution. Right click solution > Other Languages > Visual C# > Test > Unit Test Project; call it AdditionTests. Add a reference to MathTestB (required).
  11. In UnitTest1.cs, add


    1. using MathTestB;
    2. Change TestMethod1 to TestAddOne.
    3. Add code to test 124+1 = 125
  12. Rebuilt solution. Then did run tests and had success.
  13. Changed MathTestB.App.AddOne to be +2. Ran tests again, they failed.
  14. OK, so we're cool now. Let me now check in AdditionTests and the change to App.cs. We're cool, right? This queues some builds and should run the test, which should fail on the server.


    1. However, it works on the server. Examining the log, we find that no tests were run. Why is that? It's not because the build definition is wrong, it's because I neglected to check in the .sln file that includes AdditionTests. Thus the server isn't building the test assembly, and therefore the build definition isn't finding it.
  15. Checked in the solution file now. New build queued. And it fails as expected.
  16. Restore proper code to app.cs. Check in. Build succeeds as expected.
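For concreteness, the AddOne method and its MSTest unit test from steps 9-11 amount to something like the following sketch (the Xamarin template's generated App class has other members I'm omitting here):

```csharp
// MathTestB (PCL) > App.cs — the method under test (step 9).
namespace MathTestB
{
    public class App
    {
        public static int AddOne(int a)
        {
            return a + 1;   // changing this to a + 2 (step 13) makes the test fail
        }
    }
}

// AdditionTests > UnitTest1.cs — the MSTest unit test (steps 10-11).
using Microsoft.VisualStudio.TestTools.UnitTesting;
using MathTestB;

namespace AdditionTests
{
    [TestClass]
    public class UnitTest1
    {
        [TestMethod]
        public void TestAddOne()
        {
            Assert.AreEqual(125, App.AddOne(124));
        }
    }
}
```

Trivial as it is, this is all the CI setup needs: one deterministic test that the build server can fail or pass depending on the checked-in app code.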

I've been banging my head into my desk quite a bit with this one, and finally found the answer (and of course, it was after all of this that I found Xamarin's page on Configuring TFS!). I set up a TFS build server to test continuous integration features of Visual Studio and Team Foundation Server (TFS) with Xamarin projects. On the server I installed TFS 2013 Express along with Visual Studio Ultimate 2013 Update 4 and the Xamarin tools. I confirmed that I was able to build the Xamarin project on that machine directly. So far so good.

I then connected to the team project from another machine and set up a build definition to the server. I had a couple of issues there that took some wrangling that I'll come back to shortly. The biggest headache was that I got this error from the builds:

C:\Program Files (x86)\MSBuild\Xamarin\Android\Xamarin.Android.Common.targets (379): The Android SDK Directory could not be found. Please set via /p:AndroidSdkDirectory.

This turns out to be the right option if you're just building locally, but there's a little more to it. First, if you're building locally and get this error, make sure that you have the appropriate API level of the Android SDK installed, because it could be that MSBuild is finding the SDK folder but not the right version.

Second, you add this switch in the build definition under Process > Build > Advanced > MSBuild arguments. The text for that field appears like the following (using the SDK location from my setup):

/p:AndroidSdkDirectory=c:\android-sdk

Third, with TFS there's another catch: the directory must be accessible by the build agent account and not just your user account. This bit me because I'd installed Xamarin under my user account, which ended up putting the Android SDK and NDK underneath my c:\users\<username>\AppData and Documents folders, respectively, which were not accessible by the build agent. In other words, the TFS build failed because of an account issue, not because the SDK wasn't there.

The solution, then, was to move the SDK and NDK into folders off the c: root. In doing so, be sure to patch up the ADT_HOME, ANDROID_NDK_PATH, and PATH environment variables to point to those new locations. I verified that a local build worked on the server after this, and then was able to successfully run the TFS build and set up continuous integration (build on check-in).

It's also possible from the Team Foundation Server Administration Console > Build Configuration section to change the account for the service. This page will identify the account under which the service will run (NT AUTHORITY\NETWORK SERVICE by default). I could have set this to my own account, but that wouldn't be a workable solution in a real team environment unless you created a specific build account for this purpose.

Side Note: if you're wondering whether you can do Xamarin builds with Visual Studio Online's hosted build controller, the answer is presently no. According to the Hosted Build Controller page and the more detailed hosted build server software list, Xamarin is not included (nor is Cordova, for that matter). I imagine this will change at some point, so check those pages again to be sure. For the time being, you'll need an on-premise build server like the one I'm working with. Apparently you can also connect the on-premise server to VSO, but I haven't worked that out yet.


Back to the build definition. When I first set this up, I had a couple of issues:

  1. The default settings for the build definition give a warning in the Source Settings section that reads: "The build definition workspace mapping contains potential problems. This build wastes time and computer resources because your working folders include the team projects, which include a Drops folder. You should cloak the Drops folders." After several hours of "WTF?" reactions, and having no clue about the Drops folder and cloaking that answers on StackOverflow assumed, I figured it out: in Source Settings, just add another working folder with Source Control Folder set to $/<project>/Drops, and in the leftmost Status column select Cloaked (this was not at all discoverable!).
  2. Under Build Defaults, there are options for where to place the build drops. I suggest when you're first working with TFS builds to use the "Copy build output to the server" option for simplicity. In this case, builds will go into the folder you have set up in the TFS Admin Console > Build Configuration > <server> – Agent > Properties. By default this is set to $(SystemDrive)\Builds\$(BuildAgentId)\$(BuildDefinitionPath).
  3. If you want to place the builds on a different server/share (e.g. for distribution to testers who side-load the app), then make sure the build machine has full access/control for that share, otherwise you'll see "Access to the path is denied" errors. On the other server, right-click the shared folder, select Properties > Sharing > Advanced Sharing > Permissions. In that dialog, click Add…, and in that dialog click Object Types and then check Computers. This way you can enter your <domain>\<build machine name> and have it recognized. You have to do it by machine name because the NETWORK SERVICE account applies only locally. (See this question on ServerFault.)

There you have it–my learnings from the past few days. I hope this helps some of you avoid similar headaches!