
Creating & managing dev, staging and live versions of your Xamarin app


When creating apps of moderate complexity in a team of several members – developers, designers, QA etc. – it quickly becomes apparent that having a single version of an app, with a single version of each dependent service, doesn’t really work.

Today I’m going to share how we’ve dealt with this at JustGiving by creating 3 separate versions of our app. I’ll also show you how we manage this through source control and our continuous deployment system to be able to give everybody in the team ready access to these different versions, each with their own level of expectations around them.



There are two distinct parts to provisioning an app for various environments: 1) compiled configuration settings and 2) managing the iOS info.plist file. Managing compiled configuration settings will be the more familiar solution to regular .NET developers as this will only involve a simple combination of dependency injection and compiler flags. Managing the info.plist is a little more tricky as we’re going to use conventions of the iOS compilation process itself.

Compiled configuration settings

Any moderately complex app will inevitably connect to a variety of third party services for functions like analytics & crash reporting, push notifications, social sharing etc. Most if not all of these providers will have at least Staging and Live equivalents explicitly, or you can create your own (for example having multiple Facebook Apps for staging/live for Login With Facebook functionality). The main dependency that the JustGiving app has is of course the JustGiving Public API, for which we have Development, Staging and Live versions.

Using a simple combination of dependency injection and compilation symbols, we can easily manage a distinct set of configuration settings for each of our development, staging and live app environments.

First, we edit the iOS project’s configuration options to reflect our 3 environments, as depicted in this screenshot:

Xamarin Studio – Solution Configuration Options

This allows us to compile a local development version (DEBUG), a staging version (RELEASE) and a live version (APPSTORE) of the app. You can name these configurations whatever you like.
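Under the hood, each of these configurations simply defines its own compilation symbol. In the iOS project’s .csproj that amounts to something like this (a sketch – the real property groups contain plenty of other settings):

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'AppStore|iPhone' ">
  <DefineConstants>APPSTORE</DefineConstants>
</PropertyGroup>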

Next, we need a set of configuration settings classes that our dependency injection framework can bind differently based on those compilation symbols.

public interface IConfigurationSettings
{
    string ThirdPartyApiKey { get; }
    string AnotherThirdPartyApiKey { get; }
}

public class DebugConfigurationSettings : IConfigurationSettings
{
    public string ThirdPartyApiKey { get; protected set; }
    public string AnotherThirdPartyApiKey { get; protected set; }

    public DebugConfigurationSettings()
    {
        ThirdPartyApiKey = "debugKey";
        AnotherThirdPartyApiKey = "otherDebugKey";
    }
}

public class ReleaseConfigurationSettings : IConfigurationSettings
{
    public string ThirdPartyApiKey { get; protected set; }
    public string AnotherThirdPartyApiKey { get; protected set; }

    public ReleaseConfigurationSettings()
    {
        ThirdPartyApiKey = "stagingKey";
        AnotherThirdPartyApiKey = "otherStagingKey";
    }
}

public class AppStoreConfigurationSettings : IConfigurationSettings
{
    public string ThirdPartyApiKey { get; protected set; }
    public string AnotherThirdPartyApiKey { get; protected set; }

    public AppStoreConfigurationSettings()
    {
        ThirdPartyApiKey = "liveKey";
        AnotherThirdPartyApiKey = "otherLiveKey";
    }
}

Now, deep in the setup code of the app, the following code binds a different concrete implementation of our settings class to our interface IConfigurationSettings, based on the relevant compilation symbol. Note that we are using MvvmCross, and so are using its own dependency injection framework, but the concept is the same whatever DI framework you use.

#if DEBUG
Mvx.LazyConstructAndRegisterSingleton<IConfigurationSettings, DebugConfigurationSettings>();
#elif RELEASE
Mvx.LazyConstructAndRegisterSingleton<IConfigurationSettings, ReleaseConfigurationSettings>();
#else // APPSTORE
Mvx.LazyConstructAndRegisterSingleton<IConfigurationSettings, AppStoreConfigurationSettings>();
#endif
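From then on, nothing else in the app needs to know which environment it was built for – consumers simply resolve the interface. A quick sketch (ThirdPartyClient here is a hypothetical consumer, not one of our real classes):

var settings = Mvx.Resolve<IConfigurationSettings>();
// The key below is "debugKey", "stagingKey" or "liveKey" depending on the build
var client = new ThirdPartyClient(settings.ThirdPartyApiKey);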

So that sorts the configuration settings side of things out. Now let’s delve into the info.plist.

The iOS info.plist file can be thought of by .NET web developers as the equivalent of web.config. All manner of iOS-relevant configuration options appear in the info.plist including version information, supported orientations, iOS versions, the names of your image asset catalogues, etc. Additionally, some iOS 3rd party libraries require that you include certain values in this file (for example the Facebook SDK requires your Facebook App ID).

We originally had one Info.plist file, and our Jenkins build server would use PlistBuddy to change individual values (like the Facebook App IDs above) at build time depending on the environment being built for. However, this meant those values lived in Jenkins rather than in source control with everything else, which felt rather opaque to the developers and not easy to look up or quickly change.

We quickly refactored to simply having multiple versions of the info.plist file, one for each environment.

An info.plist for every environment

As with the compiled configuration settings before, this now allows you to easily manage 3 completely distinct build-time configuration option sets. A few of the more useful info.plist properties to differentiate between environments are:

Version – we put ‘Development’ or the Jenkins build number here. This appears in the Settings screen of the app, which means any team member (or indeed user) unsure of which version they’re on can quickly look it up.

BundleId – things can get messy if you don’t vary the BundleId, which you can think of as the main identifier of your app. Many user-facing functions depend on BundleIds being congruent across services. For example, our Azure Mobile Services staging app for push notifications is provisioned with an APNS ‘Test’ certificate generated for an app with a BundleId of ‘’. This same BundleId appears in our (staging) Facebook App iOS settings as well. As iOS devices can only have a single version of a single app (as identified by the BundleId) installed at once, having a different BundleId for each environment also allows users to have all 3 versions on their devices at the same time.
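In each Info.plist this just means a different CFBundleIdentifier, along these lines (values illustrative, not our real IDs):

<key>CFBundleIdentifier</key>
<string>com.mycompany.myapp.staging</string>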

App icons – using Asset Catalogues became the preferred way to manage and ship your assets from iOS 7 onwards. The good news is that if you’re doing it this way, it is relatively easy to vary your app icons by environment. This is obviously very useful to those members of your team likely to have multiple versions of the app on their phones at any one time!

We have a separate App icon asset catalogue for each environment:

An Asset Catalog per environment

And then the relevant value in each info.plist file:

Dev environment info.plist
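In text form, the value being varied is the path to the icon set within the environment’s asset catalogue – something like the following (XSAppIconAssets is the key Xamarin writes for asset-catalogue icons; the path here is illustrative):

<key>XSAppIconAssets</key>
<string>Resources/Images.Dev.xcassets/AppIcons</string>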

This results in our beautifully illustrative and differing app icons:

App per environment

The final question is: how do we use these different Info.plist files? Well, Jenkins (in the app’s ‘Release’ build job) simply deletes the Info.plist on disk (which is the development version) and renames ‘Info.Release.plist’ to ‘Info.plist’ before compiling the app as normal. Similarly, a step in the ‘AppStore’ Jenkins job renames ‘Info.AppStore.plist’ to ‘Info.plist’.

Jenkins renames the relevant info.plist files
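As a shell step, that amounts to something like this (project paths illustrative):

# 'Release' job, before the usual build step:
rm MyApp.iOS/Info.plist
mv MyApp.iOS/Info.Release.plist MyApp.iOS/Info.plist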

Simple but effective!

Keeping separate apps for each of your environments not only avoids confusion when your internal users are using your app, but also allows solid and predictable testing for your QA team and allows you to diagnose issues with more confidence. Using the two tricks described above, you can achieve this variance effortlessly and be on your way to further scaling up your app development!

Lead Developer, Mobile Platforms
JustGiving – The world’s social platform for giving

My 60 second review of QCon London 2015.


I was fortunate enough to attend QCon London 2015 last week, and so I present my 60 second review.



1. Microservices everywhere.

I guess this speaks to the success people are having with microservices, but microservices-talk was all over this QCon. In fact, too much so: there was a lot of duplicated knowledge presented across talks, and by the end of it I was a little tired of it all.

2. New meme: Binary protocols, and protocol design.

With the explosion of microservices in-house, focus naturally turns to performance issues around TCP/HTTP/JSON etc. Many folks are dropping down to lower-level, sometimes custom-written, protocols – eg. tchannel on Thrift at Uber.

3. Oh Node Where Art Thou?

For all its popularity, Node.js was pretty absent, and when mentioned, only in dispatches. I would’ve liked to see more node-love.

4. Potential new meme: the Actor model.

Some people seem to be attacking concurrency using the Actor model or “actor-based concurrency” (eg. Halo 4, built on Azure using Orleans). I will be reading up more on this!

Overall, I was glad I was there, but I don’t think the talk curatorship was as good as in previous years, and I would hesitate if using my own money to buy a conference ticket! (I helped out as crew, which I have done for several years, and so get a free pass.)

I will though of course be back next year!

Videos of talks will be released over time here:

Blog’s 2013 in Review


Not many chances to create a post with one click – so here is my ‘WordPress Year in Review’!


The stats helper monkeys prepared a 2013 annual report for this blog.

Here’s an excerpt:

A San Francisco cable car holds 60 people. This blog was viewed about 1,300 times in 2013. If it were a cable car, it would take about 22 trips to carry that many people.

Click here to see the complete report.

Book Review: REMOTE: Office Not Required


REMOTE: Office Not Required is the new book from successful software outfit 37Signals, authors of REWORK, which I enjoyed immensely as it chimed with my own love of calling bullshit on the ‘norms’ of contemporary office & work culture. I had been keenly awaiting the publication of REMOTE since Jason’s TED Talk on the same topic resonated with me greatly. I suspect many developers can sympathise with the notion of being constantly interrupted at work – sometimes in a good way by that hot girl/guy in HR, sometimes in the most annoying way by a Project Manager wanting to know when it’ll be done. Either way, several of these interruptions spaced over the course of the day are all that is required to destroy flow-time and thus really retard productivity. Jason’s premise in the talk is that the modern office is just not the place where work gets done anymore, especially if you are a ‘creative’ (which includes designers, authors, journalists and, of course, software developers), and I have to broadly agree.

REMOTE: Office Not Required is a more in-depth exploration of remote working as a way to reclaim productivity by working in an environment that you fully control. Most of the book focuses on the increased happiness of the employee. While a lot of this was obvious (who wouldn’t want to skip the commute?), it did nonetheless open my mind even wider to what is available when your workplace offers very flexible remote working. My newest experimental idea is to travel for a long weekend to another city, but leave on, say, the Wednesday night and work in a co-working space on the Thursday and Friday, exploring the city by night on those days and over the weekend, and return on the following Tuesday. Cheaper transport as it’s off-peak, a long-weekend holiday and, on top of that, no missed work. Surely everyone wins! (Of course this probably only works for solo trips, and there may be other constraints.)

As a person who intensely hates wasting time, remote working is something I’m naturally attracted to. It saves me 2 hours a day in not commuting. It also apparently makes me happier (see #3). A natural question that comes up when I discuss this with people is “How is that extra time spent – work or play?” To ask this question, and others like it, is to miss the point, and rather moot. They accept a clear dividing line between work and personal life as a basic first premise – a premise that doesn’t apply to my life, at least. Why not run errands on a Thursday afternoon if you can complete that project with a few hours on Sunday night? As I do what I love and love what I do, the line between work and play has long since been blurred. In practice some workers may spend the 2 hours of saved commute time on themselves always, on themselves sometimes, or on their work always. The point is, it’s 2 hours spent doing something you want to do instead of something you’re forced into. That’s got to be a good thing for employees, and for employers that care about staff happiness. In a regular office where everyone is in 9-5, you’re still going to get people that perform better than others by using their time more wisely. The same is true for remote workers – you may want to continue to structure your reward scheme such that people that produce more, even remotely, are rewarded more.

One of the more eye-opening chapters in the book is called “Morning remote, afternoon local” and speaks of one employee who spends each morning working from home, then comes into the office for the afternoon. I love this as it guarantees a minimum amount of flow-time and productivity every day, while also allowing you to catch up face-to-face with those you need to, and vice versa. I’ve tried this a couple of times and it does have the intended effect, but one side-effect is that you’re commuting for 2 hours within a 4- or 5-hour window, and that felt like a lot of wasted time again! Definitely a net benefit though, as you can combine the commute with the lunch break.

So all is good and well with remote working from the perspective of the employee, but I’ve also always been interested in what makes good companies perform well from a social and cultural perspective. What makes a good culture? What strengthens social capital? Some would say at least a little of it is serendipitous communication at work – “water cooler moments”. It seems likely that a 100% remote-working company has challenges in these areas. REMOTE did not cover this a great deal, aside from a couple of notes about creating a “virtual water cooler” and suchlike. 37Signals is a company with mostly remote employees, but also mostly tech-comfortable, ‘remote-ready’ employees: people that are not averse to hanging out in chatrooms. My company has over 100 employees, most with little remote experience and even fewer that are completely comfortable online. Added to that, we have a huge variety of roles and teams, and some, I suspect, are not suited to remote working. How does the company manage the remoteness of some teams and not others? How do we manage the expectations of the non-remote teams when they need the help of remote workers? The book had few answers for these questions.

Another point that comes up in discussion is the fact that some people might abuse the trust placed in them as remote workers. This is true, and it does happen – I’ve seen it. However, I also think that most often this degrading situation is not the fault of the remote worker, but rather of the employer/management. At best, the person may not be sufficiently motivated because the work is boring or some other similar dysfunction, and guess what – they’d be just as demotivated at the office as they are at home (and thus, just as unproductive). And at worst, the person isn’t self-managed enough for remote working and you’ve made a hiring mistake, which by the way would also play out similarly if the person had to come in every day. So you see, blaming remote working is just a cop-out for managers that don’t really understand how to manage true productivity (ie. work outcomes) and employee motivation.

Ultimately, on finishing the book, I realised that the book I wanted to read was not really a variety of anecdotes from an entrepreneur who, over 10+ years, set up his still <50-employee business to be remote from the get-go, but rather an in-depth exposé of the high-performance, 200+ employee, frequently-shipping, distributed software-delivery machine that is Github (or perhaps Facebook – I’m not sure what proportion of FB is remote). These are the companies I want to start modelling my company on, and I suspect they have many more remote-working learnings to supply than 37Signals. However, this book will help win the PR war, and help reposition remote working as an accepted, and eventually expected, practice.

I’m now on to another book about remote working – The Year Without Pants, Scott Berkun’s account of his experiences coming in from the consulting world to a real company, Automattic, the company behind Scott’s perspective is a more accurate proxy of my own devil’s-advocate position on remote working and progressive company cultures in general, and I’m interested to see how he adjusts, given that the major portion of his working experience is from late-90’s Microsoft!

Try Ghost!


One of the reasons I don’t blog enough is that WordPress makes it pretty complicated to do simple things, and very difficult to make beautiful things (I have been looking for a decent theme for years, premium or otherwise). It might be powerful and extensive in its toolset, but that is actually the crux of the problem. All I want is an easy editing experience when ‘creating content’, which really just means writing words and throwing in a few pictures (perhaps also sometimes dragging in some code). When your requirements are that simple, WordPress becomes the sledgehammer solution.

So I was very excited to stumble upon the Ghost Kickstarter project, and immediately pledged my £25. I’m looking forward to using a beautiful blogging platform that only contains the things I need to actually create blog posts, not run an entire media empire! That said, I like that it is also open-source, as this means there’ll be solutions to the problems most often hit by bloggers, and some of those solutions will have come from the community. Check out to see more. They are apparently releasing to their Kickstarter backers this September!

QCon wrap-up


Better late than never! Here is my rather lengthy wrap-up of QCon London, which I attended in February.


The main topics of conversation (at least in my perception of the conference) were REST (how to do it ‘properly’) and micro-services / component-based architecture: breaking up big apps into independent components that are more easily ‘managed’ (ie. refactored, rewritten, deployed, etc.).

My favourite talk: Steve Sanderson’s “Rich HTML/JS apps with Knockout.js and no server” – thoroughly recommended.

Also worth a look was Mike Hadlow’s EasyNetQ talk – bringing RabbitMQ to .NET developers with an easy-to-use wrapper that can get you creating distributed systems in relatively little code.

Crowd-favourite was Damian Conway’s “Fun with Dead Languages” – definitely watch this for the most entertaining keynote I’ve ever seen!

QCon (InfoQ) will be publishing the talk videos online over the next few months. See their video calendar to see what’s available now.


First up, I attended Dan North’s tutorial titled “Accelerated Agile”. Dan explained that he has spent a lot of the last few years at a company that wasn’t practicing Agile in the way that we know it (his LinkedIn describes the role as “Lean Technology Specialist at DRW Trading Group, Chicago”). They were breaking most of the rules and creating their own, but also meeting with much success. Framing what he observed there within his years of agile experience beginning in the 90’s, he has boiled down a collection of patterns that he is now wrapping up and presenting in his “Accelerated Agile” course. They range across both engineering practices and organisational practices, and I’ll enumerate the ones I found most interesting from the day of training here.

But a little history to start. Dan explained that Agile initially emerged in the 90’s to combat the unpredictability of software development, where massive projects would be worked on for years without end users ever seeing them, and eventually get canned several years into development as things degenerated into a mess. This rang true for me, as until recently I worked at an organisation with this exact problem – a big, slow, clunky enterprise system where tech debt was never paid off and most of the core technology was older than 5 years! However, we need to upgrade our definition of the success of Agile, as predictability is not that important anymore. In my own experience I have time and again seen that prioritising predictability has a negative effect on product quality, with specific reference to user experience. I have long thought deadlines to be the single biggest evil in software development, and it is good to see the tide turning towards quality over predictability. After all, who cares how long it takes? Just make it good when it arrives! Nobody remembers that a product was late, only that it is great! I think there are many project managers that still need to learn this lesson. Anyway, back to Dan’s tutorial – here are some of the patterns he revealed:

3 Phases of Activity:

An individual/team/organisation is typically in 1 of 3 phases:

1. Explore: Goal is to learn; behaviour is around experimentation and discovery. “Will this work?”

2. Stabilise: Goal is to optimise for predictability and repeatability, minimising variance. “Ok, we know (or highly suspect) this works; let’s build it out and provide a rough schedule given what we’ve recently discovered.”

3. Commoditise: Goal is to minimise cost and maximise efficiency. “Ok, we’ve built it; now let’s scale it up!”

The first two are probably most relevant to software product development. I think Ops/DevOps probably think about number 3 a lot more.

Dan said that as an individual/team/organisation, we need to be deliberate about which phase we’re in and use the corresponding criteria to judge success. One contender for Quote of the Day came from the audience: the “business is often in Explore mode while expecting product delivery to be in Stabilise mode!”

One further observation from the floor which I found interesting was that different personalities favour different phases. For example, some developers love to play with the new stuff (Explore) but get a little bored when having to build out an entire product or feature once the novelty wears off. Other developers aren’t natural explorers but are OK to just get through building out a product once they’ve been shown/taught what new tools to leverage.

“Fits In My Head”

Here Dan talked about the importance of conventions across the codebase, and the ability for engineers to effectively ‘reason about’ the codebase. One of the negative side-effects of TDD is that it encourages the emergence of ‘fractal systems’, allowing developers to be ‘intentionally ignorant’ of parts of the codebase outside of the classes-under-test they are currently working on. Over time you end up with several different ways of doing the same or similar things, which makes the codebase difficult to reason about and violates the principle of least surprise. I am a strong believer in this concept, but at the same time my experience tells me it’s not easy to stick to conventions indefinitely: better ways of doing things are constantly emerging, and it is usually not plausible to go through an entire codebase to update, say, the way we access data – it has to be upgraded piecemeal. However, I am writing this a few days after having refactored some code at work that fetched a list of Countries in 5 different ways using 3 different representations of Country, so the point is not lost! I think in the end it is a balance, but certainly more software teams need to be more deliberate and public about their conventions, and allow developers to break from convention only when the new/different way of doing things is demonstrably better and going to itself form a new convention!

“Short Software Half-life”

Dan started this one with a question to the room: “How long does it take, on average, before half the code in your codebase has been rewritten?” The average from the room was 2-3 years (I think in reality it might be much longer than this for most enterprise apps). Dan wowed the room when he revealed that at DRW, for some products, it was 6 weeks! He recounted that they might build a new product or feature in one language, then write the next iteration of that product from scratch in a new language, using the knowledge they’d gained on both the technical and product side, and end up adding more value in a shorter amount of time just because they’d started from a blank slate. This seemed a little extreme at first, but then I started thinking about how much time we spend refactoring and upgrading our codebases! For some products it may well be worth starting from scratch: using the latest tools and frameworks, together with the new product-related information we have from the previous version, it is possible to build a better product in a shorter time, with less code, more agility and better maintainability going forward. After all, source code is a liability, not an asset. This practice becomes a lot more plausible when you have broken your big application up into componentised modules, and this was a recurring theme for the rest of the conference. It also puts you in a position to rewrite certain components in different languages or platforms that might suit the job in hand a lot better.

“Blink Estimation”

Another practice Dan said he saw a lot at DRW was Blink Estimation. This is basically getting a few experts in a room, talking about a project a little bit and then guessing how long it would take. No Feasibility Studies, no Impact Analyses, no Gantt Charts, nothing. “It’ll take about 6/8 people about 6/8 months.” This way of working is obviously only feasible if your organisation supports it (ie. supports the flexibility needed when 6 people turn into 8 and 6 months turns into 10!). It goes back to prioritising outcomes (good-quality products with delighted users) rather than developing until an arbitrary deadline. Dan thinks that way too much effort is put into planning and estimation, and I agree. That said, in order to practice Blink Estimation, he did say that you need “experts estimating, an expert messenger (who can manage expectations effectively) and an expert client (who knows their business really well and can work with the developers to articulate what needs to be done)”. Working in a consumer-orientated website company as I do means the constraints are not as tight as in consultancy, and deadlines are relegated to being guidelines while we constantly re-evaluate whether to spend longer on a project or feature or switch to delivering value in another part of the organisational eco-system.

As usual with these things, I found myself observing that these practices are only executable in a relatively progressive organisation with relatively passionate and engaged people! Still, if you’re in an organisation like that, it’s good to either have confirmation that some of the things you are already doing are ‘emerging best practices’ or get some idea of what practices you should be engaging in next.

That’s the tutorial day done, now on to the conference.

Of course much of the fun in a conference is actually attending it, absorbing the atmosphere, chatting to fellow developers in the Open Spaces and learning a thing or two in the live talks and demos. But here I’m going to summarize a few of my favourite talks and link to the videos.

Barbara Liskov’s keynote was the first order on the opening day. This was mostly spelunking around the historical evolution of programming theory and concepts, via academic papers coming out over the decades from the universities most engaged in computer science. Barbara reminded us that some of the concepts we take for granted as relatively recent inventions were first thought of in the 70’s! The keynote was generally very interesting, if not particularly actionable. If you’re into your computer science academic papers (who isn’t, right?), Graham Lee summarised the list Barbara mentioned very well.

GOV.UK – Paul Downey, Technical Architect.

This was an interesting talk recounting how Paul’s team essentially needed to deliver a brand new site serving much of the content of the DirectGov site (redesigned and updated) and seamlessly slip it online without much fuss, redirecting all links to the old DirectGov site to their GOV.UK equivalents to cater for the many bookmarks and links to the old site, including on physical products! The team gradually redirected more and more of the DirectGov traffic to GOV.UK, eventually culminating in a switch-off of the old site on 17 October 2012.

Much kudos needs to go to the team: a quick surf around GOV.UK and you can see how useful and easy-to-use it is, with content far superior to what came before.

Road to REST – Rickard Oberg

Rickard’s talk centred around the road to REST in his application and deciding how to expose his REST services. While he initially exposed his data model directly, he ended up taking his REST services one layer of abstraction higher and exposing use cases instead. So instead of:
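Something like this (illustrative URLs – my reconstruction from the password-reset walkthrough he gives below, not Rickard’s actual slides):

GET /users/1
PUT /users/1   (clients manipulate fields of the user entity directly)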



He’d rather have:
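Something like this (again illustrative):

POST /administration/users/1/resetpassword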



Rickard now recommends exposing REST functionality this way, and reminds us that the best REST services are actually analogous to ugly websites, as they  effectively embed single use cases through a sequence of pages. In the above example one might visit the Administration page, then click through to the User Management page, then click through to User1 page and from that page submit a Reset Password form. Rickard reminds us that we could effectively use this mindset to efficiently expose entire use cases or user journeys from a single intuitively designed REST endpoint. This also means you are able to refactor your domain model underneath your use case layer more flexibly than if you exposed your model directly to REST endpoints.

Mark Nottingham: HTTP 2.0

This one was interesting too, as I had no idea that “HTTP 2.0” was even in development. With HTTP 1.1 over a decade old, there is obviously room for improvement, and Mark believes that HTTP 2.0 will change the way we design, manage and operate the web. The IETF HTTPbis working group is starting with SPDY, the Google-developed web network protocol, as a base point, hoping to shape HTTP into something that better fits the way we use the web today. Mark pointed out that these days pages are usually over 1MB in size and average around 200 requests for various content. Also, mobile usage is growing, but that platform is burdened by slower speeds and, despite 4G, will be for the foreseeable future. HTTP in its current form doesn’t lend itself to these challenges, leading us to develop ‘hacks’ like CSS spriting and domain sharding to work around its shortcomings. Mark pointed out that while the working group currently includes people from Microsoft, Google and Twitter, it is not a ‘corporate consensus’ but rather just a group of engineers around the table.

If you’re interested in HTTP 2.0, you might want to check out the Working Group page and/or the draft spec on GitHub for more information.

Ward Cunningham – A Federated Wiki

Ward is working on a new federated wiki system that can pull in data from several sources into one dynamically curated outlet. There is quite a lot of support around physical inputs, like thermometers and oscillators. This looks to me pretty much like a tool for building big dashboards and live/dynamic knowledge bases. If you want to know more, check out Ward’s 8-minute TED talk. Ward’s introduction in the first few minutes is not very clear, so skip to 7 minutes in for a demo of the wiki itself.

Also, here is another example of federated wiki:

Ward also has more on his github page.

Although this looks cool, I’m not sure it’s relevant beyond certain niche communities…

Mike Hadlow – EasyNetQ

I know I’m a little late to this party, but I’ve only recently become interested in distributed architectures and the .NET tools in this space. Mike has a good presentation style and this talk was one of my favourites. Mike covered firstly why you’d want to use RabbitMQ for messaging in general, and then why you’d want to use EasyNetQ to bring it into your .NET architecture.

Why RabbitMQ?

– Brokered instead of Brokerless

– RabbitMQ is open source and free, with commercial backing from VMWare

– Implements AMQP; RabbitMQ itself is built on Ericsson’s mature and battle-hardened Erlang/OTP telecoms platform

– Multiplatform

Why EasyNetQ?
Mike said that EasyNetQ was inspired by NServiceBus and MassTransit, but because those tools weren’t ‘ready’ two years ago when he started on this, 15below chose to do something original. In terms of popularity (NuGet downloads), I see that while NServiceBus is way out in front with ~70k downloads, EasyNetQ is catching up to MassTransit, with both at around 16k.

Mike’s sell for EasyNetQ included the fact that knowing your way around AMQP is not easy, so EasyNetQ takes care of things like serialization, threading strategies, and managing connection loss, and is also more opinionated on things like QoS default settings and error handling.
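The API this buys you is tiny. A minimal pub/sub sketch using the bus.Publish/bus.Subscribe API of that era (TextMessage is a made-up message class; "host=localhost" assumes a local broker):

using System;
using EasyNetQ;

public class TextMessage
{
    public string Text { get; set; }
}

class Program
{
    static void Main()
    {
        // CreateBus hides the AMQP details: serialization, threading, reconnection
        using (var bus = RabbitHutch.CreateBus("host=localhost"))
        {
            // EasyNetQ derives exchange/queue names from the message type
            bus.Subscribe<TextMessage>("demo_subscription", msg => Console.WriteLine(msg.Text));
            bus.Publish(new TextMessage { Text = "Hello from EasyNetQ" });
            Console.ReadLine(); // wait so the subscriber can receive the message
        }
    }
}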

On a slide summarising 15below’s experiences, Mike related that having your messaging in code turns out to be a big benefit over using the database-as-queue. Some might describe RabbitMQ as a black box, but it is a very reliable black box compared to SQL Server, which can be ‘touchy’. The SQL database is no longer the bottleneck at 15below. Performance is also great: they’re running a multi-billion-pound business and are not even anywhere near RabbitMQ’s ‘red lines’.

Mike wrapped up by saying that this is in his opinion one of the better ways to build an SOA, and urged us to check out EasyNetQ today!

Steve Sanderson: Knockout + Azure + PhoneGap

Steve now works on Azure Mobile Services, and his talk demonstrated building a KnockoutJS site with no server – hooked up to Azure instead – and finally doing the minimum porting required to build a native iOS app (effectively containing the web app he just built). I’d never seen or played with the insides of Azure, and I must say I was really impressed with it – I will possibly migrate from AppHarbour before long!

KnockoutJS is around 3 years old now, and Steve has demo’d KO a few times before, so I’m not going to detail much of that here – for more info check out my Knockout Kickstart post or Steve’s Mix11 talk.

Side note – Steve was using a lightweight editor called docpad which looked quite cool.

On to the Azure Mobile Services part. Azure in this case just acts as a db-over-HTTP (REST). You create a new ‘Mobile Service’, which behaves like a NoSQL db, with the following kind of (pseudo-)code:

var client = new WindowsAzure.MobileServiceClient(urlToYourAzureApp, yourAppKey);
var myTable = client.getTable(tableName);
myTable.insert(data).then(callback); // asynchronous; the callback would update the viewmodel

It operates in schemaless mode until you switch the mode to locked – until then it just infers the schema, which is very handy for development.

Azure also supports easy integration with Facebook/Twitter/Microsoft/Google for user authentication, eliminating the need to develop your own ID scheme. A simple client.login('twitter').then(callback) is all that is needed to invoke the right popups and user flows – easy peasy!

To get going on the native iOS platform, Steve used PhoneGap, firing up a console app that created a barebones PhoneGap app – an Xcode project with a few standard iOS-like files and folders, plus a few files and folders more familiar to web developers. Steve was able to drop his web app straight into the PhoneGap directory structure and, with a couple of tweaks, run the app inside the native container! This was the really impressive part of the demo.

Steve’s hot tips for developing a mobile app this way:
– Leave the iPhone simulator behind, rather develop in the browser for a quicker feedback loop.
– Writing PhoneGap plugins to access native functionality is easy, so don’t be shy.
– Use commercial artwork – paying for a few images and icons etc will save loads of time and make it look decent.
– Use CSS transitions instead of $.animate() – javascript performance can drag in a mobile app.

“And Finally”

Over the 3 days of QCon I picked up that one popular theme was ‘micro-servicing’, or breaking up what have in the past been monolithic apps into smaller components:

Dan North talked about micro-servicing being a good way to enable separate development, deployment, etc. of the parts of a bigger application.

Mike Hadlow said building micro-services joined via messaging is a good way to build stable, scalable, robust SOAs.

Paul Downey of GOV.UK said it was a useful way to get their stuff done, but it must be done carefully: for example, avoid slicing horizontally (ie. a “data access” micro-service) and instead slice vertically (by app).

And now for something slightly different:

At the last QCon I noticed a few recurring phrases that reminded me that sometimes these conferences, and the community in general, can be quite an echo chamber! These aren’t technical topics, just grammatical turns of phrase that seem to catch on, which as an amateur linguist I’m always interested to observe. Last year there were at least 3 or 4 that I can’t remember now, but this year I only picked up one: “reason about”. A lot of people were talking about how easy or difficult something is ‘to reason about’!

QCon remains a really great conference and I look forward to QCon London 2014!

Knockout Kickstart


A few weeks ago I waded into KnockoutJS, using it for a simple CRUD screen at work. In short, it was a really good experience and I was amazed at how little javascript I had to write to get everything going. I’m looking forward to combining Knockout with other JS frameworks to enable more complex scenarios with just as little effort.

However, when looking for tutorials and example code online, I found some resources that weren’t that good, then looked some more and found some better stuff. It took a bit of effort to find decent learning materials – there is a fair amount of confusing and sporadic KnockoutJS content around – so I thought I’d blog what ended up being the most helpful content, to save you the hassle. If you want to get going, consider the following resources above others.

First, definitely go through the tutorial at Learn Knockout. This covers the basics and leads you to the official documentation that will serve as a great reference to the various building blocks you’ll need to refer back to (and copy code from!) when building your page/site. By the time you finish the tutorial you’ll understand the basic structure of a Knockout page, and the steps involved in building the necessary parts.

Second, definitely view Steve Michelotti’s excellent PluralSight course “Knockout Fundamentals”. I actually watched the TekPub Knockout screencast first, but found it didn’t cover the fundamentals well enough to actually build a non-trivial page: it covered Knockout in an ad-hoc manner, missing some key pieces around handling arrays, rather than the logical and cumulative treatment (building up a big app over several videos) that Steve’s course provides. Steve’s course covers all the basics systematically, so you will cover everything you need to build a proper page. The PluralSight course also ends up with a more sophisticated app than the one in TekPub, stretching your KnockoutJS knowledge further on completion. If you are not a subscriber to PluralSight, I would heartily recommend signing up (lots of great content), but failing that, the trial will give you free access to your first 200 minutes, which will cover the 1h40m KO course.

Just these two resources should be enough to get your first Knockout page/site built, but once you start hitting those edge cases and yearning for some advanced features, the following posts helped round out my Knockout knowledge even further:

Ryan Niemayer’s blog is generally pretty good, but here are two articles that are worth reading first:

Hope that helps! Next up I am going to discover how DurandalJS builds on top of KnockoutJS to make developing SPAs even easier!