
QCon wrap-up


Better late than never! Here is my rather lengthy wrap-up of QCon London, which I attended in February.

TL;DR:

The main topics of conversation (at least as I perceived the conference) were REST (how to do it ‘properly’) and micro-servicing / component-based architecture: breaking up big apps into independent components that are more easily ‘managed’ (i.e. refactored, rewritten, deployed, etc.).

My favourite talk: Steve Sanderson’s “Rich HTML/JS apps with Knockout.js and no server” – thoroughly recommended.

Also worth a look was Mike Hadlow’s EasyNetQ talk – bringing RabbitMQ to .NET developers with an easy-to-use wrapper that can get you creating distributed systems in relatively little code.

The crowd favourite was Damian Conway’s “Fun with Dead Languages” – definitely watch this for the most entertaining keynote I’ve ever seen!

QCon (InfoQ) will be publishing the talk videos online over the next few months – check their video calendar for what’s available now.

</TL;DR>

First up I attended Dan North’s tutorial titled “Accelerated Agile”. Dan explained that he has spent much of the last few years at a company that wasn’t practicing Agile in the way that we know it (his LinkedIn describes the role as “Lean Technology Specialist at DRW Trading Group, Chicago”). They were breaking most of the rules and creating their own, but also meeting with much success. Framing what he observed there within his agile experience going back to the 90s, he has boiled down a collection of patterns that he is now wrapping up and presenting in his “Accelerated Agile” course. They range across both engineering and organisational practices, and I’ll list the ones I found most interesting from the day of training here.

But a little history to start. Dan explained that Agile initially emerged in the 90s to combat the unpredictability of software development, where massive projects would be worked on for years without end users ever seeing them, and eventually get canned several years into development as things degenerated into a mess. This rang true for me, as until recently I worked at an organisation that had this exact problem – a big, slow, clunky enterprise system where tech debt was never paid off and most of the core technology was older than 5 years! However, we need to update our definition of Agile’s success, as predictability is not that important anymore. In my own experience I have time and again seen that prioritizing predictability has a negative effect on product quality, with specific reference to user experience. I have long thought deadlines to be the single biggest evil in software development, and it is good to see the tide turning towards quality over predictability. After all, who cares how long it takes? Just make it good when it arrives! Nobody remembers that a product was late, only that it is great! I think there are many project managers who still need to learn this lesson. Anyway, back to Dan’s tutorial – here are some of the patterns he revealed:

3 Phases of Activity:

An individual/team/organisation is typically in 1 of 3 phases:

1. Explore: Goal is to learn; behaviour is around experimentation and discovery. “Will this work?”

2. Stabilise: Goal is to optimise for predictability and repeatability, and to minimise variance. “OK, we know (or highly suspect) this works; let’s build it out and provide a rough schedule given what we’ve recently discovered.”

3. Commoditise: Goal is to minimise cost and maximise efficiency. “OK, we’ve built it, now let’s scale it up!”

The first two are probably the most relevant to software product development; I suspect Ops/DevOps think about number 3 a lot more.

Dan said that as an individual/team/organisation, we need to be deliberate about which phase we’re in and use the corresponding criteria to judge success. One contender for Quote of the Day from the audience was that the “business is often in Explore mode while expecting product delivery to be in Stabilise mode!”.

One further observation from the floor which I found interesting was that different personalities favour different phases. For example, some developers love to play with the new stuff (Explore) but get a little bored when having to build out an entire product or feature once the novelty wears off. Other developers aren’t natural explorers but are OK to just get through building out a product once they’ve been shown/taught what new tools to leverage.

“Fits In My Head”

Here Dan talked about the importance of conventions across the codebase, and the ability for engineers to effectively ‘reason about’ the codebase. One of the negative side-effects of TDD is that it encourages the emergence of ‘fractal systems’, allowing developers to be ‘intentionally ignorant’ of the parts of the codebase outside of the classes-under-test they are currently working on. Over time you end up with several different ways of doing the same or similar things, which makes the system difficult to reason about and violates the principle of least surprise.

I am a strong believer in this concept, but at the same time my experience tells me it’s not easy to stick to conventions indefinitely: better ways of doing things are constantly emerging, and it is usually not plausible to go through an entire codebase to update, say, the way we access data – it has to be upgraded piecemeal. However, I am writing this a few days after having refactored some code at work that was getting a list of Countries in 5 different ways using 3 different representations for Country, so the point is not lost! In the end it is a balance, but software teams certainly have to be more deliberate and public about their conventions, and only allow developers to break from convention when the new/different way of doing things is demonstrably better and is itself going to form a new convention.
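As a trivial, hypothetical illustration of what settling on one convention might look like (a sketch of my own, not from Dan’s tutorial – the module and names are invented), imagine a single agreed module and a single agreed shape for Country that every call site uses:

// countries.js – the one agreed way to get countries in this (hypothetical) codebase.
// Every call site uses this module and this shape, rather than rolling its own query.
var countryRepository = require('./countryRepository'); // hypothetical data-access module

function getCountries() {
    // Canonical representation: { code: 'GB', name: 'United Kingdom' }
    return countryRepository.fetchAll().map(function (row) {
        return { code: row.isoCode, name: row.displayName };
    });
}

module.exports = { getCountries: getCountries };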

“Short Software Half-life”

Dan started this one with a question to the room: “How long does it take, on average, before half of the code in your codebase has been rewritten?” The average from the room was 2–3 years (I think in reality it might be much longer than this for most enterprise apps). Dan wowed the room when he revealed that at DRW, for some products, it was 6 weeks! He recounted that they might build a new product or feature in one language, then write the next iteration of that product from scratch in a new language, using the knowledge they’d gained on both the technical and product side, and end up adding more value in a shorter amount of time just because they’d started from a blank slate. This seemed a little extreme at first, but then I started thinking about how much time we spend refactoring and upgrading our codebases! For some products it may well be the case that, starting from scratch with the latest tools and frameworks, together with the new product-related information from the previous version, it is possible to build a better product in a shorter time, with less code, more agility and better maintainability going forward too. After all, source code is a liability, not an asset. This practice becomes a lot more plausible when you have effectively broken up parts of your big application into componentized modules, and this was a recurring theme for the rest of the conference. It also puts you in a position where you can rewrite certain components in different languages or platforms that might suit the job in hand a lot better.

“Blink Estimation”

Another practice Dan said he saw a lot at DRW was Blink Estimation. This is basically getting a few experts in a room, talking about a project a little, and then guessing how long it would take. No Feasibility Studies, no Impact Analyses, no Gantt Charts, nothing. “It’ll take about 6/8 people about 6/8 months.” This way of working is obviously only feasible if your organisation supports it (i.e. supports the flexibility needed when 6 people turn into 8 and 6 months turns into 10!). This goes back to prioritizing outcomes (good quality products with delighted users) rather than developing until an arbitrary deadline. Dan thinks that far too much effort is often put into planning and estimation, and I agree. That said, in order to practice Blink Estimation he did say that you need “experts estimating, an expert messenger (who can manage expectations effectively) and an expert client (who knows their business really well and can work with the developers to articulate what needs to be done)”. Working in a consumer-orientated website company as I do means the constraints are not as tight as in consultancy, and deadlines are relegated to being guidelines while we constantly re-evaluate whether to spend longer on a project or feature or switch to delivering value in another part of the organisational eco-system.

As usual with these things, I found myself observing that these practices are only executable in a relatively progressive organisation with relatively passionate and engaged people! Still, if you’re in an organisation like that, it’s good to either have confirmation that some of the things you are already doing are ’emerging best practices’ or get some idea of what practices you should be engaging in next.

That’s the tutorial day done, now on to the conference.

Of course much of the fun in a conference is actually attending it, absorbing the atmosphere, chatting to fellow developers in the Open Spaces and learning a thing or two in the live talks and demos. But here I’m going to summarize a few of my favourite talks and link to the videos.

Barbara Liskov’s keynote was the first order of business on the opening day. It was mostly spelunking through the historical evolution of programming theory, tracing concepts as they emerged over the decades via academic papers from the universities most engaged in computer science. Barbara reminded us that some of the concepts we take for granted as relatively recent inventions were first thought of in the 70s! The keynote was generally very interesting, if not particularly actionable. If you’re into your computer science academic papers (who isn’t, right?), Graham Lee summarized the list Barbara mentioned very well.

GOV.UK – Paul Downey, Technical Architect.

This was an interesting talk recounting how Paul’s team essentially needed to deliver a brand new site serving much of the content of the DirectGov site (redesigned and updated) and seamlessly slip it online without much fuss, redirecting all links to the old DirectGov site to their gov.uk equivalents to cater for the many bookmarks to the old site, including links printed on physical products! The team gradually redirected more and more DirectGov traffic to gov.uk, culminating in the switch-off of the old site on 17 October 2012.

Much kudos must go to the team: a quick surf around gov.uk shows how useful and easy to use it is, with content far superior to what came before.

Road to REST – Rickard Oberg

Rickard’s talk centred around the road to REST in his application and deciding how to expose his REST services. Having initially exposed his data model, he ended up taking his REST services one layer of abstraction higher and exposing use cases instead. So instead of:

/users/user1/changepassword

/users/user1/resetpassword

He’d rather have:

/users/user1/changepassword

/administration/usermanagement/user1/resetpassword

Rickard now recommends exposing REST functionality this way, and reminds us that the best REST services are actually analogous to ugly websites, as they effectively walk through single use cases via a sequence of pages. In the above example one might visit the Administration page, then click through to the User Management page, then click through to the User1 page and from that page submit a Reset Password form. Rickard reminds us that we could use this mindset to expose entire use cases or user journeys from a single, intuitively designed REST endpoint. It also means you can refactor the domain model underneath your use-case layer more flexibly than if you exposed your model directly as REST endpoints.
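Rickard’s examples were framework-agnostic, but as a rough sketch of what a use-case-oriented endpoint might look like in code (using Express purely for illustration – the framework choice, module and handler names are my own assumptions, not from the talk):

var express = require('express');
var userManagement = require('./userManagement'); // hypothetical use-case layer
var app = express();

// The URL names the use case (an administrator resets a user's password),
// not the underlying data model, so the domain model behind it can be
// refactored without breaking the exposed REST contract.
app.post('/administration/usermanagement/:userId/resetpassword', function (req, res) {
    userManagement.resetPassword(req.params.userId)
        .then(function () { res.sendStatus(204); })
        .catch(function (err) { res.status(500).send(err.message); });
});

app.listen(3000);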

Mark Nottingham: HTTP 2.0

This one was interesting too, as I had no idea that “HTTP 2.0” was even in development. With HTTP 1.1 over a decade old, there is obviously room for improvement, and Mark believes that HTTP 2.0 will change the way we design, manage and operate the web. The IETF HTTPbis working group is starting with SPDY, the web network protocol developed at Google, as a base point, hoping to shape HTTP into something that better fits the way we use the web today. Mark pointed out that these days pages are usually over 1MB in size and average around 200 requests for various content. Also, mobile usage is growing, but that platform is burdened by slower speeds and, despite 4G, will be for the foreseeable future. HTTP in its current form doesn’t lend itself to these challenges, leading us to develop ‘hacks’ like CSS spriting and domain sharding to work around its shortcomings. Mark pointed out that the working group currently includes people from Microsoft, Google and Twitter, but it is not a ‘corporate consensus’ – rather just a group of engineers around the table.

If you’re interested in HTTP 2.0, you might want to check out the Working Group page and/or the draft spec on GitHub for more information.

Ward Cunningham – A Federated Wiki

Ward is working on a new federated wiki system that can pull in data from several sources into one dynamically curated outlet. There is quite a lot of support around physical inputs, like thermometers and oscillators. This looks to me pretty much like a tool to build big dashboards and live/dynamic knowledge bases. If you want to know more, check out Ward’s 8-minute TED talk. The introduction in the first few minutes is not very clear, so skip to 7 minutes in for a demo of the wiki itself.

Also, here is another example of a federated wiki: http://fed.wiki.org/view/how-to-wiki

Ward also has more on his github page.

Although this looks cool, I’m not sure it’s relevant beyond certain niche communities…

Mike Hadlow – EasyNetQ

I know I’m a little late to this party, but I’ve only recently become interested in distributed architectures and the .NET tools in this space. Mike has a good presentation style and this talk was one of my favourites. Mike covered firstly why you’d want to use RabbitMQ for messaging in general, and then why you’d want to use EasyNetQ to bring it into your .NET architecture.

Why RabbitMQ?

– Brokered instead of Brokerless

– RabbitMQ is open source and free, with commercial backing from VMWare

– Implements AMQP, and is built on Erlang/OTP, Ericsson’s mature and battle-hardened telecoms platform

– Multiplatform

Why EasyNetQ?

Mike said that EasyNetQ was inspired by NServiceBus and MassTransit, but because those tools weren’t ‘ready’ two years ago when he started on this, 15below chose to do something original. In terms of popularity (NuGet downloads), I see that while NServiceBus is way out in front with ~70k downloads, EasyNetQ is catching up to MassTransit, with both around 16k.

Mike’s sell for EasyNetQ included the fact that knowing your way around AMQP is not easy, so EasyNetQ takes care of things like serialization, threading strategies, and managing connection loss, and is also more opinionated on things like QoS default settings and error handling.

On a slide summarizing 15below’s experiences, Mike related that having your messaging in code turns out to be a big benefit over using database-as-queue. Some might describe it as a black box, but it is a very reliable black box compared to SQL Server, which can be ‘touchy’. The SQL database is no longer the bottleneck at 15below. Performance is also great: they’re running a multi-billion pound business and are nowhere near RabbitMQ’s ‘red lines’.

Mike wrapped up by saying that this is in his opinion one of the better ways to build an SOA, and urged us to check out EasyNetQ today!

Steve Sanderson: Knockout + Azure + PhoneGap

Steve now works on Azure Mobile Services, and his talk demonstrated building a KnockoutJS site with no server, hooked up to Azure instead, and finally doing the minimum porting required to build a native iOS app (effectively containing the web app he had just built). I’d never seen or played with the insides of Azure and I must say I was really impressed with it – I’ll possibly migrate from AppHarbour before long!

KnockoutJS is around 3 years old now, and Steve has demoed KO a few times before, so I’m not going to go into much detail now – for more info check out my Knockout Kickstart post or Steve’s Mix11 talk.
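For anyone who hasn’t come across Knockout before, the core idea fits in a few lines – observable properties plus declarative bindings (a trivial sketch of my own, not from Steve’s demo):

// A viewmodel with observable properties; the markup binds to them via
// data-bind attributes and the UI updates automatically when they change.
function PersonViewModel() {
    var self = this;
    self.firstName = ko.observable('Steve');
    self.lastName = ko.observable('Sanderson');
    self.fullName = ko.computed(function () {
        return self.firstName() + ' ' + self.lastName();
    });
}

ko.applyBindings(new PersonViewModel());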

Side note – Steve was using a lightweight editor called docpad which looked quite cool.

On to the Azure Mobile Services part. Azure in this case just acts as a DB-over-HTTP (REST). You create a new ‘Mobile Service’, which behaves like a NoSQL DB, with the following kind of (pseudo-)code:

var client = new WindowsAzure.MobileServicesClient(urlToYourAzureApp);
var myTable = client.getTable(tableName);

myTable.insert(data).then(callback); // asynchronous
myTable.read().then(callback);       // callback would update the viewmodel

It operates in schemaless mode until you switch it to locked mode – until then it just infers the schema, which is very handy for development.

Azure also supports easy integration with Facebook/Twitter/Microsoft/Google for user authentication, eliminating the need to develop your own ID scheme. A simple client.login(‘twitter’).then(callback) is all that is needed to invoke the right popups and user flows – easy peasy!

To get going on the native iOS platform, Steve used PhoneGap, firing up a console app that created a barebones PhoneGap app – an Xcode project with a few standard iOS-like files and folders, plus a few files and folders more familiar to web developers. Steve was able to drop his web app straight into the PhoneGap directory structure and, with a couple of tweaks, could run the app inside the native container! This was the really impressive part of the demo.

Steve’s hot tips for developing a mobile app this way:
– Leave the iPhone simulator behind; develop in the browser instead for a quicker feedback loop.
– Writing PhoneGap plugins to access native functionality is easy, so don’t be shy.
– Use commercial artwork – paying for a few images and icons will save loads of time and make it look decent.
– Use CSS transitions instead of $.animate() – JavaScript performance can drag in a mobile app (see the sketch below).
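On that last tip, the trick is to let the CSS engine do the animating and only use JavaScript to toggle a class (a minimal sketch of my own, assuming a CSS rule with a transition has already been defined):

// CSS (defined elsewhere):
//   .panel       { transition: transform 0.3s ease; }
//   .panel.slid  { transform: translateX(200px); }

// Instead of $('.panel').animate({ left: '200px' }, 300), which animates on
// the JavaScript thread, just toggle a class and let the CSS transition run:
document.querySelector('.panel').classList.add('slid');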

“And Finally”

Over the 3 days of QCon I picked up that one popular theme was ‘micro-servicing’, or breaking up what have in the past been monolithic apps into smaller components:

Dan North talked about micro-servicing being a good way to enable separate development, deployment, etc. of the parts of a bigger application.

Mike Hadlow said building micro-services joined via messaging is a good way to build stable, scalable, robust SOAs.

Paul Downey of GOV.UK said it was a useful way to get their stuff done, but it must be done carefully – for example, avoid slicing by technical layer (i.e. a “data access” micro-service) and slice by app instead.

And now for something slightly different:

As at the last QCon, I noticed a few recurring phrases that remind me that sometimes these conferences, and the community in general, can be quite an echo chamber! These aren’t technical topics, just grammatical turns of phrase that seem to catch on, which as an amateur linguist I’m always interested to observe. Last year there were at least 3 or 4 that I can’t remember now, but this year I only picked up one: “reason about”. A lot of people were talking about how easy or difficult something is ‘to reason about’!

QCon remains a really great conference and I look forward to QCon London 2014!

Illegitimate Affecters


I’ve long been searching for a succinct noun phrase for that certain annoying thing that seemingly requires you to do something in a less-than-perfect way. Something that you know can and should be done better. You curse having that ‘requirement’, you loathe having to create a ‘workaround’, and you dream of a better world. Whether it’s the tardiness of the web standards world requiring CSS browser hacks, or the less-than-stellar API of an enterprise system with which you have to integrate, or perhaps being forced to browse in IE because of corporate bureaucracy, you’ve probably become annoyed at having to contend with this thing that, if the world was a better place, would not otherwise be detrimental to your work or productivity.

Having failed to find the correct phrase for such a thing, I’ve done what I usually do and made one up. I now consider something like the above to be an illegitimate affecter. Having this succinct term is useful when identifying individual constituent parts of the system that can then be factored out by function and treated to an increase in quality (refactoring/rewriting) on a part-by-part basis.

For example, I sometimes stumble on poorly named properties only to trace their source back to the (poorly named) property of an outside API. The developer who first created the property in our application used the property name of the API object as an affecter but failed to recognize it as illegitimate, unnecessarily causing that little bit of extra confusion for all developers subsequently working with that property.

I’ve also been in design discussions where reasons are brought up against making some architectural/design change to the system, only to be carefully exposed as illegitimate affecters – often temporary in nature, and sometimes actually lending more weight to the argument for undertaking the proposed change.

When they are outside of your control, illegitimate affecters should be kept at bay by abstraction layers. When inside your control, they themselves should form part of the redesign and refactoring discussions.

What illegitimate affecters can you think of?

Lean and Mean


A couple of weeks ago I presented at an internal company conference on Lean Software Development and what we can learn from Lean Manufacturing. As I am still a neophyte at presenting, the PowerPoint notes are prosaic and should be a coherent read.

I also included a slide discussing the Estimation Fallacy and Optimism Bias in an attempt to provide some insight into why software estimation is so often inaccurate.

The last thing I wanted to slip in was a slide on The Alignment Trap, as presented in a Bain Consulting industry research publication released back in 2007. Briefly, the conclusions of this study put companies in four distinct quadrants: well-aligned/effective, less-aligned/effective, well-aligned/ineffective and less-aligned/ineffective. The sales figures of the companies in the less-aligned/effective quadrant were better than those of the well-aligned/ineffective companies. The lesson being that companies should avoid the alignment trap by letting their IT departments become effective at what they’re doing before aligning IT operations more closely with the business strategy.

It’s all in the presentation in better detail here!

Software is like Tetris


In the ongoing quest to helpfully frame the dos, don’ts and whys of software engineering to management, every so often we come up with illuminating analogies that do wonders to illustrate why a software team should or should not follow a particular strategy. One very successful one is technical debt (coined by Cunningham and later helpfully expanded on by Steve McConnell). Although they can be misused, analogies generally help us to communicate to non-techies with more clarity. Having pondered on the life of my current 2yr+ project, I’ve come up with another (albeit flippant and high-level) analogy: Software is like Tetris.

A software project, with specific focus on providing ongoing value to the business as the product evolves towards feature plateau, is like a game of Tetris but with one alteration: the blocks fall quicker as they stack higher. What I mean is that the closer you are to conducting zero-overhead iteration after zero-overhead iteration, the more time (read: less pressure) you have to solve problems: engage in root cause analysis, attack process bottlenecks, conduct code reviews, lower technical debt, and several other things that don’t directly add business value but that are obviously invaluable. The converse to all this is what I experienced on the project I’m on now, but under previous management: as deadlines tighten on features promised to the business by overexuberant non-techie managers, those same managers demand that nothing else be done except the implementation of the feature, the quick-and-dirty hacking in of which slows down the implementation of future features. Developers are forced into increasing their estimates because implementing features in a messy codebase takes more time and carries higher risk, and managers think the wool is being pulled over their eyes. Trust ebbs away, entropy sets into the codebase, the business gets unstable software, morale sinks, and your Tetris blocks are piling up fast and falling quicker.

So it seems the focus should not be on delivering a piece of software. The focus should be on building a team of engineers, supported by managers, testers and business domain specialists, as well as the right tools and a comfortable working environment, to create an entity that can sustain the delivery of features with quality and predictability. Peaks and troughs in feature quantity and quality should be rare, and small tweaks and improvements should be made to aid a gradual increase in quality and quantity. Effort should go into keeping overhead low, and your features should tick away like lines of blocks in a well-running Tetris game.