Better late than never! Here is my rather lengthy wrap-up of QCon London, which I attended in February.
Main topics of conversation (at least my perception of the conference) were REST (How to do it ‘properly’), and micro-servicing / component-based architecture: breaking up big apps into independent components that are more easily ‘managed’ (ie. refactored, rewritten, deployed, etc).
My favourite talk: Steve Sanderson’s “Rich HTML/JS apps with Knockout.js and no server” – thoroughly recommended.
Also worth a look was Mike Hadlow’s EasyNetQ talk – bringing RabbitMQ to .NET developers with an easy-to-use wrapper that can get you creating distributed systems in relatively little code.
Crowd-favourite was Damian Conway’s “Fun with Dead Languages” – definitely watch this for the most entertaining keynote I’ve ever seen!
QCon (InfoQ) will be publishing the talk videos online over the next few months. See their video calendar to see what’s available now.
First up I attended Dan North’s tutorial titled “Accelerated Agile”. Dan explained that he has spent much of the last few years at a company that wasn’t practicing Agile in the way we know it (his LinkedIn describes the role as “Lean Technology Specialist at DRW Trading Group, Chicago”). They were breaking most of the rules and creating their own, but also meeting with much success. Framing what he observed there within his years of agile experience beginning in the 90’s, he has boiled down a collection of patterns that he is now wrapping up and presenting in his “Accelerated Agile” course. They range across both engineering practices and organisational practices, and I’ll enumerate the ones I found most interesting from the day of training here.
But a little history to start. Dan explained that Agile initially emerged in the 90’s to combat the unpredictability of software development, where massive projects would be worked on for years without end users ever seeing them, and eventually get canned several years into development as things degenerated into a mess. This rang true for me, as until recently I worked at an organisation that had this exact problem – a big, slow, clunky enterprise system where tech debt was never paid off and most of the core technology was more than 5 years old! However, we need to upgrade our definition of the success of Agile, as predictability is not that important anymore. In my own experience I have time and again seen that prioritizing predictability has a negative effect on product quality, with specific reference to user experience. I have long thought deadlines to be the single biggest evil in software development and it is good to see the tide turning towards quality over predictability. After all, who cares how long it takes? Just make it good when it arrives! Nobody remembers that a product was late, only that it is great! I think there are many project managers who still need to learn this lesson. Anyway, back to Dan’s tutorial – here are some of the patterns he revealed:
3 Phases of Activity:
An individual/team/organisation is typically in 1 of 3 phases:
1. Explore : Goal is to learn; behaviour is around experimentation and discovery. “Will this work?”
2. Stabilise : Goal is to optimise for predictability, repeatability, minimize variance. “Ok we know (or highly suspect) this works, let’s build it out and provide a rough schedule given what we’ve recently discovered.”
3. Commoditise: Goal here is to minimise cost and maximise efficiency. “Ok we’ve built it, now let’s scale it up!”
The first two are probably most relevant to software product development. I think Ops/DevOps probably think about number 3 a lot more.
Dan said that as an individual/team/organisation, we need to be deliberate about what phase we’re in and use the corresponding criteria to judge success. One contender for Quote of the Day from the audience was that the “business is often in Explore mode while expecting product delivery to be in Stabilize mode!”.
One further observation from the floor which I found interesting was that different personalities favour different phases. For example, some developers love to play with the new stuff (Explore) but get a little bored when having to build out an entire product or feature once the novelty wears off. Other developers aren’t natural explorers but are OK to just get through building out a product once they’ve been shown/taught what new tools to leverage.
“Fits In My Head”
Here Dan talked about the importance of conventions across the codebase, and the ability for engineers to effectively ‘reason about’ the codebase. One of the negative side-effects of TDD is that it encourages the emergence of ‘fractal systems’, allowing developers to be ‘intentionally ignorant’ of parts of the codebase outside of the classes-under-test they are currently working on. Over time you end up with several different ways of doing the same or similar things, which makes the system difficult to reason about and violates the principle of least surprise. I am a strong believer in this concept, but at the same time my experience tells me it’s not easy to stick to conventions indefinitely: better ways to do things are constantly emerging, and it is usually not plausible to go through an entire codebase in one go to update, say, the way we access data – it has to be upgraded piecemeal. However, I am writing this a few days after having refactored some code at work that was getting a list of Countries in 5 different ways using 3 different representations for Country, so the point is not lost! In the end it is a balance, but certainly more software teams need to be deliberate and public about their conventions, and allow developers to break from convention only when the new way of doing things is demonstrably better and will itself form a new convention!
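To make the Country example concrete, here is roughly the kind of consolidation I mean – all names and data below are invented for illustration, not the actual code from work:

```javascript
// One conventional access point and one Country representation,
// replacing several ad-hoc lookups scattered through the codebase.
// (Illustrative sketch only - names and data are invented.)
var countryRepository = (function () {
  var countries = [
    { code: 'GB', name: 'United Kingdom' },
    { code: 'ZA', name: 'South Africa' }
  ];
  return {
    all: function () {
      return countries.slice(); // defensive copy so callers can't mutate the list
    },
    byCode: function (code) {
      return countries.filter(function (c) { return c.code === code; })[0];
    }
  };
})();
```

Once something like this is the convention, a sixth way of fetching countries becomes a code-review conversation rather than an accident.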
“Short Software Half-life”
Dan started this one with a question to the room: “How long does it take, on average, before half the code in your codebase has been rewritten?” The average from the room was 2-3 years (I think in reality it might be much longer for most enterprise apps). Dan wowed the room when he revealed that at DRW, for some products, it was 6 weeks! He recounted that they might build a new product or feature in one language, then write the next iteration of that product from scratch in a new language, using the knowledge they’d gained on both the technical and product side, and end up adding more value in a shorter amount of time just because they’d started from a blank slate. This seemed a little extreme at first, but then I started thinking about how much time we spend refactoring and upgrading our codebases! For some products it may well be worth starting from scratch: using the latest tools and frameworks, together with the product knowledge gained from the previous version, it is possible to build a better product in a shorter time, with less code, more agility and better maintainability going forward too. After all, source code is a liability, not an asset. This practice becomes a lot more plausible when you have broken your big application up into componentised modules, and this was a recurring theme for the rest of the conference. It also puts you in a position where you can rewrite certain components in different languages or platforms that suit the job in hand a lot better.
Another practice Dan said he saw a lot at DRW was Blink Estimation. This is basically getting a few experts in a room, talking about a project a little bit and then guessing how long it would take. No Feasibility Studies, no Impact Analyses, no Gantt Charts, nothing. “It’ll take about 6/8 people about 6/8 months.” This way of working is obviously only feasible if your organisation supports it (ie. supports the flexibility needed when 6 people turn into 8 and 6 months turns into 10!). This goes back to prioritizing outcomes (good quality products with delighted users) rather than developing until an arbitrary deadline. Dan reminded us that he thinks far too much effort is too often put into planning and estimation, and I agree. That said, in order to practice Blink Estimation he did say that you need “experts estimating, an expert messenger (who can manage expectations effectively) and an expert client (who knows their business really well and can work with the developers to articulate what needs to be done)”. Working in a consumer-orientated website company as I do means the constraints are not as tight as in consultancy, and deadlines are relegated to being guidelines while we constantly re-evaluate whether to spend longer on a project or feature or switch to delivering value in another part of the organisational eco-system.
As usual with these things, I found myself observing that these practices are only executable in a relatively progressive organisation with relatively passionate and engaged people! Still, if you’re in an organisation like that, it’s good to either have confirmation that some of the things you are already doing are ‘emerging best practices’ or get some idea of what practices you should be engaging in next.
That’s the tutorial day done, now on to the conference.
Of course much of the fun in a conference is actually attending it, absorbing the atmosphere, chatting to fellow developers in the Open Spaces and learning a thing or two in the live talks and demos. But here I’m going to summarize a few of my favourite talks and link to the videos.
Barbara Liskov’s keynote was the first order on the opening day. This was mostly spelunking around the historical evolution of some programming theory and concepts emerging over time via several academic papers coming out over the decades from the universities most engaged in computer science. Barbara reminded us that some of the concepts we take for granted as relatively recent inventions were first thought of in the 70’s! The keynote was generally very interesting if not particularly actionable. If you’re into your computer science academic papers (who isn’t, right?), Graham Lee summarized the list Barbara mentioned very well.
GOV.UK – Paul Downey, Technical Architect.
This was an interesting talk recounting how Paul’s team essentially needed to deliver a brand new site serving much of the content of the DirectGov site (redesigned and updated) and seamlessly slip it online without much fuss, redirecting all links to the old DirectGov site to their Gov.uk equivalents to cater for the many bookmarks to the old site, including URLs printed on physical products! The team gradually redirected more of the traffic from directgov to gov.uk, eventually culminating in a switch-off of the old site on 17 October 2012.
Much kudos needs to go to the team as a quick surf around gov.uk and you can see how useful and easy-to-use it is, with the content far superior to what came before.
Rickard’s talk centred around the road to REST in his application and deciding how to expose his REST services. While he initially exposed his data model directly, he ended up taking his REST services one layer of abstraction higher, exposing use cases instead of entities.
Rickard now recommends exposing REST functionality this way, and reminds us that the best REST services are actually analogous to ugly websites, as they effectively embed single use cases through a sequence of pages. In the above example one might visit the Administration page, then click through to the User Management page, then click through to User1 page and from that page submit a Reset Password form. Rickard reminds us that we could effectively use this mindset to efficiently expose entire use cases or user journeys from a single intuitively designed REST endpoint. This also means you are able to refactor your domain model underneath your use case layer more flexibly than if you exposed your model directly to REST endpoints.
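To illustrate the difference (with URLs I’ve invented for the purpose – they are not from Rickard’s slides), exposing the data model means the client has to know which entity fields to manipulate, while the use-case style names the intent:

```
# Data-model style: the client must know to overwrite the password field
PUT /users/1/password

# Use-case style: the endpoint is the use case itself
POST /administration/users/1/resetpassword
```

The second form mirrors the “ugly website” navigation described above: each URL is a page in a user journey.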
Mark Nottingham: HTTP 2.0
This one was interesting too, as I had no idea that “HTTP 2.0” was even in development. With HTTP 1.1 over a decade old, there is obviously room for improvement, and Mark believes that HTTP 2.0 will change the way we design, manage and operate the web. The IETF HTTPbis working group is starting with the Google-developed web network protocol SPDY as a base point, and hoping to shape HTTP into something that better fits the way we use the web today. Mark pointed out that these days pages are usually over 1MB in size and average 200 requests for various content. Also, mobile usage is growing, but that platform is burdened by slower speeds and, despite 4G, will be for the foreseeable future. HTTP in its current form doesn’t lend itself to these challenges, leading us to develop ‘hacks’ like CSS spriting and domain sharding to work around its shortcomings. Mark pointed out that the working group currently includes people from Microsoft, Google and Twitter, but it is not a ‘corporate consensus’ – rather just a group of engineers around the table.
Ward Cunningham – A Federated Wiki
Ward is working on a new federated wiki system that can pull in data from several sources into one dynamically curated outlet. There is quite a lot of support around physical inputs, like thermometers and oscillators. This looks to me pretty much like a tool for building big dashboards and live/dynamic knowledge bases. If you want to know more, check out Ward’s 8-minute TED talk. Ward’s introduction in the first few minutes is not very clear, so skip to 7 minutes in for a demo of the wiki itself.
Also, here is another example of federated wiki: http://fed.wiki.org/view/how-to-wiki
Ward also has more on his github page.
Although this looks cool, I’m not sure it’s relevant beyond certain niche communities…
Mike Hadlow – EasyNetQ
I know I’m a little late to this party, but I’ve only recently become interested in distributed architectures and the .NET tools in this space. Mike has a good presentation style and this talk was one of my favourites. Mike covered why firstly you’d want to use RabbitMQ for messaging in general, and then why you’d want to use EasyNetQ to bring it into your .NET architecture.
– Brokered instead of Brokerless
– RabbitMQ is open source and free, with commercial backing from VMWare
– Implements AMQP, and is itself built in Erlang on Ericsson’s mature, battle-hardened OTP telecoms platform
Mike said that EasyNetQ has been inspired by NServiceBus and MassTransit, but because those tools weren’t ‘ready’ two years ago when he started on this, 15below chose to do something original. In terms of popularity (Nuget downloads), I see that while NServiceBus is way out in front with ~70k downloads, EasyNetQ is catching up to MassTransit with both around 16k.
Mike’s sell for EasyNetQ included the fact that knowing your way around AMQP is not easy, so EasyNetQ takes care of things like serialization, threading strategies, and managing connection loss, and is also more opinionated on things like QoS default settings and error handling.
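To give a flavour of the pattern being wrapped – and only the pattern, since EasyNetQ’s real API is C# and rides on RabbitMQ, none of which appears here – a toy in-memory publish/subscribe bus that handles serialization for the caller might look like this:

```javascript
// Toy in-memory publish/subscribe bus. This is NOT EasyNetQ's API;
// it just sketches the pattern such wrappers provide: subscribers
// register by message type, and serialization is handled for them.
function createBus() {
  var handlers = {}; // message type -> list of subscriber callbacks

  return {
    subscribe: function (type, handler) {
      (handlers[type] = handlers[type] || []).push(handler);
    },
    publish: function (type, message) {
      // Round-trip through JSON to mimic what a real bus does on the wire
      var payload = JSON.stringify(message);
      (handlers[type] || []).forEach(function (h) {
        h(JSON.parse(payload));
      });
    }
  };
}

// Usage: a subscriber reacting to a published message
var bus = createBus();
var received = [];
bus.subscribe('OrderPlaced', function (msg) { received.push(msg.orderId); });
bus.publish('OrderPlaced', { orderId: 42 });
```

The value of a real library, of course, is everything this toy ignores: connection recovery, threading, QoS and error handling.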
On a slide summarizing 15below’s experiences, Mike related that having your messaging in code turns out to be a big benefit over using a database-as-queue. Some might describe RabbitMQ as a black box, but it is a very reliable black box compared to SQL Server, which can be ‘touchy’. The SQL database is no longer the bottleneck at 15below. Performance is also great: they’re running a multi-billion pound business and are not even anywhere near RabbitMQ’s ‘red lines’.
Mike wrapped up by saying that this is in his opinion one of the better ways to build an SOA, and urged us to check out EasyNetQ today!
Steve Sanderson: Knockout + Azure + PhoneGap
Steve now works for Azure Mobile Services, and his talk demonstrated building a KnockoutJS site with no server, hooked up to Azure instead, and finally doing the minimum porting required to build a native iOS app (effectively containing the web app he just built). I’d never seen or played with the insides of Azure and I must say I was really impressed with it – I’ll possibly migrate from AppHarbor before long!
KnockoutJS is around 3 years old now, and Steve has demo’d KO a few times before, so I’m not going to detail much of that now – for more info check out my Knockout Kickstart post or Steve’s Mix11 talk.
Side note – Steve was using a lightweight editor called docpad which looked quite cool.
On to the Azure Mobile Services part. So Azure in this case just acts as a db-over-HTTP (REST). You create a new ‘Mobile Service’, which behaves like a NoSQL db, with the following kind of (pseudo-)code:
var client = new WindowsAzure.MobileServiceClient(urlToYourAzureApp); // connect to your Mobile Service
var myTable = client.getTable(tableName); // a table in the service’s datastore
myTable.read().then(callback); // callback receives the rows and would update the viewmodel
It operates in schemaless mode until you switch mode to locked – until then it just infers schema, very handy for development.
Azure also supports easy integration with Facebook/Twitter/Microsoft/Google for user authentication, eliminating the need to develop your own ID scheme. A simple client.login('twitter').then(callback) is all that is needed to invoke the right popups and user flows – easy peasy!
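Writes through the same client are symmetrical – again pseudo-code, with method names recalled from the demo rather than checked against the docs:

```
myTable.insert({ text: 'Buy milk', done: false }).then(callback); // columns inferred from the object while schemaless
myTable.update({ id: existingItemId, done: true }).then(callback); // update an item by id
```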
To get going on the native iOS platform, Steve used PhoneGap by firing up a console app that just created a barebones PhoneGap app, creating an xcode project with a few standard iOS-like files and folders, but with a few files and folders more familiar to web developers as well. Steve was able to drop his web app straight into the PhoneGap directory structure, and with a couple of tweaks, could run the app inside the native container! This was the really impressive part of the demo.
Steve’s hot tips for developing a mobile app this way:
– Leave the iPhone simulator behind, rather develop in the browser for a quicker feedback loop.
– Writing PhoneGap plugins to access native functionality is easy, so don’t be shy.
– Use commercial artwork – paying for a few images and icons etc will save loads of time and make it look decent.
Over the 3 days of QCon I picked up that one popular theme was ‘micro-servicing’, or breaking up what have in the past been monolithic apps into smaller components:
Dan North talked about micro-servicing being a good way to enable separate development, deployment, etc of a bigger application.
Mike Hadlow said building micro-servicing joined via messaging is a good way to build stable, scalable, robust SOA’s.
Paul Downey of GOV.UK said it was a useful way to get their stuff done, but must be done carefully: slice by app or business capability rather than by technical layer (ie. avoid a “data access” micro-service).
And now for something slightly different:
Last QCon I noticed a few recurring phrases that remind me that sometimes these conferences and the community in general can be quite an echo chamber! These aren’t technical topics but just grammatical turns of phrase that seem to catch on, which as an amateur linguist I’m always interested to observe. Last year there were at least 3 or 4 that I can’t remember now, but this year I only picked up one: “reason about”. A lot of people were talking about how easy or difficult something is ‘to reason about’!
QCon remains a really great conference, and I look forward to QCon London 2014!
We software developers have long fought against technical debt in our codebases in a variety of ways. Ultimately it seems we settled on doing a little bit at a time in an ongoing effort to slowly “pay off” what we have built up over months and on some projects, perhaps even years. Maybe you factor this into each iteration, with dedicated tasks that get their own story points score, or maybe the developers are instructed to enter a phase of “leaving everything better than when you found it”, relying on bugfixes and feature improvements to instigate updating the code to the latest agreed-on practices. Either way, that battle is won in a lot of environments (I’m an optimist), and that means you have two streams of development in your team:
1) Business-value development: This is stipulated, specified and analysed functional work that is prioritized by the business and offers direct value to your customers. This traverses your entire value stream: Analysis –> Development –> Testing –> Deployed to customer
2) Technical-value development: This is prioritized by the technical team and has indirect value to the customers by increasing the speed/quality of your development. It also effectively skips the business analysis part: Development –> Testing (for regressions) –> Deployed
But I think we lack a third stream. In the enterprise, where roles and functions are often clearly delineated, there is a certain type of development that is “lost” as it is not explicitly owned by either the business or technical teams. In my experience this type of development often centres around usability improvements and innovative feature ideas. The business won’t request work to make a particular page more “usable”, and often they don’t have the skills or know-how to point to more intricate problems that would result in a better user experience. Even if they do, it’s often difficult to justify the business value against other issues in the backlog of much more tangible benefit. From the technical side, developers are discouraged from suggesting improvements since they’ll have to convince the business to allow them time to develop a new feature or improve an existing one. Then again, the business would rather spend that time on what they perceive to be more urgent issues. Over the long term, without a product owner who understands and is passionate about the benefits of good web usability, the power of user feedback, integration with social media platforms, et al., the result is a product that is not well-rounded and is lacking a certain kind of polish.
That changed where I work a couple of years ago, with the introduction of 10% Innovation Time (similar to the oft-referenced Google 20% time, just halved ;-)). Innovation Time gives developers an opportunity to experiment on whatever they wish as long as the intent is to “benefit the company” in some way. So far this has usually taken shape in the form of separate applications automating back-office admin. But what about improving the existing product? Of course, a developer can’t just hack together a feature and commit it, since there are some necessary checks that feature developers usually go through – Does it make sense to have this new feature in? Is the value worth the development time/effort? etc. Essentially these questions are addressed by two things: firstly, the developer can hack on whatever he wants in his own Innovation Time (largely escaping the “Is this worth it?” question); and secondly, any innovation output that affects the product in a way an end-user can experience should be given the go-ahead by the Product Owner. So in essence this is a third stream of development that would, over time, enable developers with good ideas to get them into the product.
As far as User Experience is concerned, software teams these days are making more use of specialists in this area. For a team that currently produces a product with a lower-than-par user experience, I think it is just a matter of hiring a specialist (UX consultant, front-end engineer, whatever) and fitting him/her in at the right place in the value stream, ie. somewhere around development/design of the front-end of any particular feature. This specialism should not be an afterthought like testing/QA was in the software industry of several years ago. The UX person should be in on the conversation before any code is even written.
“Any fool can make things bigger, more complex, and more violent. It takes a touch of genius – and a lot of courage – to move in the opposite direction”
– Albert Einstein