
Wednesday, 18 February 2015

Superiority


Ever worked on a project where there was That One Guy (or Girl) who, whilst perhaps intelligent on an abstract level, always seemed to slow the overall progress of the wider team? There are a few reasons for that phenomenon, some of which I’m going to discuss below. First and foremost, though, this post is really just an excuse to talk about my favourite short story of all time, from which it takes its title - Superiority, by Arthur C Clarke.

__________________________________________________________________________________

You can read the full text of Superiority online (and I strongly recommend that you do.) It used to be required reading for engineering students at MIT. It describes an arms race in a war between two advanced technological societies, and the ultimate loss of that conflict by the side with the superior weaponry. You can read this and other equally insightful stories in the brilliant anthology The Collected Stories of Arthur C Clarke.



Nice camouflage job, there. Once we
hide it in our giant bag of Liquorice Allsorts
they'll never be able to spot it from the air

The story Superiority was influenced by Clarke’s own experience of working on the then-new (but reliable and proven) technology of radar, whilst facing an enemy in Nazi Germany that was producing ever-more spectacular and deadly Wunderwaffe, or “Wonder Weapons”, such as the V-2 rocket and jet aircraft. The theoretical science behind these weapons was sound, but the practical engineering application of those theoretical concepts was deeply flawed.

Meh. Could use a chequered pattern to really help it
blend in. Plus that big flame at the back makes it even
easier to spot from the air
Ultimately, projects like the V-2 formed the foundation for even more fantastic post-war achievements, such as the moon landings. But during a wartime period where efficiency, reliability and timely intervention using proven technologies were greater concerns than furthering scientific knowledge, it was the side with the lesser technology that prevailed.

And so it is with many real-life software engineering projects. Don’t get me wrong - innovation is not in itself a bad thing (if it were, I’d be out of a job.) And I love tinkering with new techniques in my spare time and discovering what works and what doesn’t. However, it is worth considering the distinction between Computing Science and Software Engineering. Specifically, the latter is the practical application of the former. 

Experimenting with new techniques must always involve assessing the opportunity cost of getting up to speed with them. Doing so requires both a consideration of how well-established and proven the technique itself is, and an honest appraisal of your own ability to get up to speed within a reasonable amount of time. A sufficient solution to a given problem that is available today will always be preferable to a perfect solution that may never come, or that arrives too late to be of any practical use.

Whenever you use a particular technique in a software engineering project, you should already have a pretty firm idea of why you are using it. You’ll know you have that if you’re able to express with confidence the percentage likelihood of it working, and the timescale within which you expect to be successful. As a professional software developer, no client is paying you to indulge your personal interests or private curiosities on solutions they are funding, and that they are going to rely upon to run their organisations and serve their customers in turn.

I’ve been working in technology for long enough now to have seen all kinds of fads come and go. Ideas, technologies, techniques and methodologies become flavour of the month for a while, and most eventually die out. Examples from .Net in recent years would include the way that C# has become more popular than VB as a language for leveraging the framework, and the way that ASP.Net MVC has rendered ASP.Net WebForms practically obsolete for new projects. This despite the fact that those technologies essentially solve the same problem.

Relatively recently, Single Page Applications (SPAs) have crept in, with all kinds of associated JavaScript frameworks like Angular, Knockout and Sammy facilitating them. Nobody seems to be talking about the elephant in the room, which is that SPAs actually allow MVC applications to be structured more like the old WebForms apps they replaced. Specifically, they move a lot of the UI interaction code back into the View. (A main criticism of WebForms had been that the code-behind of each ASPX page was a bad place to put interaction logic, since it rendered that logic untestable by automated unit testing techniques.)

There are other advantages to SPAs, such as bandwidth considerations for mobile client platforms, especially when combined with RESTful web services built using tools like WebAPI. But the point is that if you were to chase down every passing fad in .Net, or any other technology, like a dog pursuing a ball, you’d pretty soon end up in a mess. That’s why it pays to get experienced people involved in your project - people who’ve already made all the mistakes you would otherwise waste time repeating in the discovery of what works. And, crucially, what does not.

We all need to prototype from time to time, and I’m equally dubious of anyone that has one Golden Hammer that they attempt to use to bang in every single nail as I am of people that adopt every new technology that comes along just because it is new. They’re both pathologies at either end of the same scale. It’s important to get the balancing act right. Ultimately, only you can decide if you’re spending too much time learning from mistakes rather than utilising existing, proven skills - or too much time using technologies that you’re familiar with but that have become outdated, and for which better, proven solutions exist that you just need to skill up on. Allowing your otherwise brilliant team to spend so much time on frivolous experimentation that you get beaten by your competitors is like a team of PhDs and Nobel Prize winners being beaten by a team of amateurs with high school educations.

On a related matter, did you hear the one about the team of amateurs with high school educations that beat a team of PhDs led by a Nobel Laureate on the Manhattan Project? Part of the wider Manhattan Project involved using a device called a Calutron - basically a machine used to separate Uranium into its isotopes, concentrating the fissile Uranium-235. It didn’t do this in the most efficient way possible. It didn’t even do it using techniques that are standard today (e.g. because one of those techniques requires a working reactor, which wasn’t available in 1942). But it did work. Reliably. And using workers with only a basic grasp of the engineering application of the principles involved, not of the science behind the competing underlying theories of each possible alternative technique.



Several Calutrons were built. But the one that produced the most usable Uranium-235 was run by so-called ‘hill-billy’ women with high school educations from Tennessee. Another, run by a team of PhDs led by Nobel Prize-winning physicist Ernest Lawrence, lost a race with these young women to produce the most U-235. The basic problem was that the PhDs couldn’t resist chasing down every minor deviation from the expected outputs of their device.

No, that's Calculon. Please pay attention

A key test I’ve found as to whether you’re heading down the wrong path on a software engineering project is whether your developers are trusted (and trustworthy) enough to get on with their own tasks with minimal supervision. Whenever I’ve run teams, I’ve generally judged performance on measurable results, rather than on how closely a particular developer has adhered to the way I would have done things. If the solution presented works as specified, is maintainable, hasn’t taken an unreasonable amount of time to build, and, crucially, hasn’t broken anybody else’s work, then it’s generally a thumbs up from me.

One experience I had of things not working owing to the Superiority Effect mentioned in Arthur C Clarke’s story above came some years ago. I was contracting for a small startup that was using a particular implementation of an Agile methodology. They were using leading edge techniques that I benefited greatly from exposure to. However, there was one very frustrating habit within the team.

Each morning, the 10-or-so person team went round in a circle, each of us giving a one-to-two-minute update on what we had independently been working on in the past 24 hours. There were only two .Net developers, and I found that whenever I was giving my update, the other developer couldn’t help interrupting me to give ‘helpful’ advice about whatever topic I’d just mentioned. Often this wasn’t merely bad form for what was meant to be a quick scrum. It was also completely unnecessary, as often I was only mentioning the issue in question to say that it had now been solved, or was in hand and would be done within the next day or so. Because he never listened to a complete sentence all the way to the end, he only ever heard the description of any given issue and not the key fact that it was no longer a problem.

Ima let you finish your scrum update. After I've talked over you
for ten minutes first telling you how I would have done it and why
it's better than whatever you were thinking or actually did.

 
No matter how many times, privately and publicly, I said to the person concerned that it’d be better to wait until after my update to make any suggestions - and best of all to wait until help was sought - he was just too much of a “Mr Fixit” character to control his over-enthusiasm. He honestly couldn’t tell the difference between “here is a problem I am working on” and “here is a problem I need some help with.” He was a nice person other than that, and we remained friends after the project, but for that reason he was a very frustrating colleague to collaborate with.

I decided not to extend my involvement in that particular project, largely because the problem kept happening with no sign of improvement and eventually got much worse. I found that after scrums this guy would try and have hours-long conversations where he tried to take over my assigned responsibilities and ensure that I did them his way, instead of just trusting me to get on with my own work. I wished I could just get on with the job instead of constantly talking about getting on with the job.

Coupled with the lack of trust evident in this tedious behaviour, there was the significant fact that a lot of the opinions put forth in those long and unnecessary discussions were simply, obviously wrong when considered with even a basic amount of thought - which is why I'd discounted them in the first place.

E.g., one particular task involved re-ordering the 'layers' in a given object upon insertion of a new layer or deletion of an existing one. The implementation I chose was a linked list modelled as a self-referencing database table. This approach meant a maximum of two layers / database records would be affected by every CRUD operation (the one you inserted, and the one that came after it, if any, which was given a new parent). A suggestion put forth for an alternative approach was to add a database column storing an explicit number against each layer, indicating that layer's ordinal position in the list (1, 2, 3... etc). When it was pointed out that inserts would leave two layers with the same number, the suggestion was revised to adding gaps (100, 200, 300... etc) and running a 'renumbering' routine on a schedule at fixed points. The proposer of this alternative didn't say why he felt it was necessary to change the design, let alone why it was better than the initial solution, which was working. But he nonetheless insisted that his approach must be more appropriate because "he'd asked on the internet, and apparently that was the 'standard' way of solving that problem." Um, sure. Because people on the internet are never wrong.

"And that's totally how Normalisation works, some guy on the internet told me."
Said no software developer ever.



It didn't seem to occur to him that his suggested approach would involve re-numbering every layer after a particular layer, rather than just two as before. And when you eventually have millions of objects, each with their own set of layers, a 'renumbering routine' that both had to determine which objects needed updating and make the necessary updates would quickly become impractical. In the meantime, the database could still get into an inconsistent state if more layers were inserted between runs than the size of the gap you'd left between the arbitrary numbers associated with individual layers. No - you either did the necessary updates at the time of the CRUD operation, by rolling them into an all-or-nothing transaction that could be rolled back, or you got into a mess. Fast. His approach wasn't merely a solution looking for a problem. It was a naive, obvious flaw in critical thinking - and an avoidable breach of the ACID principles - looking for an opportunity to wreak havoc. And, just to recap in case your brain blocked it out to preserve your sanity, all this was for precisely no gain, since the problem had already been solved by the existing design.
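By way of illustration, here's a minimal sketch of that linked-list design. The table and column names are my own hypothetical reconstruction, not the project's actual schema; the point is simply that an insert touches at most two rows, and that wrapping both statements in a transaction keeps the list consistent without any renumbering routine:

// Each row in Layers records which layer it follows (FollowsId), forming a
// linked list per object. An insert touches at most two rows, all-or-nothing.
using System;
using System.Data.SqlClient;

public static class LayerRepository
{
    public static void InsertAfter(string connectionString, int objectId,
                                   int? predecessorId, string layerName)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                // 1. Insert the new layer, pointing it at its predecessor
                //    (a NULL FollowsId means it is the new head of the list).
                var insert = new SqlCommand(
                    @"INSERT INTO Layers (ObjectId, FollowsId, Name)
                      OUTPUT INSERTED.Id
                      VALUES (@objectId, @followsId, @name)", conn, tx);
                insert.Parameters.AddWithValue("@objectId", objectId);
                insert.Parameters.AddWithValue("@followsId",
                    (object)predecessorId ?? DBNull.Value);
                insert.Parameters.AddWithValue("@name", layerName);
                int newId = (int)insert.ExecuteScalar();

                // 2. Re-point the old successor (if any) at the new layer.
                var relink = new SqlCommand(
                    @"UPDATE Layers SET FollowsId = @newId
                      WHERE ObjectId = @objectId AND Id <> @newId
                        AND ((@followsId IS NULL AND FollowsId IS NULL)
                             OR FollowsId = @followsId)", conn, tx);
                relink.Parameters.AddWithValue("@newId", newId);
                relink.Parameters.AddWithValue("@objectId", objectId);
                relink.Parameters.AddWithValue("@followsId",
                    (object)predecessorId ?? DBNull.Value);
                relink.ExecuteNonQuery();

                tx.Commit(); // either both rows change, or neither does
            }
        }
    }
}

Deletion is symmetrical: re-point the deleted layer's successor at its predecessor, then remove the row - still two rows, still one transaction.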

Second-guessing and attempting to micromanage ground-level implementation decisions like this isn’t merely a professional frustration for a contractor. It’s also a significant and unacceptable business risk. Unlike employees, contractors financially guarantee our work via Professional Indemnity and Public Liability Insurance. That means that if the client loses money through any mistake we make, or if a member of the public gets hurt as a result of using the software products we help build by following our advice, we and they are fully financially covered for that eventuality. It’s similar to the way car drivers are insured - you hope you won’t need it, but it’s a legal requirement to be covered. Trying to tell a contractor not just what result you want (which is perfectly fine), but specifically how you want them to do their job (which is not OK), is akin to grabbing the wheel from an otherwise insured driver. Any resulting accident will invalidate their insurance, and you will likely become personally liable for any losses rather than the driver’s insurer.

On my last day on that particular project, I remember we had a “fire drill” exercise where a new instance of the application was rolled out as it would be for any new client the product was sold to. It was a cloud-hosted solution, and there were several discrete automated steps involved in getting a customer up and running with their own single-tenancy instance of same. Each of the team worked on their own part of the solution during the exercise.

My part (setting up the underlying database and producing a named cloud instance of the web app then deploying the solution to same) was over in the opening minutes of the exercise. The fully-automated process had been well-tested and debugged by one of the test team and myself during the preceding six weeks, and we had a high degree of confidence in using that part of the solution in anger.

Other bespoke parts of the solution went equally flawlessly. Specialist artists produced customised artefacts over the period of an hour or so. Animators and designers integrated those artefacts into an interactive demo. One bit of the system - setting up new users and permissions - didn’t work, however. Can you guess who was responsible for that particular part of the ensemble? Yup, you’ve guessed it: Mr Fixit himself. I couldn’t help thinking that if he’d only focused on his own tasks, rather than constantly trying to ‘fix’ my approach to mine, maybe his part of the system wouldn’t have failed at the cost of everyone else’s efforts.

As I left the office that day for the final time, the last sentence I heard was “but it doesn’t prove anything, the idea was still technically sound!” Nobody responded. We were all late for my goodbye lunch by that time, and we actually had to find an alternative restaurant, since the one we’d originally selected was by then full - the “fire drill” had overrun to the point where we’d all have been burnt to a crisp in a real emergency. Had an actual customer needed to use the product that day, they wouldn’t have been able to.


I sent the team a copy of “Superiority” as a leaving thought. I hoped they’d learn from the experience. It may only take one misguided individual to engage in inappropriate tinkering and experimentation. But it takes a whole team and its management to tolerate a culture of constant prototyping, meddling in other people's assigned tasks, and direction changes to the point of failure of the overall objective.

Unfortunately, the VCs that had been supporting the company brought their involvement to an end only six weeks after I left. By that point I’d already moved on to somewhere that listened to my advice more, and where I was consequently able to make more of a difference. The rest of the team all got new jobs at other places too. They were basically good guys that I liked a lot as individuals, and that had a lot of individual skills. Like those Calutron PhDs, it was just that as a collective they weren’t focused enough on delivering provable, stable results that mattered in the moment.

Sunday, 14 December 2014

Product Review - LED Lenser LED7299R H14R.2 Rechargeable Head Torch




I bought one of these for running during the Winter months, when you inevitably find yourself having to make some runs in the dark or twilight.

There are plenty of options out there - ranging from an offering at £5 from Tesco, right the way through to Hollis canister diving head torches at £800. Obviously, there’s a trade-off between getting what you pay for, choosing a light that’s suitable to your purpose, and not spending more than you need to.

After checking out other reviews for several different options, I opted for the LED Lenser LED7299R H14R.2 Rechargeable Head Torch. You can spend anything from £90 to £130 depending on where and when you choose to buy this model. There’s also a similar-but-cheaper model in the same range that isn’t rechargeable. (No reason that you couldn’t buy separate rechargeable batteries of course.) However, I liked the convenience of having the recharging unit built in. It can alternatively take four conventional AA batteries, which you can use as a backup.

For running, it was important that the torch had enough light output to be able to see in pitch darkness on unlit trails with occasional tree cover that blocks ambient light. It was also important that it was comfortable to run with. A lot of runners recommended the Petzl range of head torches. I can see why. They’re a lot lighter than the one I chose (whilst at the same time being a lot dimmer - typically about a third to a quarter of the light output). My main criticism of the LED Lenser H14 R2 is that it can feel a bit hard and uncomfortable on your head, particularly the front torch holder. A softer, more padded material behind the lamp would have made it much more usable. As is, it’s more comfortable with a beanie hat underneath, but I wouldn’t fancy trying to run with it overnight in the Summer, when a hat would make you overheat.

In terms of light output, it was difficult to find reliable information. The minimum light output was fairly consistently reported by various sources to be 60 Lumens. The product box and the site where I bought it both say the maximum output is 850 Lumens; other sources quoted figures as low as 260 to 350 Lumens. There appears, therefore, to be some confusion about what is meant by "maximum". Namely, the torch has a 'boost' setting that increases brightness for 10 seconds at a time, but there is a second definition, which is the maximum brightness the torch is able to maintain consistently. I suspect this distinction accounts for many of the differences reported by different sources.

60 Lumens is about as good as the majority of the Petzl range. The brightest setting for the H14 R2, whatever its real value in Lumens, is a very bright light that is uncomfortable to look at directly. The very highest setting (the "boost" setting) only stays on for 10 seconds at a time. Most of the rest of the time, I used it at the highest 'stable' setting.

On that highest constant-current setting, the light can be diffused over an area about 5m wide and 10m deep directly in front of you. You can also elect to have a narrower but more intense beam. The specs say it will project light up to about 260m. I found that not to be the case, though I did stick to the “wide and bright” setting throughout my run. Perhaps the boost setting, when combined with the narrowest beam, would momentarily illuminate the quoted 260m distance for 10 seconds at a time; I didn't test that, because such brief and narrow momentary brightness isn't relevant for my use case or many others I can imagine. I did test the range on the maximum consistent setting combined with a wide beam when I returned to my car. I found that whilst that setting is quite good enough for running/walking in the pitch dark, allowing you to see what's immediately in front of you, the light didn’t even make it across to the trees at the far end of the 100m or so car park I was in. I’ll try it again on the “narrow beam, temporary boost” setting during my next night run. I suspect that the specs are technically correct and that objects can be illuminated at that distance, albeit briefly, and only with a beam that’s about 1m wide. It's for the reader to decide whether that performance meets their actual needs.

I found the light was good enough for my use case. I ran during astronomical twilight (the third-darkest phase of the night; pretty much pitch black for the purposes of this test). Without the torch, I would just about have been able to see my hand in front of my face in open ground, but not the path I was running on. On stretches covered by trees, it'd have been completely dark. As it was, I missed a pothole in the same forested location (once on the way out, and once on the way back). I couldn’t see how I’d done this at the time, as I felt I’d been seeing the path well enough to run at a normal pace. However, I stumbled at the exact same spot the very next day, during daylight. So it appears it was just a particularly well-camouflaged pothole, rather than a failing of the torch.

The final lighting feature of note in this torch is the rear red light that you can turn on to allow traffic and cyclists to see you more easily. I thought that was a nice little safety feature, although there's no real way to tell if it's on or off once you have the torch on, and the button is very sensitive. Other non-lighting features include a battery-power indicator (the rear LED glows red, amber or green for five seconds when you switch it on, to let you know how charged up the battery is). I've used mine for less than an hour so far, and it's still in the green from its first charge. I'll update this review with how long a full charge lasts when I've gone through a full cycle. Lastly, you can detach the battery pack (and the front torch itself if you want) and wear them as a belt attachment. I personally prefer the light being cast wherever I'm looking, and didn't find the battery pack intrusive where it was, so haven't used this option.

The last point I want to note about this product isn't about the torch itself. It's about the user manual that comes with it. For a top-of-the-range piece of kit, the quality of the instruction manual translation leaves a lot to be desired. It's some of the worst Deutsch-glish I've ever seen. Take this excerpt for example:


It's so bad that at first I thought I might have been sent a fake item, since I couldn't imagine any self-respecting manufacturer allowing such a poorly-translated document to accompany their product. But the bona fides of the supplier I used (ffx.co.uk) checked out. And, checking with LED Lenser's own website, it seems that they've just done a very bad job of translating the user manual of an otherwise very good product. You can read the full manual (downloaded from LED Lenser's US site) for yourself here.


All-in-all, I’m glad I bought this piece of kit. It’s good enough for what I need it for. The head harness could be a little more comfortable, but it’s very usable for its intended purpose nonetheless. I feel a Petzl and other cheaper options would probably not have been bright enough for what I need. And other, more expensive options would have been brighter still, but wouldn’t have been designed to be worn out of the water.

Not a bad purchase : 7/10

Sunday, 21 September 2014

Amazon deletes negative feedback that it doesn’t agree with - how can anyone trust a company that behaves that way?





Amazon has been lowering customer service standards for quite a while. Despite being a company that in the past has wisely avoided self-harming behaviour like spamming and ripping off customers, lately they seem to have Jumped The Shark. My recent experience with them demonstrates a Google-level degree of cynicism in their dealings with customers.

This month I purchased a couple of running tops from SportsShoes.com. This is SportsShoes.com’s Amazon storefront. You may, as I was, be impressed by the 4.8 out of 5 stars average review that other consumers had apparently given this vendor. You may also be particularly surprised to compare it with this 2.1 out of 5 stars rating from another popular independent review site. (Something I really wish I had done before foolishly taking Amazon’s own ratings at face value.)

How did those ‘customer ratings’ get to be so different?


After receiving my running tops, in short order I received the following unsolicited email from SpamShoes (as I now think of them) -




OK, as First World Problems go, it’s right up there. But, avoiding annoying spam like the above begging for feedback and further business is one of the main reasons I’ve used Amazon in the past. Amazon has a setting in their user options that allows you to opt in to receiving reminders about leaving feedback, if you want to. Like most people, I have that option set not to bother me. I don’t use Amazon to help people build their business. I use it as a consumer for my own convenience. Period. So, when an individual vendor decides to ignore my preference and contact me anyway, that rankles.

So, I sent a response back to the vendor saying that I didn’t appreciate their spam, and pointing out that Amazon themselves will send us an email reminder to leave feedback if we have agreed to receive one. The vendor doesn’t need to know what my preference about receiving feedback reminders is - only that I have one, and that I would already have received a reminder if I’d asked for one. This is the response I received:


Thank you for your email,

I am very sorry that you feel aggrieved by our email, this is an automated email sent to all our customers. It's a courtesy follow up email to our customers mainly to say thank you for ordering and we hope you're happy with the purchase. But it's also a chance for any customers who may have had a problem to contact us so that we can resolve this. We are not begging for your feedback, it's just a polite reminder for you to leave some if you wish. The setting you refer to on your buyer profile, I can only assume to be for Amazon fulfilled orders only as we are unaware of any settings on your profile.

We received your negative feedback for your order, however contacted Amazon regarding this as we felt it was unfair as no spam emails have been sent. They have agreed with us, and removed the comment as they have acknowledged no spam emails were sent.

Finally, I can assure you we're a very professional vendor with a vast customer base. As I'm sure you can see from our feedback ratings, we generally do a good job which is reflected within the percentages. We'll continue to provide the service we are currently on both Amazon and our website.

Please be assured, you'll receive no further emails from our company.

Kind Regards,
Adam


Spammer doesn’t want to recognise they're a spammer shocker. Those perpetrating the act rarely choose to recognise they're doing anything wrong. "No" apparently doesn't mean "no" for these people; it means you must have misunderstood their intentions. Whilst they undoubtedly know deep down that they're behaving badly, they completely fail to recognise how pathological and self-defeating their behaviour is. You made a purchase from them once, so they feel entitled to invade your inbox whenever they like. They're the date rapists of the marketing world. It's no wonder they need to pay a third party like Amazon to be able to do something as simple as communicate with potential customers.
 
This alone would not keep me up nights - plenty of businesses do dumb things that alienate their customers, without ever recognising how dumb or self-defeating they are. (Even when, as in this particular case, their business model is so fundamentally flawed that they actually need to sell their goods through a third party website, the only benefit of which is that it allows consumers to withhold their real address from the vendor!)

The part that does surprise me - and I believe should surprise any consumer that uses Amazon - is the part where the vendor boasts about having been able to easily remove my negative feedback, merely by asking Amazon to delete it.

Here is Amazon’s advice to Vendors about when feedback can be deleted. My review (which I don’t have a copy of since it was deleted) didn’t breach any of these rules. It merely stated my opinion that I had received unsolicited email from the vendor that I considered to be spam, and that as a consequence I was glad I hadn’t exposed my real address to them.

Looking around the internet, it seems I’m not the only one that’s had a problem with their reviews and feedback being deleted. (There are plenty of other examples of negative reviews of both vendors and products that you can Google on your own if you wish.) In my case, I contacted both Amazon Customer Services and Amazon CEO Jeff Bezos to ask what their policy actually is about deleting reviews they merely disagree with (as opposed to any that breach their published rules). In both cases, I specifically asked which of Amazon’s feedback guidelines my feedback had breached - and, if none, why it was deleted anyway. Customer Services merely restated that the vendor didn’t agree with my review. In Jeff’s case, there was no response at all.

So, I’m forced to conclude that Amazon’s customer feedback ratings are nothing more than a sham. If the vendor in question (SportsShoes.com) hadn’t been dumb enough to send me further unsolicited email bragging about how easily Amazon had agreed to remove feedback they didn’t like, I wouldn’t even know the review had been deleted, since Amazon themselves didn’t have the courtesy to tell me.

So, next time you’re perusing Amazon, have a think about whether that ostensibly-5-star vendor you’re reading other consumers’ opinions about might really be a 2-star Del Boy outfit that’s just playing the system. And next time you’re considering whether to leave feedback about one of your purchases, positive or negative, to help other consumers, stop to think whether you’re contributing to an honest feedback system that actually helps fellow consumers make better purchasing decisions, or merely lending validity to an artificially-whitewashed feedback system that has no credibility whatsoever.

Thursday, 20 February 2014

Scalability, Performance and Database Clustering.


What the Exxon Valdez and database clusters have in common


I was recently asked to comment on the proposed design for a project by a prospective new customer. The project involved a high number of simultaneous users, contributing small amounts of data each, and was to be hosted in the Cloud. The exact details were To Be Decided, but Amazon EC2 and MySQL were floated as likely candidates for the hosting and RDBMS components. (Although my ultimate recommendations would have at least considered using SQL Azure instead, given some of the time constraints and other technologies involved that would have dovetailed into the wider solution.)

The discussion got me thinking about the topic of database clustering, as it relates to performance and scalability concerns. During the course of the discussion of the above project with the client’s Technical Director, it transpired that, despite the organisation concerned having used clustering in an attempt to improve performance previously, that approach had failed.

The above discussion didn’t surprise me. It’s a misunderstanding I’ve witnessed a number of times, whereby people confuse the benefit that database clustering actually bestows. In short, people often believe that such a design aids scalability and performance. Unfortunately, this isn’t the case. What such an architecture actually provides is increased reliability, not performance. (It’s actually less performant than a standalone database, since any CRUD operations need to be replicated out to the duplicate databases.) Which is to say that if one database goes down, another is in place to quickly take over and keep processing transactions until the failed server can be brought back online.

The analogy I usually give people when discussing the benefits and limitations of clustering is that it’s a bit like the debate about double hulls on oil tankers. As you may know, after the Exxon Valdez disaster the US Government brought in legislation that stated every new oil tanker built for use in US ports was to be constructed with double hulls. The aim was admirable enough: to prevent such an ecological disaster from ever happening again. However, it was also a political knee-jerk reaction of the worst kind. Well intentioned, but not based on measurable facts.

Of perhaps most relevance to the topic was the small fact that those parts of the Exxon Valdez that were punctured were in fact double-hulled (the ship was punctured on its underside, and it was double-hulled on that surface). Added to this is the fact that a double hull design makes ships less stable, so they’ll be that little bit more likely to collide with obstacles that more manoeuvrable designs can avoid. And, just as in database clustering, the added complexity involved actually reduces capacity. (In the case of ships, the inner hull is smaller; in databases, the extra replication required means fewer transactions can be processed in the same amount of time with the same processing power.)

As with all things, the devil is in the details. You can design clustered solutions to minimise the impact of replication (e.g., if you make sure the clustered elements of your schema only ever do INSERTs, the performance hit will be almost negligible). But many people just assume that clustering in itself will automagically increase performance, and it’s that misconception that leads to most failed designs.


I’ve been involved in a couple of projects that involved either large amounts of data in one transaction impacting on a replicated database, or large numbers of smaller individual transactions being conducted by simultaneous users. In neither case, in my experience, was clustering a good solution to the design challenges faced.

The first project I have as a point of reference was one I worked on back in 2007, that involved a business intelligence application that collected around a million items of data a month via a userbase of 400 or so. I was the lead developer on that 7-person team, and so had complete control over the design chosen. I also had the advantage of having at my disposal one of the finest technical teams I’ve ever worked with.

The system involved a SQL Server database that was used by around 30 back office staff, OLAP cubes being built overnight for BI analysis, and certain sub-sections of the schema being replicated out to users that accessed the system via PDAs over GPRS (which of course will have been replaced by 3G / 4G now). The PDA users represented the bulk of those 400 users of the system.

The design we settled upon was one that traded off normalisation and database size for the least impact on those parts of the schema that needed to be replicated out to the PDAs. So, CRUD updates made in the back office system were only transferred to near-identical, read-only tables used by the PDAs once an hour (this could be fine-controlled during actual use to aid performance or to speed up propagation of information as required). This approach meant that the affected tables had fewer sequential CRUD operations to carry out whenever the remote users synched over their low-bandwidth connections. And if they were out of range of connectivity altogether, their device still worked, using on-board, read-only copies of the back office data required.

The second main consideration in the design involved a large data import task that happened once every six weeks. One of my developers produced a solution that was algorithmically sound, but that quickly reached the limitations of what an ORM-driven approach can do. In short, it took several hours to run, grinding through thousands of individual DELETE, INSERT and UPDATE statements. And if any consistency errors were found in the data to be imported (not an uncommon occurrence), the whole process needed to be gone through again, and again, until eventually it ran without hiccups. It wasn’t uncommon for it to take a skilled DBA 24 hours to cleanse the data and complete the import task successfully. Meanwhile, the efficiency of those replicated parts of the schema used by the PDAs would be taking a battering. A better approach was needed.

In the end, I opted for using SQL Server’s XML data type to pass the bulk upload data into a stored procedure in a single transaction. Inside the procedure, wrapped in a reversible TRANSACTION, just those parts of the data that represented actual changes were updated. (E.g., it wasn’t uncommon in the imported data to have a DELETE instruction, followed by an INSERT instruction that inserted exactly the same data; the stored proc was smart enough to deal with that and only make those changes that affected the net state of the system). I designed the stored proc so that any errors would cause the process to be rolled back, and the specific nature of the error to be reported via the UI. The improved process ran in under a second, and no longer required the supervision of a DBA. Quite a difference from 24 hours.
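As a rough sketch of that pattern (the procedure name, schema and XML shape below are my own hypothetical reconstruction, not the original code): the client bundles the whole import into one XML document and makes a single stored-procedure call. Inside the procedure, the xml type's .nodes() method can shred the batch so that only the net changes are applied, all within one TRANSACTION:

// Hypothetical sketch of the single-round-trip XML import. The whole batch
// travels as one XML parameter; the stored procedure applies only the net
// changes inside its own TRANSACTION, so any error rolls everything back.
using System.Data;
using System.Data.SqlClient;
using System.Xml.Linq;

public static class BulkImporter
{
    public static void Import(string connectionString, XElement batch)
    {
        // batch looks like:
        // <rows><row op="DELETE" id="1"/><row op="INSERT" id="1" value="..."/></rows>
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.ImportBatch", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@batch", SqlDbType.Xml).Value = batch.ToString();
            conn.Open();
            cmd.ExecuteNonQuery(); // one call, one transaction, one error report
        }
    }
}

Because a DELETE followed by an identical INSERT cancels out inside the procedure, only genuine net changes ever touch the replicated tables - which is what took the import from 24 supervised hours to under a second.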

The second project that informs my views of clustered database designs was one that I wasn’t the design authority on. In this case, I was just using the database(s) for some other purpose. Prior to my involvement, a SQL Server cluster involving three instances of the database was set up, and kept in sync. The solution was designed for use by a vendor of tickets for all sorts of events, including popular rock concerts. It wasn’t an uncommon occurrence for the tickets to go on sale, and for an allocation of many thousands to be sold out in literally ten seconds flat, as lots of fans (and I’m sure ticket touts too) sat feverishly pressing F5, waiting for the frenzy to start. (And sometimes, if the concert organiser got their price point wrong, you’d find that only a few tickets were sold for an over-priced event, but that’s another story!)

In the case of this design, I never did see the failover capabilities come into play. Which is to say that each of the three SQL Server instances that replicated the same data for reliability reasons all stayed up all of the time. I had a feeling that if one ever went down for reasons of load, however, it wouldn’t have been long before the others would have suffered the same fate. And since it was an on-premise deployment rather than being cloud-based, something like a power cut would have stopped the show dead.

It’s not that common for hardware to fail just because a high number of requests are being made simultaneously. All that will happen is that some users won’t get through (and you as the site owner will never know that was the case). It’s not as if the server will shut down in shock. Even the recent low-tech attacks on large online retailers like Amazon using amateur tools like LOIC didn’t damage any critical infrastructure. At most, such conditions can saturate traffic for a short while. And often they don’t achieve even that much.

As a final point, I’d note that there are far greater concerns when designing an authenticated, public-facing system, such as CSRF vulnerabilities. Any attempt to address performance concerns by using clustering will inevitably adversely affect those security concerns, because commonly-accepted solutions to same typically rely on data being reliably saveable and retrievable across short time frames (rather than getting in sync eventually, as most clustering solutions allow for).

So, in summary, whilst there’s a place for database clustering for reasons of reliability, my earnest advice to anyone considering that design for reasons of performance or scalability is to reconsider. There are usually changes you can make to your database schema itself that will have the same or better impact on the amount of data you can cope with in a short timeframe, and on the impact that data will have on your wider design. Don’t end up like Fry from Futurama, lamenting how your design might have worked had you only used (n+1) hulls/servers rather than n:


Tuesday, 16 July 2013

Tools for Assessing Software Developers

It’s been a while since I last wrote on the subject of how to hire great software developers and weed out any applicants that aren’t experienced enough for the more senior positions within your team. Given the advent of new tools that are available to conduct such interviews, I felt it was worth updating my previous advice on the subject.

Skype is probably the single biggest game-changer in technical recruiting in recent years. Particularly if distance is an issue, using Skype to conduct interviews is a no-brainer.

Previously, phone screens were the de facto best way of carrying out an initial sift of shortlisted candidates. And, to be honest, they were never that good a predictive indicator. What’s different about Skype is that, provided the candidate in question has an IDE at home (and most experienced developers do), you can use it to quickly screen candidates’ coding ability. There’s nothing like seeing someone actually using an IDE right from your very first ‘meeting’ to get a feel for whether the experience they profess to have on their CV actually translates into meaningful skills that they’re capable of applying to realistic business problems.

Skype allows you and the candidate to see one another. For the hirer, that enables you to get feedback from any non-verbal cues about their interest in the job and aptitude for same. It also allows you to screen-share, so you can see what they’re typing in real time in their IDE. In those respects, Skype is even better than trying to conduct a similar process in person, because you don’t need to crowd around a laptop screen or use a projector to be able to see them at work.

So, by all means don’t rule any interesting CVs out on the mere grounds that the applicant doesn’t have a webcam, a development setup at home, or a fast enough internet connection to facilitate a video call. But if they do have those assets available it makes it much easier to confirm their ability in a matter of minutes, before either party has invested any great amount of time in the process. 



The second biggest innovation in recent years, in my opinion, is Github. It’s always been desirable for candidates to provide code samples as a means of demonstrating their skill. However, previously you could never be sure that any work submitted was a candidate's own. Most candidates are honest. Just occasionally, however, you’d identify someone that had provided an impressive ‘code sample’, but who it later transpired couldn’t programme a tenner out of a cash machine. Wherever they had plagiarised such samples from, it was clear that they didn't actually understand them themselves. (Such antics are quite probably how this guy here got his job.) It’s a waste of both of your time if you only discover this fact when it comes to sitting down in front of a laptop at interview and you ask the candidate to take you through their solution, only to find they can’t explain the first thing about how it works or why certain design choices were made.

Github aids candidates’ credibility by being a freely-available online source control solution that verifiably identifies the authors of any content submitted. Not only can you freely download any complete solutions that have been placed there, but you can see the individual check-ins that went into producing each solution, and the thought processes indicated by the comments associated with same. If you know what you’re looking at, those fine details tell you much more about a candidate than a mere CV full of buzzwords and all the glowing references in the world ever could. And unlike copying whole solutions you didn't write yourself, forging a history of the individual check-ins that go into making up a complete solution is all but impossible.

With Github, you can also confirm a demo project’s creation date. This is important. Do you ever get the impression that candidates’ CVs are merely re-wordings of your job spec? This is in some ways understandable, and arises from the fact that the standard advice jobseekers are given is to tailor their CVs to highlight relevant experience. But still, as a hiring manager you sometimes would prefer to see what a candidate felt their own strengths were, before they knew what you were actually looking for. Github gives you that insight. If you’re looking for someone that has experience in Technology ‘X’, being able to see that they’ve completed a project using that technology some months before your particular requirement even came up is a pretty convincing demonstration that the candidate actually does know what they’re talking about when it comes to the subject concerned*.

(* That said, outside of specialist contracting roles, where you do expect new hires to hit the ground running from day 1, hiring software developers should rarely if ever merely be about hiring a particular skillset. It’s always better to instead hire for aptitude and attitude, and train for skill when you need to. Because new technologies come up all the time, and it’s no good hiring one-trick ponies that are incapable of keeping up with constantly-emerging technologies. Or, worse still, people that may be gifted as individuals but whose personality problems render them unsuitable for teamwork. You can teach people with the right aptitude and temperament almost any technical skill they need to know. The best ones will be capable of constantly improving themselves. But you can’t teach them not to try and use their one golden hammer to solve every single problem they come across. And you can’t teach them not to be an arrogant control freak that alienates their peers.)





The above are great ways to identify talent. That said, I know from working with a great many talented software developers over the years that a lot of them don’t have the time to work on open source projects on Github whilst they’re fitting a family life around being great assets to their existing employer. And some of them live in places where the internet connection is slow, making Skype a difficult option.

So, for people for whom Skype and Github aren’t options, there is a Plan ‘B’ you can use. A less-preferable secondary approach that also works is to conduct an initial phone screen using a stock list of questions. I’m loath to suggest an undue correlation between merely knowing the answers to some coding trivia questions and actual meaningful ability as a software developer. One is merely knowledge; the other is a demonstration of actual intelligence. However, there are just some basic things that you should know about any language or technology you profess to be proficient in, and that knowledge can be used as a baseline check if need be.

E.g., for a junior level C# developer, I’d expect them to know:

  • Q. What are the scopes you may use to limit Field/Property visibility, and to what extent do they make these aspects of a class visible?

    A. Public, Private, Protected, Internal and Protected Internal. (There's a short snippet illustrating these below.)
    (NB: I wouldn’t fault anyone for failing to name that last as a distinct scope in its own right; its visibility is the combination of that afforded by ‘Protected’ and ‘Internal’.)
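As a minimal illustration of those modifiers (my own snippet, not part of the original question list):

// Access modifiers in C#: who can see what.
public class Account
{
    public decimal Balance { get; set; }              // visible to all code
    private decimal _fees;                            // this class only
    protected decimal Limit { get; set; }             // this class and its subclasses
    internal decimal Rate { get; set; }               // any code in the same assembly
    protected internal decimal Bonus { get; set; }    // subclasses OR same assembly
}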

The key thing is that there are no trick questions here that would require knowledge of obscure parts of the .Net framework. Candidates may or may not happen to have used certain discrete parts of the thousands of types spread across the .Net Framework's namespaces, but good developers could easily look up and utilise any part of the Framework if they needed to, with only a couple of hours' research. Asking about the features of a specific namespace is therefore pretty meaningless. The questions above instead just concern basic, core features of the C# language. Anyone that has used C# at all should reasonably be expected to be aware of them.

Questions like these don’t help you identify whether someone is a great developer or not. Seeing how candidates write actual code using a real IDE is the only thing that enables you to do that. These questions are purely intended as a baseline negative check to help you identify any manifestly-unqualified candidates where the other preferred means of confirming ability mentioned earlier are unavailable.

For more senior C# developers, I’d expect them to know more advanced, but still core, features of the language. E.g.:


For a Lead Developer or Architect, I’d expect them to be able to speak meaningfully about:

  •  Can you describe some Design Patterns? (E.g., please explain what a Singleton is. What is the Decorator pattern? Tell me about a time when you used them.)


  • What are your thoughts on Inversion of Control / Dependency Injection? What about Test Driven Development? Do you always use them on every solution?* If not, what criteria do you use when deciding whether to expend the additional effort? What are the limitations of IoC? Which of the 22-plus frameworks that presently exist have you encountered on live projects?
    (* FWIW, I personally believe that using these presently-fashionable methodologies and techniques on every single project is about as misguided as never using them.)


  •  What is an abstract class?*
    (* The observant will notice that this last question is the same one used for junior developers. It’s amazing how many Architects can recite high-level summaries of chapters from the Gang of Four, but have lost touch with how coding actually works in the trenches. It gets more difficult as your career develops to keep in touch with the front line, but my personal belief is that you can only lead great developers if you actually share their pain by hitting a keyboard yourself once in a while. You certainly shouldn’t exhibit any signs of Hero Syndrome or micro-managerial tendencies by needing to be involved in writing every line of code yourself, and you shouldn’t try to do developers’ thinking for them. You need to entrust and empower those you lead by allowing them the freedom to get on with any tasks you delegate to them using their own skill. However, it is important to implement a particular feature yourself every so often, purely to keep your own skills current in an ever-changing technical landscape. Otherwise you lose touch with emergent technologies. A clear sign that you aren’t getting enough personal keyboard time is when you begin to lose the basic knowledge that even junior developers working under you are expected to possess.)
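For the patterns question above, by way of illustration, here's my own sketch of the sort of minimal answer that demonstrates understanding rather than recitation - a lazy, thread-safe Singleton in C#:

using System;

// Lazy<T> defers construction until first use and handles the locking
// internally - less error-prone than hand-rolled double-checked locking.
public sealed class Configuration
{
    private static readonly Lazy<Configuration> _instance =
        new Lazy<Configuration>(() => new Configuration());

    public static Configuration Instance
    {
        get { return _instance.Value; }
    }

    private Configuration() { } // private ctor: no other instances possible
}

The better candidates will also volunteer when not to use it - e.g., that a Singleton is effectively global state, which is exactly what makes code hard to unit test, neatly linking back to the IoC/DI question.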

For any one topic that I consider myself experienced enough to assess others in, I have a list of about 200 such questions that represent basic knowledge I’d expect most people to know at each level. During an initial phone screen, selecting two or three such questions as baseline checks is the next best alternative to using Skype or Github to assess whether there’s any potential.


I wouldn’t lose sleep over anyone getting any one individual question wrong. (Especially if they’re honest enough to admit they don’t know a particular fact. The very best people show awareness of things they don’t presently know, whilst less skilled individuals are often paradoxically unaware of their own current limitations. That inability to perceive their own present weaknesses leads to them failing to ever improve. This is known as the Dunning-Kruger Effect.) I still prefer actually seeing a person code via Skype, Github or even YouTube over using coding trivia as an initial screening tool, but a phone screen using basic questions to eliminate candidates is the next best option for the initial sift of the candidates that invariably apply to almost any openly-advertised technical position. You can apologise to the ones that find it ridiculously easy afterwards, and explain the reasoning behind your using such simple baseline checks.

Skype and Github are better options because they represent positive checks for ability, whilst asking baseline questions is merely a negative check to identify the absence of basic knowledge. However, if a candidate can’t answer any of the simple baseline questions appropriate to their level of seniority, that’s clearly someone that you won’t take forward to interview.

For anyone that attends an in-person interview, I’d always recommend seeing them code using an actual IDE. (If you’ve seen them do so via Skype previously, obviously you can skip this step). The best way to do this is to attach a projector to a laptop that’s loaded up with a full IDE and an internet connection, and watch them work. I once had a hiring manager tell me that they used pen and paper coding exercises instead “because they didn’t want the candidate to have access to Intellisense, and all those other ‘cheats’ that a full IDE provides”. No, I don’t understand the logic behind that one either. I found myself wondering if they’d ask a prospective master carpenter to bang in nails wearing a blindfold, and decide from how swollen their thumbs were afterwards which was the ‘best’ at their craft.



Just like when you’re using Skype, you can record candidates’ efforts to build a quick solution using free tools like CamStudio recorder if you like. That approach can be very useful if you work in a large organisation and have a wider selection committee that will need to review the interview later on. It can also feel a little like an unfriendly interrogation, though, so you need to decide what’s right for your own organisational culture. Personally, I’d only record a coding test if there were a need to show the recording to other members of your recruitment panel afterwards. And I would explain to the candidate that the purpose was to save them having to demonstrate their ability multiple times to different people.

It’s important to make clear that the problem you’re asking them to solve constitutes realistic work, but not real work on an actual business problem. The first activity is a meaningful test of their skill. The second would merely represent unpaid work, and that would risk making you look like a freeloader. One problem I’ve seen used in the past and that I thought was a pretty fair baseline check read something like this:

“Design a system that allows you to model shapes as objects. Each shape should be capable of outputting a text description of itself. The description given in each case will be:

‘I am a _________. I have ____ sides, and ____ corners. My colour is ______. Their lengths are _______.’

There will be appropriate Properties in any classes you use to model such shapes to store the information to be supplied in the blanks in the above description.

You can implement this solution using any UI you like. Have specific classes that describe the shapes ‘triangle’, ‘square’, ‘rectangle’ and ‘circle’.”

A developer should be able to come up with a simple design built around a base (possibly abstract) class that provides the shared Properties like colour, numSides, etc. They can either implement a Method on that base class to output a string description, or override the default ToString method. Classes describing the specific shapes requested should inherit from this base. Extra points for having the perception to make appropriate properties/fields read-only in the more specific classes (i.e., you don’t want consumers to be able to create a triangle with four sides). Points too for using inheritance where appropriate (e.g., realising that a square is just a more specific kind of rectangle). Nothing too taxing, no trick questions, and no tasks that would take an unreasonable amount of time. Just a simple problem that lets developers show they can clear the FizzBuzz bar.
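
For illustration, here’s a minimal sketch of the sort of design I’d hope to see. I’ve written it in C#, and every class and property name is my own choice rather than part of the exercise:

    using System;

    public abstract class Shape
    {
        public string Colour { get; set; }

        public abstract string Name { get; }
        public abstract int NumSides { get; }
        public abstract int NumCorners { get; }
        public abstract double[] SideLengths { get; }

        // One shared implementation of the required description.
        public override string ToString() =>
            $"I am a {Name}. I have {NumSides} sides, and {NumCorners} corners. " +
            $"Their lengths are {string.Join(", ", SideLengths)}. My colour is {Colour}.";
    }

    public class Triangle : Shape
    {
        private readonly double[] sides;
        public Triangle(double a, double b, double c) { sides = new[] { a, b, c }; }

        public override string Name => "triangle";
        public override int NumSides => 3;      // read-only: no four-sided triangles
        public override int NumCorners => 3;
        public override double[] SideLengths => sides;
    }

    public class Rectangle : Shape
    {
        public double Width { get; }
        public double Height { get; }
        public Rectangle(double width, double height) { Width = width; Height = height; }

        public override string Name => "rectangle";
        public override int NumSides => 4;
        public override int NumCorners => 4;
        public override double[] SideLengths => new[] { Width, Height, Width, Height };
    }

    // A square is just a more specific instance of a rectangle.
    public class Square : Rectangle
    {
        public Square(double side) : base(side, side) { }
        public override string Name => "square";
    }

    public class Circle : Shape
    {
        public double Radius { get; }
        public Circle(double radius) { Radius = radius; }

        public override string Name => "circle";
        public override int NumSides => 0;      // a circle has no sides or corners to report
        public override int NumCorners => 0;
        public override double[] SideLengths => Array.Empty<double>();
    }

Calling Console.WriteLine(new Square(2) { Colour = "red" }) then prints: ‘I am a square. I have 4 sides, and 4 corners. Their lengths are 2, 2, 2, 2. My colour is red.’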

As this is a blog post about assessment tools, it’s worth mentioning ‘online’ tests like ProveIT, Brain Bench, and Codility. These ‘tests’ fall into two main categories:

  • Tests that attempt to assess ability through the instant recall of obscure corners of particular frameworks.
  • Tests that try to assess an actual ability to write code, but without an actual IDE.

My opinion on using obscure trivia to assess problem-solving ability is well-documented. I’m with Einstein on this one, who, when asked for the speed of sound, replied:

“[I do not] carry such information in my mind since it is readily available in books. ...The value of a college education is not the learning of many facts but the training of the mind to think.” *

[ * New York Times, 18 May 1921 ]

I don’t consider memorising a lot of obscure and easily-obtainable facts to be a good indicator of programming ability. Nor do I consider not being able to recall such facts at will to be an indicator of a lack of ability. Developers have Google and reference books available on the job. I’m therefore only concerned with testing those aspects of a developer’s ability that those tools can’t provide.

That leaves those online ‘tests’ that attempt to assess coding skill, such as Codility. There’s nothing wrong with the basic idea of getting candidates to write code as a demonstration of their existing ability and potential. However, there’s a big difference between writing code in an actual IDE and attempting to write code in a web browser (which is how Codility works). In a real IDE you have Intellisense, code snippets, meaningful object navigation (e.g., if you place the caret on the usage of a class or property in Visual Studio and press F12, it’ll take you to where that class/property is implemented), colour coding of keywords and objects, compilation checking as you type, and so on. Codility advocates believe that because the tool has a ‘compile solution now’ button at the bottom of the browser window, it amounts to the same thing. It simply doesn’t. Going back to my earlier analogy about inappropriate ways to assess carpentry skills, you’ve merely gone from using a blindfold to asking the candidate to wear sunglasses in a dimly-lit room.

Codility tests run in a web browser

The main problem with Codility et al, however, is simply this: they don’t give you anything that you don’t also get by watching a candidate solve a realistic problem in a real IDE. Because of this, you invariably find that these tools are preferred by interviewers who don’t themselves possess skills in the language concerned. Such interviewers don’t use the IDE-and-projector approach because they simply wouldn’t understand what they were looking at. By using Codility instead, they’re generally looking for an ‘easy’ way to be told whether a given solution is ‘right’ or ‘wrong’, without having to understand why that value judgement was arrived at. Good candidates are aware of this, and the best of them will be concerned that if you only understand how good they are because some automagically-marked test tells you what to think, how are you going to fairly assess their performance on the actual job in the absence of such feedback?

Everyone knows that good interviews are a two-way street: candidates are assessing you and your organisation just as you are assessing them. Sending a signal that you don’t understand what it is that they do can damage your credibility, and your employer/manager brand, considerably. So if you’re not technical yourself (and some managers aren’t), I’d generally recommend bringing along one of your existing staff, someone you trust to make a meaningful assessment, when judging candidates’ technical fit.

A second problem with Codility, in my opinion, is that solving discrete problems using technology in the real world rarely works in such black-and-white terms as one solution being ‘more’ or ‘less’ right than another. There are generally a great many ways to solve any one problem, and which of them is ‘correct’ is all about context. Tests that focus on an overly-narrow set of criteria when determining success may not always identify the best candidate, even if they identify the person who produces the fastest solution, or the one that uses the fewest (or most) lines of code. For example, if someone writes  123 << 4  to get the result 1968 instead of writing  123 * 16 , that might be the genius you need to shave nanoseconds off calculations in the firmware of a graphics card, or it might just be That One Guy who writes unreadable code that produces hard-to-find bugs. (Mostly, though, it’ll just be someone who doesn’t realise that low-level arithmetic optimisations like bitwise operators are largely meaningless in languages like C#, where high-level code is compiled into MSIL and then JIT-compiled into optimised machine code specific to the hardware it runs on.)
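
To see the point, here’s a throwaway C# snippet (the variable names are mine):

    using System;

    // Both expressions are constant-folded to 1968 at compile time, so the
    // compiled output is identical; the shift buys nothing but confusion.
    int viaShift = 123 << 4;        // 'clever': shift left by 4 bits = multiply by 2^4
    int viaMultiply = 123 * 16;     // readable: says what it means
    Console.WriteLine(viaShift == viaMultiply);     // prints: True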

You can try Codility for yourself here, and I’d strongly recommend that you do so if you’re considering using it to assess candidates fairly. It’s not enough just to get someone else to look at the test for you, unless you ask your chosen guinea pig to work under exactly the same time constraints that candidates will face. That also means they only get one shot at the test, just like candidates.

In the interests of debunking The Emperor’s New Code, when I tested Codility out as an assessment tool I found that I didn’t produce a 100% solution myself first time in the time allowed. I therefore felt it would be unfair to ask candidates to do something that I couldn’t do myself.

I doubt that many people could produce an ‘optimal’ result in the timeframe allowed, particularly when you don’t get to see the criteria that will be deemed to constitute an ‘optimal’ solution before submitting your answer. With only a short window to think about the problem, candidates will naturally focus on providing a solution that works rather than one that shaves milliseconds off the runtime. And even where candidates do provide an ‘optimal’ solution, there doesn’t seem to be much allowance for readability in the simplistic percentage score returned.
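
As a purely hypothetical illustration of ‘works’ versus ‘optimal’ (the task and both solutions below are my own invention, not an actual Codility question), consider checking whether any value in an array appears twice:

    using System.Collections.Generic;

    static class DuplicateCheck
    {
        // The obvious solution a candidate under time pressure will reach for.
        // Correct and readable, but O(n^2): slow on very large inputs.
        public static bool HasDuplicateSimple(int[] values)
        {
            for (int i = 0; i < values.Length; i++)
                for (int j = i + 1; j < values.Length; j++)
                    if (values[i] == values[j]) return true;
            return false;
        }

        // The 'optimal' O(n) solution, tracking values already seen in a set.
        public static bool HasDuplicateFast(int[] values)
        {
            var seen = new HashSet<int>();
            foreach (var v in values)
                if (!seen.Add(v)) return true;  // Add returns false if already present
            return false;
        }
    }

Both methods give the same answers on every input; a percentage score that marks the first one down tells you something about big-O analysis under time pressure, but nothing about whether the candidate writes code their colleagues can maintain.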

I suspect that most 100% results users see from this tool are best explained by the fact that many solutions to the tests have been published online, and some candidates will be inclined to copy one of those.



This deliberately-obscure and unreadable solution scores 100%

This shorter and more readable solution also scores 100%


My overall conclusion: companies that let computer algorithms select the best people to work for them rather than the other way round may well be disappointed by the results.