
Sam Lowe's blog on Enterprise IT

Friday, July 07, 2006

Are Most Enterprise Technology Selection Exercises Flawed?

There is no single best way to select Enterprise technologies or Enterprise application packages. However, having been involved in many selections (on both sides of the fence, customer-side and vendor-side), it never fails to amaze me that each process I come across seems either 1) to have been conceived from a blank sheet of paper, from first principles, or 2) to be the rolling-out of an old set of templates that someone found on their hard disk and which came from a very different context.

I suppose it's indicative of the youth of the IT industry that there doesn't seem to be any received or taught guidance on this, and instead the task is often left to people to make up as they go, or is outsourced to third parties who may see the amount of documentation produced as the measure of their success or quality! Things would be better if there were a dialogue about this, so in that spirit, here are a couple of my thoughts and observations to be shot down.

In my experience there are really only three ways to run a selection exercise:

First is the traditional Request-For-Information (RFI) followed by Request-For-Proposal (RFP) process.

*Your classic paper-heavy type of affair, where each party works through the night in sequence to embed every thought or nuance into a series of lengthy documents. You generally dredge the brains of everyone you know, trawl the internet and raid the piles of old trade-magazines under your desk to come up with every product you can buy that might do something similar to some of what you think you might need. And then you start communicating with all vendors via the medium of MS Word whilst refusing to actually talk to anyone.

*Then, typically, I find these work on a critical approach, i.e. you look for something that is unacceptable about a particular vendor or product based on their RFI answers, and use that as a reason to knock them off the list (there's a small sketch of this knockout logic after this list). Eventually you're left with 2 or 3 to go to RFP with.

*This type of process is unbeatable for auditability, creating a 'whiter-than-white' approach that appears fair to all. However, it is extremely resource- and time-inefficient for all sides, and often runs the risk of selecting the vendor who is best at the process (the game?) rather than the one with the most appropriate product.
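
To make that knockout mechanic concrete, here is a minimal sketch (in Python, purely illustrative; the vendors, criteria and shortlist size are invented, not taken from any real exercise) of how the elimination filter behaves: a single unacceptable RFI answer removes a vendor, and whoever is left goes forward to RFP.

    # Each vendor's RFI answers, marked per criterion as acceptable (True) or not (False).
    # All names and criteria below are hypothetical.
    rfi_responses = {
        "Vendor A": {"scalability": True,  "uk_support": True,  "open_standards": False},
        "Vendor B": {"scalability": True,  "uk_support": True,  "open_standards": True},
        "Vendor C": {"scalability": False, "uk_support": True,  "open_standards": True},
        "Vendor D": {"scalability": True,  "uk_support": False, "open_standards": True},
        "Vendor E": {"scalability": True,  "uk_support": True,  "open_standards": True},
    }

    def knockout_shortlist(responses, max_rfp_candidates=3):
        """Drop any vendor with at least one unacceptable answer, then cap the
        survivors at the number you can afford to take through to RFP."""
        survivors = [
            vendor for vendor, answers in responses.items()
            if all(answers.values())   # one 'False' anywhere knocks the vendor out
        ]
        return survivors[:max_rfp_candidates]

    print(knockout_shortlist(rfi_responses))   # e.g. ['Vendor B', 'Vendor E']

The weakness of option one is visible in the sketch itself: nothing in the filter measures how appropriate the surviving products actually are, only that nobody found a reason to reject them.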


Second is the accelerated version of the RFP-centric process, where you go straight into the RFP, with a smaller number of parties, from an independently-researched and validated shortlist.

*This process is great if you know who the contenders are, and you can get away without the paper-trail (most organisations can in my experience). Plus, the increased level of communication possible because of the shorter process and smaller number of participants means the results are more likely to be on target.

*But being able to do this is dependent on really knowing who the serious contenders are in advance. And really knowing does not mean just going for the most famous. This usually means using someone who's got real expertise _and_ experience of similar situations. Please do not just choose the few closest to the top right-hand corner of a Magic Quadrant. Even if it's the right MQ (and who's to say the big G have chunked things up the same way you should), using an MQ alone is a complete lottery. Frankly, it smacks of the IT department wanting the best toys to play with, or worse, wanting the most valuable addition to their resumes.


The third option is to run an interactive process from the start, concentrating on minimising the documentation, and instead scripting the interactions to collaboratively work through the issues, iterating towards a joint solution that can be tendered against with no misunderstandings.

*This is of course an extension of a joint-design exercise, applied to the selection world. It can deliver the best results where the right selection cannot be reduced to a feature-function comparison and a qualitative decision is needed (e.g. on process issues and the user-level experience, rather than a technical does-have/doesn't-have checklist), bringing the business and IT communities together to assess the contenders (a sketch of that kind of joint scorecard follows this list). It is also the fastest way to get to a selection, thanks to not having the overheads of all that documentation, and instead using the time to do design and education work you'd have to do anyway.

*But, this process leaves little in the way of an audit-trail, and it can only work with a very small number of vendors/products without becoming very cumbersome. So it can only really be considered where there are only a few credible alternatives, and the fact that the others are not credible can be clearly articulated (normally with external assistance).
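
And to show what 'not a feature-function comparison' can look like in practice, here is a second minimal sketch (again Python, again purely illustrative; the criteria, weights and scores are invented) of the kind of joint qualitative scorecard option three tends to produce, with business and IT reviewers rating a small field of contenders against weighted criteria rather than ticking feature boxes.

    # Agreed weights for the qualitative criteria (hypothetical values).
    weights = {"process_fit": 0.4, "user_experience": 0.3, "technical_fit": 0.3}

    # Scores from 1-5 gathered in the joint workshops, one set per reviewer community.
    scores = {
        "Product X": {"business": {"process_fit": 4, "user_experience": 3, "technical_fit": 4},
                      "it":       {"process_fit": 3, "user_experience": 4, "technical_fit": 5}},
        "Product Y": {"business": {"process_fit": 5, "user_experience": 4, "technical_fit": 3},
                      "it":       {"process_fit": 4, "user_experience": 3, "technical_fit": 3}},
    }

    def weighted_score(product_scores):
        """Average the business and IT views per criterion, then apply the weights."""
        total = 0.0
        for criterion, weight in weights.items():
            avg = (product_scores["business"][criterion] + product_scores["it"][criterion]) / 2
            total += weight * avg
        return round(total, 2)

    for product, per_community in scores.items():
        print(product, weighted_score(per_community))

The point of such a scorecard is less the arithmetic than the conversation it forces: with only two or three contenders in the room, the business and IT communities have to agree what the criteria and weights mean before anyone scores anything.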


So, that is a quick tour through the three types that I see out there. There are some interesting related thoughts from Tony Byrne here. I'd be interested to hear other people's experiences.


6 Comments:

  • Interesting topic. I have done a number of these product evaluations. The evaluation was always a little different depending on the technology space, but the one common thing was always hands-on. Having the products in house with your own people to test and verify is very important. The paper evaluations, especially in the RFP formats, are usually over-hyped by the vendors. Or the capabilities do exist, but only with an army of consultants.

    By Blogger Mark Griffin, at 1:46 pm  

  • Yes, absolutely - having the products in house for hands-on evaluation gives a lot more flexibility for you to really put the products to the test. You can go straight into a competitive Conference Room Pilot or Proof-of-Concept that way.

    However, isn't there a limit to how many products you can include and still give a decent assessment (assuming you have limited 'experts' available to do the assessments, or limited training time)? So don't you still have to do some kind of selection to get the contenders that you need in?

    Second thing is (as you say) it depends on the product-space. For portals, ETL, EAI/ESB and other platform technologies that's great, but aren't large-scale business applications like ERP/CRM/PLM/SCM too big and complicated to install in a reasonable time and assess independently in a meaningful way?

    By Blogger Sam Lowe, at 2:19 pm  

  • Yes, I think you are right on both counts. You have to get the number of products down to a reasonable level in order to do the hands-on work. Generally speaking, having the vendors not only respond on paper to a list of questions but also come in and present/defend their statements is a good start to thinning the product list.

    Yep, ERP/CRM etc. are just plain too big to do anything with. If anyone ever successfully implements one of those let me know. ;)

    By Blogger Mark Griffin, at 9:35 pm  

  • Sam - this is a great summary. I think there are far too many people who think that #1 actually produces the best selection, when in reality, as you point out, it's just the most heavily documented.

    I think 2 and 3 are far better options - the closer you can get to working software in your environment during an evaluation, the better chance you stand of making a valuable choice.

    I also think that more subjective and social measures are often undervalued - reference calls with current and former customers, reading blogs by vendors and customers/users, etc.

    By Blogger scott, at 7:15 pm  

  • Scott, yes good build - I didn't get to mentioning the softer side of the diligence (such as the reference calls) at all in the post.

    The risk of course with reference calls is that you may get set up with the vendor's favourite corporate-entertaining buddy customer, or you may get one of the early adopters who's a big fan and a bit defensive, or you may get a genuinely objective customer. And you don't know which you'll get before you get them.

    The best way I have found to side-step that is simply to specify yourself which customers you want to speak to. Ideally not even from a list they give you, but rather by getting someone into the team (on a temporary basis if necessary) who has the personal contacts and the experience/knowledge of who's got what, and maybe even who had the issues.

    By Blogger Sam Lowe, at 10:03 pm  

  • Sam,

    Funny thing about the "template" approach to buying: when adapting the old checklist, I think purchasers don't usually correlate the template to project success.

    In other words, if the old project failed in some significant manner, then using that purchasing template may not be a good idea.

    Michael Krigsman
    http://projectfailures.com

    By Blogger mkrigsman, at 5:30 pm  
