Sunday 25 April 2010

Feature stacks and the abuse of language


So the new financial year heralds the season of vendor conferences, and - as night follows day - over the horizon, like the four horsemen of the apocalypse, approaches the marketeering storm that always accompanies such conferences.

Sadly one trend I'm seeing more of from the (increasingly desperate?) IT infrastructure industry is aspirational future feature stacking, where endless features are announced haphazardly into the mix in an attempt to justify new revenue streams. Naturally the delivery of these features falls in a different year/decade from when they are announced, let alone from when any actual benefit might be realised.

Of course the first challenge to this is trying to convince customers of the vital importance of features they haven't heard of before, often for problems they never knew they had. So some use fictional stories to try and paint a picture of utopia resulting from paying for their magic elixir, some just plaster the industry with noise, others coin new abuses of marketing terms, and some use all of the above.

A common area glossed over is the 'initial ingress disruption' required to achieve such utopian features - especially given the likely useful life of the 'nirvana function'© versus the duration of the benefits case and the lifetime of said feature.

The benefits case is an interesting point in its own right - remember these are the vendors that often still haven't a clue about the TCO or ROI for their products several years after they were announced. Naturally there is little or no mention of the financial costs involved, the ingress & egress disruption, the organisation & technology process changes, the operating model changes, and increasingly, the business process changes needed to use this fictional future widget function.

Now you wouldn't expect otherwise, but of course there is little mention of either the existing abilities to solve this problem in other ways, or that the effort & resources might be better invested elsewhere (ie higher up) in the technology solution stack. Or that the symptom could be avoided entirely if the cause were addressed with better application design. My view has always firmly been that infrastructure can provide at best single digit % improvements, whereas changes in the application layer can provide double digit % improvements.

Always just snow-ploughing the data problem symptom around rather than addressing the cause - of course you can't fault the bottom-feeding tin vendors for offering this solution, as there is always some legacy application that can benefit from any improvement; but frankly the infrastructure companies don't have many other options, and there is always somebody that'll buy anything.

So there's plenty of noise, lots of confusion over definitions & understanding, and plenty of widget functions. Indeed it's nothing new for companies to start abusing words and terms in a desperate hope of generating excitement and differentiation - yet normally this just further confuses the market (remember when a word typically had one clear, obvious and innocent meaning?).

Some recent history of definition & language abuse could be :-
  • 'Cloud' - the NIST definition has worked to a certain extent, but IT companies have abused the hell out of it.
  • 'Virtualisation' has some common understanding in the server world, but as usual the storage world is chaos.
  • Now along wanders 'Federation' as the latest word to be put through the hype & definition mangler.
I'd really encourage the use of the relevant standards bodies to help create common industry definitions for the terms used, always providing clear & transparent context, and always detailing the assumptions & pre-requisites with any form of benefits discussion. Rather than using hypothetical stories and definition abuse, I'd much rather companies explicitly provided :-
  • The specific customer requirements & problems this addresses, and justification of how
  • The use cases this feature / function applies to, and those that it doesn't
  • Why & how this feature is different from that vendor's own previous method for solving this problem
  • Provide clarity over the non-functional impacts of the feature before, during & after its use - ie impact on resilience, impact on performance, concurrency of usage etc (including up-front details of constraints)
  • Provide the before & after context of the benefit position, clearly explain the price of the benefit change and any assumptions or prerequisites needed to use the feature

  • Provide some form of baseline & target change objective for entire process steps impacted
  • Confirm the technology costs and cost metric model for this feature
  • Naturally you'll also expect me to require the TCO & ROI of the feature, and any changes to the models as a result of this feature (a toy version of that arithmetic follows this list)
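To illustrate the sort of before & after arithmetic I'm asking for, here's a minimal sketch in Python - every figure is invented and the model is deliberately naive (no discounting, no risk weighting), but it shows the shape of answer I want from a vendor :-

# Toy benefits case for a hypothetical 'nirvana function' - every number is invented.
# The point: the saving has to outlive the payback period to be worth the disruption.

one_off_cost = 250000.0          # licence + ingress disruption + process change
baseline_annual_cost = 400000.0  # BAU cost of the affected process today, per year
target_annual_cost = 340000.0    # claimed BAU cost once the feature is in use
useful_life_years = 3.0          # expected life of the feature before it is superseded

annual_saving = baseline_annual_cost - target_annual_cost
payback_years = one_off_cost / annual_saving
net_benefit = annual_saving * useful_life_years - one_off_cost
roi_percent = 100.0 * net_benefit / one_off_cost

print("Annual saving : %.0f" % annual_saving)        # 60000
print("Payback       : %.1f years" % payback_years)  # ~4.2 years - longer than the useful life!
print("Net benefit   : %.0f" % net_benefit)          # -70000, ie a net loss
print("ROI           : %.0f%%" % roi_percent)        # -28%

Which is exactly why I want the baseline, the target, the one-off price and the useful life stated explicitly, rather than left to the marketing department.
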
To take an example, one key element being touted by 'federation' is 'non-disruptive migration' - something I'm very much in favour of. However a) for many this can already be done through the use of the de-facto volume manager & file-systems (a sketch of what I mean follows this paragraph), and b) the real issues associated with migration are 'remediation' and CABs. With most CABs nowadays being based on risk, and commonly used as process validation gates, it's hard to understand how 'federation' helps change approval boards (especially when you consider that lots of CABs still require engagement for moving hypervisor guest images). For the 'remediation tech refresh' use case of federation there will need to be a lot of changes in the vendor support & interop processes, culture, responsibilities and agreements for this to be of use. If the host still requires any material remediation (eg HBA change, firmware changes, OS patches, server model change, VM/FS changes etc) then moving the bytes stored on the rust, whilst good, does little to address the majority of the problem. Let's not forget all the other associated OSS processes that have to be engaged - eg ICMS/CMDB updates, asset & license management registers, alert & monitoring tools, network planning & bandwidth management etc. Yes, in the world of the automated dynamic data-centre these related issues will be improved, but that's a future state after a lot more investment & disruption.
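
For the avoidance of doubt, here's a minimal sketch of the host-side alternative I mean - an online LUN migration using the Linux volume manager, driven from Python. The device paths and volume group name are made up, and in real life every one of these steps still has to pass through the CAB & OSS processes above :-

# Hedged sketch only: host-based 'non-disruptive' data migration with LVM.
# Device paths and the volume group name are illustrative, not real.
import subprocess

OLD_PV = "/dev/mapper/old_array_lun"   # LUN on the array being retired
NEW_PV = "/dev/mapper/new_array_lun"   # LUN presented from the replacement array
VG = "app_vg"                          # volume group hosting the application file-systems

def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["vgextend", VG, NEW_PV])    # 1. add the new LUN alongside the old one
run(["pvmove", OLD_PV, NEW_PV])  # 2. move the extents online; application I/O carries on
run(["vgreduce", VG, OLD_PV])    # 3. remove the emptied old LUN from the volume group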

If this sounds overtly negative, that isn't the intent. The issue for me is that any 'nirvana function'© is normally only of use if it makes a net positive change to the cost of BAU service or change. In order to prove that, we need to understand how it impacts the steps, effort & duration for each item in the transition from 'desire to delivery' (eg from when somebody thinks they may need some capacity to when they are able to actually use it). From my experience this sequence involves a mix of commercial, technical, political, emotional & financial steps - yet very few companies seem able to show the steps in this sequence and how their function changes them.

Now I'm very much one for focusing on capabilities and architectures rather than point widget features, but the current trend of announcing aspirations as architectures and then as products is a dangerously steep downhill slope. Like an iced wedding cake made from cards built on a sandy beach, this obsession with feature stacking promises everything, but the benefit delivery regularly lasts only a few minutes before collapsing in an ugly mess.

Are suppliers hoping that by ever more frequently hyping the shiny shiny baubles of a progressively distant future they will distract us from the factual reality of today? Remember today was the future of yesterday - and how many of the past's 'nirvana functions'© promised by these same charlatan vendors actually came even halfway true?

If only these vendors spent time & resources making the existing features usable, simplifying the stack, resolving the interop issues, giving clear context and actually justifying their claims, rather than building their own independent leaning towers of Pisa from which to throw mud at each other...

Thursday 15 April 2010

NotApp takes a byte of objects?

So NetApp have finally shown their cards with regard to their previous press noises on cloud & object storage, with their acquisition of Bycast Inc here

Now eager followers (it's legit to use the plural as there are at least 2 of you!) will recall that I commented on NetApp and cloud storage last year here (NotApp or NetApp) and here (NetApp cloud or fog) - so of course I'm rather interested to follow up and hear how @valb00 carries through with his statement in my blog comments from Aug 2009 of :-
"- Finally we come to the highly anticipated Object Storage question. Without pre-announcing anything, I will divulge that our solution will prove the value of Spinnaker’s scale-out excellence, particularly beyond NAS or even SAN/iSCSI configurations. Priorities of REST, XAM, SOAP and others are really interesting to us at the early (pre-standards) market phase"
I must admit to being a little disappointed by the announcement - much like @StorageBod, I had been allowed to gather the impression that they were much further along with their own internal object work. One assumption would be that what was being alluded to was a whole bag of empty :( Of course another possibility is that the internal work is going fine and this is a stand-alone additional product line?

Either way, the timing of NotApp & ByCost gives me a wry smile given the length of time between the 'object strategy' PR statements and actually starting to do something.... (let alone the GA/GD date of the final solution)

Fundamentally I still have the same questions, plus naturally some additional new ones :)

Clearly there are some obvious questions :-
  1. How quickly will they make this a native capability of OnTap and not just a standalone product or a bolt-on gateway? (frankly I'm not taking bets on anything earlier than the GD release of v8.3??)
  2. What pricing model & cost will they sell the tech at? The object model will not stand NetApp's traditional COGs, let alone the combined COGs of NotApp plus Bycost
  3. How does NetApp intend to handle Bycast as a company? Let's face it, NotApp's acquisition history isn't exactly great, and their software dev trains are rather muddled and overly complex right now
  4. How will NetApp manage to hold on to the people & desire fuelling the drive and innovation at Bycast? Especially when they are faced with the monolithic wall of spaghetti code that OnTap must be by now...
  5. How much did NetApp pay for Bycast? and thus how much additional value do they need to return to their shareholders over what period of time?
But there are some other questions that come to mind as well :-
  1. Would I have purchased from Bycast before? No. Would I now via NetApp? Don't know - far too early to understand
  2. What are the product costs and the combined/revised TCO model?
  3. How will NetApp position this pure software-only model, which allows for flexibility with hardware (eg server reuse, DAS pricing models, capex risk mitigation through repurposing etc), against their normal hardware + software model?
  4. When will they include compatibility with the AWS S3 object API? As this is most definitely the de-facto standard that people are interested in right now (a sketch of what that compatibility looks like to a developer follows this list)..
  5. What will NetApp do for local deployment skills & support on a global basis?
  6. "Is this too little too late?" @RandyBias asks here - interesting question! All depends on things like the API model, product cost, time to deliver real integration, where it fits in sales proposition, roadmap integration etc...
  7. How will NetApp build upon Bycast and what is their 18mth roadmap for the Bycast technology?
  8. What difference will being part of NetApp make to Bycast? and how will this improve their products and services?
  9. How will NotApp adapt the waffle maker to be able to efficiently cope with the metadata needed in an object platform?
  10. How does it relate to OnTap 8 distributed file-system management? Is this helping to fill gaps in minds, technology & issues in that space?
  11. Does NetApp have a suitable culture to be able to connect and deliver in this space? Interesting... in the enterprise market for internal object stores - maybe... for the web 2.0 uber-scale, developer-led object stores then no... They certainly are not the driving culture innovator they once were; just look how hard EMC have found this area with Atmush, and the squillions of $s & good minds that they've poured into CIB & Atmush so far... (and that's not too bad a product - some material API standards issues, but mainly internal culture, sales & cost issues...). Now given that NetApp are nowadays more like EMC in the 90s than any other company I've ever met (ie complacent, out of touch, expensive, slow to react, storage-only player, rhubarb for ears etc - but interestingly still better at NAS than the rest) - how on earth will NotApp's sales-force get their heads around selling something at much lower cost and higher value, and address the margin cannibalisation directly?
  12. Will NetApp want to get into the IaaS/SaaS market by offering an object store service directly to compete with AWS & EMC etc? And if so, how will they handle the 'competing with their own customers' bit?
  13. How will the competition react?
  14. Who will look to snap-up the other software only storage cloud players out there?
  15. Will NetApp now finally stop calling 'any shared bit of tin' a cloud and use the term with a bit more respect?
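On the S3 compatibility point (question 4 above), here's roughly what that looks like from a developer's seat - a minimal sketch using the Python boto library against a hypothetical S3-compatible endpoint. The hostname, credentials and bucket name are all invented; the value of 'compatibility' is that this exact code shouldn't care whose tin (or cloud) sits behind it :-

# Hedged sketch: talking to a hypothetical S3-compatible object store via boto.
# Endpoint, credentials and bucket name are illustrative only.
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

conn = S3Connection(
    aws_access_key_id="EXAMPLEKEY",
    aws_secret_access_key="EXAMPLESECRET",
    host="objects.example.internal",         # the vendor's S3-compatible endpoint
    calling_format=OrdinaryCallingFormat(),  # path-style requests for non-AWS hosts
)

bucket = conn.create_bucket("demo-bucket")           # PUT Bucket
key = bucket.new_key("hello.txt")                    # handle for a new object
key.set_contents_from_string("hello object world")   # PUT Object
print(key.get_contents_as_string())                  # GET Object

If Bycast under NetApp speaks that dialect natively, the 'too little too late' question gets a lot easier to answer.
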
Now - quick breather - time to comment on Bycast and their products :-
  • Prior to this I was aware of them, but they weren't somebody I was actively engaged in discussions with
  • I think it's good that they are working with the SNIA CDMI standards (I'm guessing this is where the acquisition discussions may have started from)
  • David Slik's blog here seems to have plenty of good content in it
  • Bycast certainly seem to have a bunch of happy customers so far
  • The fact that it already supports multiple types & statuses of target media is very positive, as is the support for running under VMware (and hence being hardware agnostic)
  • The data on the website is rather light on specific numbers (volume & quantity scale, performance etc) and on details of policy management & metadata
  • One thing that annoys me is that to find any information out (documentation, technical, support etc) it would appear I have to register (and wait for an email of the document, and for the inevitable sales droid to try and contact me) - big hint: you want me to look at your company & product? Make it easy! (especially as when I tried it the system crashed with Siebel OnDemand errors all over the place)
  • Clearly the devil is in the details, I'll wait to find out more over time
So the big question is - "when & what are you going to do with your new baby now, NetApp?" - I'm grabbing a beer and going to sit, watch & wait... :)

Wednesday 14 April 2010

Large slices of pie do choke you!

So a new blogger called "Storage Gorilla" makes a few interesting and well-reasoned points here about IBM's XIV (my views on XIV will be in a different blog post) - but a couple that jump out at me are the 'entry size' & 'upgrade size' points about half-way down the text.

Now anybody who's spent time working with me on my companies' global storage BOMs will understand that this is a major issue for me, and not something that is getting any easier. The issue is a complex one :-
  • The €/GB ratio becomes more attractive the larger the capacity within an array (as the chassis, interfaces, controllers & software overheads get amortised over a larger capacity) - however the actual capex & opex costs continue to be very sizeable and tricky to explain (ie "why are we buying 32TB of disk for this 2TB database??") - some toy numbers follow this list
  • As the GB/drive ratio increases, the IOPS per individual drive stays relatively consistent - thus the IOPS/GB ratio is in slow decline, and performance management is an ever more complex & visible topic
  • IT management have been (incorrectly) conditioned by various consultants & manufacturers into believing that 'capacity utilisation' is the key KPI (as opposed to the correct measure of "TCO per GB utilised")
  • DC efficiency & floor-space density are driving more spindles per disk shelf = more GB per shelf
  • Arrays are designed to be changed physically in certain unit sizes, often 2 or 4 shelves at a time
  • As spindle sizes wend their merry way up in capacity, the minimum quantity of spindles doesn't get any smaller, thus the capacity steps get bigger
  • Software licences are often either managed / controlled by the physical capacity installed in the array, or by some arbitrary combination of capacity licence keys - neither of which changes with spindle sizes
  • Naturally this additional capacity isn't 'equally usable' within the array - thus a classic approach has been either to 'short stroke' the spindles or to use the surplus for low IO activity. However in order to achieve this you either have to have good archiving and ILM, or need to invest in other (relatively sub-optimal compared to application ILM) technology licences such as FAST v2.
  • Of course these sizes & capacities differ by vendor, so trying to normalise BOM sizes between vendors becomes an art rather than a science
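To put some entirely invented (but representative) numbers against the first two bullets above, the amortisation and IOPS/GB effects look something like this :-

# Toy numbers only: fixed array overheads vs capacity, and IOPS/GB vs spindle size.

FIXED_COST = 150000.0   # chassis, controllers, interfaces & base software (EUR)
COST_PER_TB = 2000.0    # incremental cost per usable TB (EUR)
DRIVE_IOPS = 180.0      # a 15k spindle does roughly this many IOPS whatever its size

for usable_tb in (10, 50, 200):
    cost_per_gb = (FIXED_COST + COST_PER_TB * usable_tb) / (usable_tb * 1000.0)
    print("%4d TB usable array -> %5.2f EUR/GB" % (usable_tb, cost_per_gb))

for drive_gb in (300, 600, 1200):
    print("%5d GB spindle -> %5.2f IOPS/GB" % (drive_gb, DRIVE_IOPS / drive_gb))

So the bigger the box, the better the headline €/GB looks, whilst every doubling of spindle size halves the IOPS/GB - which is exactly the tension described above.
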
So what does this all mean?
  • Inevitably it means that the entry level capacity of arrays is going up, and that the sensible upgrade steps are similarly going up in capacity.
  • We are going to have to spend more time re-educating management that "TCO per GB utilised" is the correct measure
  • Vendors are going to have to get much better at sizing software & functionality licences so that they much more closely match the unit of granularity required by the customer
  • All elements of array deployment, configuration, management, performance and usage must be moved from physical (ie spindle size related) to logical constructs (ie independent of disk size)
  • Of course SNIA could also do something actually useful for the customer (for a change), and set a standard for measuring and discussing storage capacities - not as hard as it might appear, as most enterprises will already have some form of waterfall chart or layer model to navigate from 'marketing GB' through at least 5 layers down to 'application data GB' (a toy version of such a waterfall is sketched after this list)
  • Naturally the strong drive to shared infrastructure and enterprise procurement models (as opposed to 'per project' based accounting), combined with internal service opex recharging within the enterprise estate, will also help to make the costs appear linear to the internal business customer (but not to the company as a whole)
  • The real prize though will be a vendor that combines a technical s/ware & h/ware architecture with a commercial licence & cost model that actually scales from small to large - and no, I don't mean leasing or other financial jiggery-pokery
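And as an illustration of the 'marketing GB' waterfall mentioned above, a toy version - the layers and loss factors are invented, and every estate will have its own variant :-

# Toy capacity waterfall from 'marketing GB' down to 'application data GB'.
# The loss factors are illustrative - substitute your own estate's figures.

marketing_gb = 100000.0   # what the datasheet says (decimal GB)

waterfall = [
    ("decimal GB -> binary GiB",       1000.0 ** 3 / 1024.0 ** 3),
    ("RAID-6 parity overhead",         0.75),
    ("hot spares",                     0.95),
    ("snapshot / replication reserve", 0.80),
    ("file-system formatting & slack", 0.90),
]

capacity = marketing_gb
print("%-35s %10.0f" % ("marketing GB", capacity))
for layer, factor in waterfall:
    capacity *= factor
    print("%-35s %10.0f" % (layer, capacity))
print("%-35s %10.0f" % ("application data GB", capacity))

Roughly half of what was bought shows up to the application in this made-up example - which is exactly why a common measurement standard would be genuinely useful.
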
So I wonder which vendor will be the first one to actually sit their licensing, commercial & technical teams all together at the start of a product's development, then talk with & listen to customers, and actually deliver a solution that works in the real enterprise to enable scaling from small to large in sensible units? I'm waiting...

Friday 9 April 2010

Vendor Partner Programmes - use or useless?

Most of you will know that many topics can make me a tad irritated, however a recurring one that never fails to wind me up is the topic of "partner accreditation" programmes.

You know, the ones where ISV XYZ says they are a 'gold partner' of technology supplier ABC and all the world is going to be hunky dory. Of course this applies equally to SIs, ISVs & OEMs.

This irritates me for a number of reasons :-

1) Mainly because it's never 'hunky dory' and given the devil is in the details, all these partner schemes do is set certain expectation levels in top management's minds that can never be achieved. Indeed if it was all so 'hunky dory' why the heck do so many SI managers drive around in expensive cars paid for through 'change control additions'???

2) Often one 'partner' insists on the use of another partner, either in the form of a direct statement, or simply through a limited list of partners on the support & interop matrix. Of course it would most certainly be interesting to better understand any financial relationships or transactions between these partners ('finders fees' anyone?)

3) However the main issue I have with these schemes is that 90% of the time they are little more than joint marketing and sales programmes, which whilst sounding nice, in reality do nothing to actually help the customer. Once in, the accreditation requirements are often so light & flexible it's daft, and this leads to lazy practices - which may in fact be negative for the customer.

But the real point of this note was to call out one specific area where vendors really could use these programmes for some positive value, namely tackling the issue of "supported versions". What I mean by this is that far too often, companies that run these schemes ignore a key issue - they allow 'accredited partners' either to require the use of older technology versions, or to support only subsets of the products.

A couple of examples of this are :-

a) The Cisco SAN switch interop programme for SAN-OS/NX-OS - where Cisco allow their partners to certify & support against a specific minor version of the code that is effectively unique to that partner. This makes it more than tricky to get a solution between server, HBA, OS, disk array, tape library etc that actually matches all of the partners' specific certification requirements.

b) Oracle partners & out-of-date/support software - where Oracle allow their certified partners to 'only support' aged versions of the database products. In the last month alone I've had one major billing partner & one major ISV both say that they currently only support Oracle DB 10.2.x, and that it will be mid-next year before they have anything that supports 11.x! Remember that 10.2 ends Premier Support in July 2010, and 11.1 was released in Aug 2007!!!  (see here for details on Oracle support versions & dates)

So what do I want done about this? Frankly a simple starting point would be for these three items to be added to the conditions of a partner scheme (a rough sketch of the rules as a compliance check follows the list) :-

1) If you offer a partner accreditation programme then mandate that members of the programme support the current version of the technology within 90 days of its GA release (after all, they will have had plenty of notice & beta access as part of the programme) - this must also include providing upgrade routes

2) Partners are only permitted to do new deployment installs using non-current versions for up to 1yr after GA of the current version (and even then only the n-1 release version) - thus allowing 'in-flight' projects to complete, but preventing partners from proliferating aged technology deployments

3) If you allow partners to initially certify against specific minor release sub-set versions, then require them to support the full major release version within 6 months of its initial release (eg a 'terminal release' variant) - thus ensuring that the eco-system will converge on a common supported version within a reasonable period of time
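
To make the intent concrete, here's a rough sketch of how a vendor might police those three rules - the dates, versions and partner record are all invented, and real policy logic would obviously be richer :-

# Hedged sketch: checking a partner against the three proposed scheme rules.
# All dates, versions and the partner record below are invented examples.
from datetime import date, timedelta

CURRENT_VERSION = "11.2"
CURRENT_GA = date(2009, 9, 1)   # GA date of the current version (invented)
TODAY = date(2010, 4, 9)

def check_partner(name, supports_current, new_install_version, full_major_certified_by):
    issues = []

    # Rule 1: must support the current version within 90 days of its GA.
    if not supports_current and TODAY > CURRENT_GA + timedelta(days=90):
        issues.append("does not support current version %s" % CURRENT_VERSION)

    # Rule 2 (simplified): new installs on a non-current version are only
    # allowed within 1 year of the current GA (the n-1 restriction is not modelled).
    if new_install_version != CURRENT_VERSION and TODAY > CURRENT_GA + timedelta(days=365):
        issues.append("still deploying %s over a year after GA" % new_install_version)

    # Rule 3: certification against a minor sub-set must widen to the full
    # major release within 6 months of that release.
    if full_major_certified_by > CURRENT_GA + timedelta(days=182):
        issues.append("full major-release certification is later than 6 months after GA")

    print("%-20s %s" % (name, "OK" if not issues else "; ".join(issues)))

check_partner("ISV Example Ltd", False, "10.2", date(2011, 6, 1))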

Naturally companies that are not part of these partner programmes could carry on causing issues, but what this would mean is that those genuinely in such partner schemes would actually be helping their customers, and would have real differentiating benefits to offer.

Of course this would actually require companies to actively manage their partner programmes, and of course to remove those partners that don't adhere to the rules - something I doubt will ever actually happen. But you've gotta have a dream haven't you? :)