Monday, 21 September 2009

TCO - Time for an open-source framework?

So in my job I regularly see what each vendor claims to be a 'TCO model' - and, funnily enough, these normally show that the vendor's widget is much better than the competitor's. Naturally each model has some elements in it that the others don't, or places a certain weighting / emphasis on particular attributes that others don't.

Now this is a topic that is very close to my heart, as all standards and strategy changes I make in my company are supposed to be TCO based - with us not making any changes unless they improve our actual TCO. Naturally this breaks when vendors EOL products or when TCO isn't the driver - but the principle is valid (although sadly a surprise to many people).

Now on his blog at http://www.matrixstore.net/2009/09/17/defining-an-up-to-date-tco-model/ @om_nick reminded me that I had a draft post on this, and that 'crowd-sourcing' such models can work quite well. There are plenty of good attributes listed so far on Nick's blog, and I'm sure we'll all add many more as time goes on (I know I must have a good dozen or so TCO Excel models knocking around somewhere).

Of course I know that TCO isn't always the right measure, and that ROI or IRR can often be just as valid, but for lots of elements of infrastructure the first port of call is a TCO or CBA - and making those consistent would be a great starting point!

One thing I'm very sure about is that for each technology category there is more than one 'level' to measure a TCO at for different purposes, for example :-
Industry average TCO - ie what does a GB of data cost to store for x hours on average in the industry? (the analyst KPI - and product / vendor agnostic)

Estate average TCO - ie what does a GB of data cost to store for x hours in my company on average? (the CTO level KPI - and product / vendor agnostic)

Architecture average TCO - ie for this type of reference design (inc functional & non-functional requirements) what does a GB of data cost to store for x hours in my company on average? (the architect level KPI) This is product / vendor agnostic and used for ROM costing and selection of an infrastructure architecture.

Category average TCO - ie for this class of product (eg modular storage, enterprise storage, small x86, medium unix etc) what does a GB of data cost to store for x hours in my company on average? (the catalogue level KPI) This is now technology 'class' specific, but still product / vendor agnostic, and is used for building up the ROM & architecture costs above.

Product TCO - ie for this specific vendor product & version what does a GB of data cost to store for x hours in my company on average? (the product level KPI) This is now product and vendor specific, and is used for selecting product within a category (ie direct product bake-offs).
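To make the levels concrete, here's a minimal sketch in Python - the categories, costs and capacities are entirely invented - showing how the same €/GB KPI changes meaning depending on the population of assets you aggregate over :-

```python
# Illustrative only - the categories, costs and capacities are invented.
assets = [
    # (category, monthly_cost_eur, usable_gb)
    ("modular_storage",    42000,  80000),
    ("modular_storage",    39000,  75000),
    ("enterprise_storage", 95000, 120000),
    ("enterprise_storage", 99000, 110000),
]

def eur_per_gb(items):
    """The same KPI at any level: total cost over total usable capacity."""
    total_cost = sum(cost for _, cost, _ in items)
    total_gb = sum(gb for _, _, gb in items)
    return float(total_cost) / total_gb

# Estate average TCO - every asset in the company
print("Estate average: %.4f EUR/GB/month" % eur_per_gb(assets))

# Category average TCO - one product class at a time
for cat in ("modular_storage", "enterprise_storage"):
    subset = [a for a in assets if a[0] == cat]
    print("%s average: %.4f EUR/GB/month" % (cat, eur_per_gb(subset)))
```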
There are many tricky parts in a TCO model, including :-
  • What to measure? (both what is desired, and what is actually possible over time)
  • How to measure?
  • Where to measure?
  • How frequently to measure?
  • What relative weighting to give?
  • What TCO output KPIs to give? (eg € per GB, GB per kW, € per IOP etc)
  • How to communicate such KPIs without creating dangerous context-less sound-bites for people to abuse (ie my absolute hatred of the phrase 'utilisation' - it's utterly meaningless without context!)
  • How to ensure transparency & clarity over assumptions and driving inputs?
  • How to value / compare functionality when there are no direct equivalents?
  • How to handle 'currently familiar' or 'keep same' (ie low cost of introduction) Vs 'new vendor & widget' (ie disruption & short term duplication of costs / disruption etc)?
  • How to handle 'usefulness'? (eg performance is an NFR that has value - does 'IOPS per GB per €' work?)
  • How to build a feedback loop and refinement model to periodically measure and validate TCO predictions Vs actuals, and take action accordingly?
  • How to protect confidential or sensitive values?
Getting a common list of assumptions, factors, attributes, relative weightings and of course values for all of these is the key, and a very valuable exercise for all - customer and vendor alike.
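To make the weighting problem concrete, here's a hedged sketch of a product bake-off scorer - the vendor names, KPI values and weights are all invented, and normalising against the best entrant is just one possible approach. The point is structural: raw inputs, weights and scores are kept visibly separate, so the assumptions stay transparent and auditable :-

```python
# Invented vendors, KPI values and weights - the structure is the point.
weights = {                 # an explicit, published assumption; sums to 1.0
    "eur_per_gb":  0.5,     # lower is better
    "gb_per_kw":   0.3,     # higher is better
    "eur_per_iop": 0.2,     # lower is better
}
lower_is_better = {"eur_per_gb", "eur_per_iop"}

products = {
    "vendor_a": {"eur_per_gb": 1.20, "gb_per_kw": 5000, "eur_per_iop": 0.9},
    "vendor_b": {"eur_per_gb": 1.45, "gb_per_kw": 7500, "eur_per_iop": 0.7},
}

def score(p):
    """Normalise each KPI to 0..1 against the best entrant, then weight it."""
    total = 0.0
    for kpi, w in weights.items():
        values = [v[kpi] for v in products.values()]
        if kpi in lower_is_better:
            total += w * (min(values) / p[kpi])
        else:
            total += w * (p[kpi] / max(values))
    return total

for name in sorted(products, key=lambda n: -score(products[n])):
    print("%s: weighted score %.3f" % (name, score(products[name])))
```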

Lastly, one company with a very interesting approach to TCO mngt is www.Apptio.com, who provide a SaaS model for building & maintaining an automated TCO measurement and reporting platform - it's certainly sparked my interest, and I'd love to hear people's thoughts about or experiences with them.

Now I for one am more than up for spending time on creating an 'open source' TCO model that incorporates many people's input and thoughts, that we can refine and revise over time and use to evaluate many vendor technologies - so what do other people think?

Tuesday, 1 September 2009

Cloud Backup & Android

Firstly a tip for Android users (and I'm a major fan of this platform for mobile devices) :-

If you change your Google account password, also make sure you change the password cached on your Android phone quickly afterwards - otherwise it appears that Google decides that the failed login attempts from your phone (when it auto-syncs contacts, calendar & email data) are a hacking attack and temporarily disables your Google account... Clearly not good, and it needs some more thought from Google I think :(

Which leads me onto a related topic - cloud backup (which of course should really be cloud restore rather than cloud backup). In this context I'm really talking about the SaaS & PaaS definitions of cloud.

Firstly I should say that I'm greatly in favour of having multiple controlled & secured instances of data in several locations; my feeling is that for a lot of smaller organisations or individuals this simply doesn't happen. As such the technologies loosely referred to as 'cloud backup' could be invaluable to many people in easily & cost-effectively enabling data persistence and recovery.

Now, before we dig into the real topic, there are lots of side-points to consider re security & availability in the cloud backup/recovery area, including :-

  • Obviously the data needs to be protected, normally with encryption. My view is that the private keys should be supplied & owned by the user, not by the backup/recovery provider, with those keys themselves backed up to a separate key escrow / backup & recovery provider (see the sketch after this list).
  • What SLAs does the provider have? (availability, accessibility, performance, integrity etc) and how, from where and how often are they measured & reported?
  • How can you contact your provider should you have an issue? (a web form simply doesn't cut it)
  • What guarantees do they offer to keep your data at their site available - are they a '2nd copy hoster' or do they treat your data with the same care as a master copy (eg do they do their own backups/replicas, can you treat their service as an archive rather than B&R etc?)
  • What are the guarantees worth? What kind of financial penalties / compensation are available, how are they calculated & triggered, and how do they compare to the value of the data?
  • Is the provider somebody you'd trust with your banking details? As it's likely you'll either be giving them these, or all of the information behind them, in one form or another
  • Cloud economics often rely on some form of content dedupe at the provider's end; you need to satisfy yourself that the supplier's dedupe won't impact your security or encryption
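As promised above, a minimal sketch of the 'user-owned keys' point - encrypting locally before anything leaves the house. The key handling is deliberately naive and the file names are placeholders; it assumes the PyCryptodome library for AES :-

```python
import os
from Crypto.Cipher import AES   # pip install pycryptodome

KEY_FILE = "backup.key"   # keep a copy with a *separate* key-escrow provider

def load_or_create_key():
    if os.path.exists(KEY_FILE):
        return open(KEY_FILE, "rb").read()
    key = os.urandom(32)              # AES-256 key, generated & owned by the user
    open(KEY_FILE, "wb").write(key)
    return key

def encrypt_for_upload(path, key):
    data = open(path, "rb").read()
    pad = 16 - len(data) % 16         # pad to the AES block size
    data += bytes([pad]) * pad
    iv = os.urandom(16)               # the IV is not secret, just unique per file
    return iv + AES.new(key, AES.MODE_CBC, iv).encrypt(data)

key = load_or_create_key()
blob = encrypt_for_upload("family_photos.tar", key)
# hand 'blob' to the cloud backup client; 'backup.key' never leaves your control
```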

But with the above in mind, back to the real topic - the three real points I was wondering about here are a little bit different :-

  • Should you back up your SaaS & PaaS cloud service data to your own local media (ie backing up your part of the cloud)?
What happens to your data (your assets & value) when a service goes down, your account is deleted, the service is hacked, the company vanishes or... Can you back up your Google/Yahoo email to your local home NAS? Can you back up your blog sites & social media pages to your local storage?

Irrespective of how it is done, I'm increasingly of the belief that there is going to be a need for this. The first time it happens it's often to some 'novelty' data, which is irritating but little more; however as people rapidly move their data to cloud services the risk & potential loss become higher...


I'm not saying stop using SaaS services as the prime system (I have a different view re PaaS, but that's another blog) - but if the data is worth something (emotional, financial etc) then my view is that it should always be in two independent places, with one of those in the data owner's direct control.

So I'm wondering when the current generation of home NAS devices will start to include the ability to receive data from remote sites, or to have the ability to obtain that data automatically themselves?
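As a taste of what such a NAS feature could do, here's a rough sketch that pulls webmail down over IMAP to a local path using only the Python standard library - the NAS mount point and credentials are placeholders, and a real implementation would of course fetch incrementally rather than pulling everything :-

```python
import imaplib
import os

NAS_DIR = "/mnt/nas/mail-backup"      # hypothetical local NAS mount point

conn = imaplib.IMAP4_SSL("imap.gmail.com", 993)
conn.login("you@gmail.com", "your-password")
conn.select("INBOX", readonly=True)   # read-only: a backup must never mutate

typ, data = conn.search(None, "ALL")
os.makedirs(NAS_DIR, exist_ok=True)

for num in data[0].split():
    typ, msg_data = conn.fetch(num, "(RFC822)")
    raw = msg_data[0][1]              # the full raw RFC822 message
    with open(os.path.join(NAS_DIR, "msg-%s.eml" % num.decode()), "wb") as f:
        f.write(raw)

conn.logout()
```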
  • Can your cloud backup/recovery partner also back up your social media and SaaS services?
Moving on from the previous point, what I'm thinking about here is that rather than need to use local media, could your cloud backup partner (assuming they are different to your other PaaS/SaaS providers) also provide 'content aware' backups for your other internet data services such as blog sites, Facebook & MySpace sites, Twitter tweets/favourites/friends/followers, webmail and other PaaS / SaaS services etc?
  • Could your cloud backup partner also move into providing a basic 'cloud DR' service?
It's a fairly simple step for a cloud partner to wrap & automate the creation of an AWS EC2 image, load their backup/restore software onto that image, and then allow the customer to restore their data 'as needed' to the EC2 instance - where in turn they can run the usual suite of common apps easily enough... Not earth-changing, but a simple enough value-add that would provide transitory help in some situations...
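For illustration, here's roughly what that wrappering might look like with the boto library's classic EC2 API - the AMI id, key pair and instance type are placeholders (imagine an image pre-loaded with the backup provider's restore agent) :-

```python
import time
import boto

conn = boto.connect_ec2()             # AWS credentials taken from the environment

reservation = conn.run_instances(
    "ami-12345678",                   # hypothetical restore-agent image
    instance_type="m1.small",
    key_name="dr-keypair",
)
instance = reservation.instances[0]

while instance.state != "running":    # poll until the DR host is up
    time.sleep(5)
    instance.update()

print("DR instance ready at %s - point the restore client here"
      % instance.public_dns_name)
```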
Now I'm aware that some of the points above could be twisted into FUD, they certainly aren't intended as that (and I'll be more than grumpy if they do get used as FUD) - they are the questions I ask myself about my personal information storage (especially when an account gets disabled!).

In this topic (like many others) I certainly agree with some of the points that @StorageBod makes in his blog entry at http://storagebod.typepad.com/storagebods_blog/2009/08/information-haze.html re personal information being both of value and dispersed, with little current understanding from the public at large re the potential consequences...

Monday, 31 August 2009

NetApp - Cloud or Fog?

Right, so firstly, despite what it may look like, this isn't a NetApp-bashing blog site or month (in fact I'm grumpy with all vendors - and NetApp are a strategic supplier to me), but I'm afraid I'm a bit lost re NetApp's marketing and 'cloud' and really need to ask a public question :-
"Can anybody give me any detail and substance to NetApp's cloud strategy, technology and deployments?"
I ask this genuinely, as despite asking the company directly for over 2yrs, and despite regularly looking through their blogs and whitepapers myself, I'm finding it really hard to locate any information re NetApp and IaaS, PaaS or SaaS usage or technologies (direct or indirect).

Oh of course I've heard the stories re large company X using ### many filers for search or webmail, and other such things - but frankly these strike me as very much standard uses of the technology, coupled with significant discounts linked to volume & name etc. They don't specifically address the changed requirements of a cloud (thinking IaaS) environment (ie individual assets may have lower availability requirements, object protocols are often needed, much greater connectivity requirements etc).

What I'm looking for info on is :-
1) A technology & commercial model more aligned to a web 2.0/3.0 business, with vast data scale, objects & files, geographical distribution, policy-based mngt, compression & dedupe of data, adequate performance, adequate availability of any physical asset but very high availability of the information, and at a 7yr TCO price point substantially less than today's post-discount per-TB price

2) Any info re an object store capability (although Val Bercovici has already said he's not prepared to pre-announce anything right now)

3) Any info re a direct to market SaaS offering?

4) Any details, info or case studies re genuine cloud companies or web 2.0/3.0 companies that are using or planning to use NetApp in decent sized (ie multi-PB) cloud deployments
I want to like their 'cloud' technology, but frankly speaking, at the moment trying to find out about it is like trying to knit fog (sorry, couldn't resist).

If any NetApp'er or anybody wants to point me towards some substance and fact then I'd be more than grateful :)

Thursday, 27 August 2009

Steve's IT Rants - Supplier Eraser

Steve Duplessie has a well-honed knack of hitting the nail on the head in a style and language that informs, entertains & 'sticks' (if you've seen him present in person you'll know what I mean).

Again I find myself reading his blog at
http://esgblogs.typepad.com/steves_it_rants/2009/08/it-shops-are-starting-to-trim-number-of-vendors-in-earnest.html

I started to write a comment reply to his post, but felt it was more than a reply, so I've drafted something here instead.

Some key points from Steve's blog :-
  • 34% of customers looking to reduce supplier qty
  • Isn't the vendor list being reduced organically through acquisition?
  • Is it good for the market as a whole? I'm not sure it is.
  • It seems it will stifle innovation in exchange for more consistent vendor relationships (consistent does not imply "good," necessarily).
Frankly I'm staggered that it's only 34% and not a much higher number - a lot of people (incorrectly IMHO) associate supplier reduction with cost savings.



Yes, in the short term there can be a reduction in the cost of purchase orders by reducing supplier quantities - with the biggest short-term gain being a move to a single supplier for a given area. Clearly 'value', 'quality' and 'sustainability' are often overlooked in such situations; similarly 'flexibility' is often missed, and a 'cheap deal' suddenly becomes much more expensive when the full picture of lifetime costs & changes is considered. And over time we have traditionally seen these 'savings' eroded as both supplier & customer get 'lazy & comfortable' together, with less emphasis on value.

As far as I can see such deals often also ignore the cost of migrating to the new terms, technologies or processes - the 'adoption' costs can be considerable. People also tend to think of deal durations as a long time, but consider a 3yr deal: 6+ months are taken to 'adopt' the deal after signing, and the renewal position needs to be considered at least 12 months before expiry to allow the annual budget process to permit flexibility in the renewal outcome. So at best I'd say you get 12-18 months of stability (ie potential for 'cost savings') during a 36mth agreement.

So what do I think are some of the driving factors re supplier reduction :-
  • Reduction in resources & overheads in supply chain functions - forcing them to deal with fewer companies at larger deal values.
  • The current 'in vogue' supply chain quality processes have a similar 'better fit' with companies of larger scale and with a larger / more regular financial exchange between customer & supplier
  • As a result of the above, supply chain functions are often rewarded or KPI'd on supplier qty reduction - and personal rewards or KPIs always drive behaviours
  • Customer IT/IS depts are under huge pressure to do more, better, quicker with less - which means less time & resource to accommodate diversity (of products or suppliers)
  • Naturally there is also a major drive from the larger suppliers, with them offering big discounts on deals in order to generate any form of revenue and also elbow out competitors. These 'deals' are increasingly not just technology acquisition price, but are moving to multi-year deals covering tech, services, support & maint etc. It would appear such deals are often 'sold internally' as vendor displacement and are defensive in nature, with the vendor expecting later to cross sell into other areas (sometimes even just to act as a reseller to be able to book revenue), to recoup funds through services or simply to be able to remove a competitor.
Yes, the supplier list is being reduced through acquisition, but I've yet to see this reduce costs to the customer, nor does it always appear beneficial when the acquired technology is killed off or vanishes for years.

Similarly I'm yet to be convinced that a reduction in competition is good for the customer or the industry - the customer loses due to fewer commercial pressures and less reason for suppliers to work with open standards, and the industry loses through lack of innovation and forced-marriage acquisition issues.

My conclusion is that I think there is a lot of short term thinking (and savings benefit claiming) going on both in vendors (revenue now at cost of profit now & revenue in future), and customers (in terms of standards, commercials, leverage & choice) - with a considerable risk to all concerned for the mid-term.

Wednesday, 26 August 2009

NotApp or NetApp?


So after years of us asking and waiting it looks like NetApp have finally made a couple of pre-announcements :-
  1. OnTap v8 http://www.netapp.com/us/products/platform-os/data-ontap-8/
  2. Object storage http://www.theregister.co.uk/2009/08/24/netapp_object_interfaces/
Now I've been a big fan of what NetApp did for storage re:-
  • Storage configuration ease & simplification
  • Single OS/firmware over all products
  • Consistent and compatible capabilities on each product (eg think replication)
  • Their work with Oracle on NFS
  • Use and publication of open storage & system benchmarks
However I also regularly raise concerns re NetApp over :-
  • They are just not a truly global player and struggle to deal with global companies
  • Roadmap & futures disclosure - as much as I have issues with EMC (and I have many) they do technical contact, futures and strategy briefing much much better
  • OnTap GX - has been in the wings for years, and appears to have been a major drain on their dev resources
  • OnTap constraints not matching the increasing scale of the requirements and/or platforms - eg re aggregate max 16TB etc
  • Poor estate mngt tools - prior to the Onaro acquisition these were woeful, and there's still a long way to go for NetApp native tech
  • Too frequent product changes, revisions & models - making interop a pain, and appearing to drive too many codebase versions
  • Poor interface and processes for RFEs (though I've yet to find a storage company that has any worth mentioning)
  • Poor acquisition history re choices & integration execution
  • Lastly, and most importantly, sadly over the last couple of years IMHO they have listened to their own hype too much, and as a consequence have lost touch with real market prices and are unable to prove the value of their benefits (acting very similarly to EMC in the 90s and early 00s)
That all said I still like & recommend their technology! So, knowing above, what are my thoughts re their recent announcements :-

1) Ontap v8
  • DataMotion - looks very interesting, but the devil is in the capability, requirements & constraints details - can anybody provide these yet?
  • PAM II cards are interesting, and a good way to get overall performance improvements without requiring lots of specific configurations, but their value will depend on the € cost, and on mitigating the use of the onboard slots (which reduces either disk loops or network interfaces)
  • NDDC - I've read this 4 times and still think it's purely a Prof Srvs play wrapped in words - can anybody correct me with details?
  • I can't find a public document that compares v7.3 with v8.0 7-mode, so it's very tricky to talk about differences - has anybody seen a public doc?
  • The last time I saw public docs on Ontap v8.x the major features, benefits and improvements came in v8.1 rather than v8.0 - so I'm also rather keen to see what's being disclosed publicly re comparisons between v7.3, 8.0 & 8.1
  • The PDF published on the NetApp website (http://media.netapp.com/documents/ds-2950.pdf) re 8.0 7-mode makes lots of claims re 'lower TCO', 'increased productivity' etc, but I can find nothing about a) what they are comparing to, b) the level of improvement, and c) the justification for these statements
2) Object storage
  • Fundamentally this is good news
  • mid/end 2010 will be too late, if it's not Q4 '09 then the momentum will be elsewhere
  • In the object space the model cares & relies much less on the 'tin', thus the 'OnTap Object' technology will need to exist as a software 'virtual appliance' that can run on commodity hardware (look at the great things done by Caringo in this area)
  • Price point - similarly object storage is expected to have a materially lower price point than SAN or NAS (think DAS price point), very unclear how NetApp will be able to achieve this given their current pricing models
  • There is a lot more to object storage than simply being able to use a REST or SOAP API to R/W objects - look at how long it's taking EMC to get Atmos into shape (and there are some mighty minds on that project)
  • API - if it's not both XAM and EC2 compatible, then frankly don't bother (see the sketch below for what compatibility buys). So when will the API details be published?
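To show what compatibility buys - existing client tooling working unchanged against a vendor's object store - here's a sketch using the boto library pointed at a hypothetical S3-compatible endpoint (I'm reading 'EC2 compatible' as the AWS S3/EC2 API family here; the host name and credentials are placeholders) :-

```python
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

# Point standard S3 tooling at the vendor's object store instead of AWS
conn = S3Connection(
    "access-key", "secret-key",
    host="objectstore.example.com",
    calling_format=OrdinaryCallingFormat(),   # bucket-in-path style URLs
)

bucket = conn.create_bucket("archive")

key = bucket.new_key("reports/2009-q3.pdf")
key.set_contents_from_filename("2009-q3.pdf")    # write an object

key = bucket.get_key("reports/2009-q3.pdf")
key.get_contents_to_filename("restored.pdf")     # read it back
```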
So for me these announcements are nice but nothing more than that, until the details are made public and the code goes GD (rather than GA). Of course what I really want to see is a TCO model comparing the before and after pictures :)

[Disclaimer] I am a NetApp customer and have over a PB of their disk platforms and have access to NDA information (that I will not disclose)

Saturday, 22 August 2009

Objects & Metadata

As usual Dave Graham brings up some interesting and worthwhile topics in his blog post here http://flickerdown.com/2009/08/micro-burst-metadata/#comments

Now, being an ex database programmer ('ex' of anything being of course the very worst and most dangerous type), and of course a storage curmudgeon, I have a passion for the topic of metadata and data. And as somebody having to deal with PBs of object data I naturally have some concerns and views here...

Now normally I agree with Dave on a lot of things - but I have to say I much prefer my scallops to be seared and served on black pudding nice and simply, letting the quality of the flavours shine.



That said, I have to agree with his view that metadata & object storage models can be segmented into two areas - but I do think there is a place (almost essential IMHO) for both models in future storage.

We've seen this area tackled by a number of existing technologies re CAS and object stores (the Caringo CFS gateway onto the Castor object layer is a good example) - but we are only just starting to see these tested by the key new elements, namely vast scale (think EBs), geo-dispersal/distribution/replication and low cost.

I do also think it's worth exploring some of the possible types / layers of metadata; for me this breaks down into :-
  • System / Infrastructure metadata - the metadata mandated by the storage service subsystem for every application using the service and every object held within the storage service. System metadata is under the exclusive control of the storage service subsystem, although it can be referenced by applications & users. Examples: object ID, creation date, security, hash/checksum, storage service SLA attributes (resilience, performance etc).
  • Application metadata - This is the metadata associated with each object that is controlled and required by the application service(s) utilising the object. There may be multiple sets of application metadata for a single object, each only accessible by the approved application.
  • Object metadata - context & descriptive attributes, object history, related objects, optional user extensible metadata
I would expect all 3 types of metadata to be linked with every object, with at least the 'system metadata' always held locally with the object. The 'application metadata' & 'object metadata' may reside in the storage system, the storage service, the application or any combination. (In this context I refer to the storage system as the object store, and the storage service as the object store + metadata store.)

Some of the metadata relates to the application and infrastructure architecture (eg geo-location information re object distribution & replication) whilst some of the metadata are attribute fields used within the application itself.
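To make the three layers concrete, here's how I'd sketch them as plain data structures - the field names are purely illustrative, not a proposed schema :-

```python
an_object = {
    # System/infrastructure metadata: owned by the storage service,
    # immutable to applications, always co-located with the object.
    "system": {
        "object_id": "9f3a-...",
        "created": "2009-08-22T12:00:00Z",
        "checksum": "ab34...",
        "sla": {"resilience": "geo-3-copies", "perf_tier": "standard"},
    },
    # Application metadata: owned by the consuming application(s); there
    # may be several sets, each visible only to its approved application.
    "application": {
        "photo_app":   {"album": "holiday-2009", "faces": ["alice", "bob"]},
        "billing_app": {"account": "ACC-1", "retention_class": "7yr"},
    },
    # Object metadata: context & descriptive attributes, user-extensible.
    "object": {
        "description": "beach at sunset",
        "related": ["9f3b-...", "9f3c-..."],
        "tags": ["family", "travel"],
    },
}
```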

Given the above, it should be clear that I certainly agree with an entry Dave made in his blog comments re :-
"interesting note on ownership to which I'd say that there has to be dual ownership, one from the system level (with immutable meta such as creation date, etc.) as well as mutable data (e.g. user generated meta). The meta db then needs to maintain and track 2 different levels. Policy can affect either, fwiw."
So some thoughts about where to locate metadata as it relates to the object :-
  • As referenced above, I believe 'system metadata' must always reside with the object, as it is used by the storage service for mngt, manipulation and control of the object itself, and to ensure its resilience & availability.
  • As has long been an issue with file-systems, tightly coupling objects and metadata of vastly different sizes always leads to fragmentation of the underlying persistency layer
  • Because the persistency layer must be traversed to retrieve the metadata, there are performance issues with metadata embedded within the object layer - move the metadata to a record-based system and performance & accessibility can increase dramatically (see the sketch after this list)
  • For certain classes of use (eg web 2.0 etc) it's often the metadata that is accessed, utilised & manipulated several orders of magnitude more often than the objects themselves, thus the above improvements in the performance and accessibility of metadata (think SQL query etc) make major differences
  • Clearly if the metadata and objects are held separately the metadata can be delivered to applications without needing to send the objects, similarly the metadata can be distributed separately / in-advance of the object. Thus having major advantages for application scaling and geo-distribution.
  • With the split of persistency location / methods, this also allows security layers to be handled differently for the metadata and the object.
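And as promised, a quick sketch of that decoupled model - metadata in a record store (SQLite here, purely for illustration) queried without ever touching the object persistency layer; only a hit's object ID is then used to fetch the object itself. The schema and values are invented :-

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE object_meta (
    object_id TEXT PRIMARY KEY,
    tag       TEXT,
    location  TEXT    -- which site currently holds the object
)""")
db.executemany(
    "INSERT INTO object_meta VALUES (?, ?, ?)",
    [("obj-1", "invoice", "eu-west"),
     ("obj-2", "photo",   "us-east"),
     ("obj-3", "invoice", "eu-west")],
)

# The web-2.0 pattern: many cheap metadata queries per actual object read
for object_id, site in db.execute(
        "SELECT object_id, location FROM object_meta WHERE tag = ?",
        ("invoice",)):
    print("would fetch %s from the object layer at %s" % (object_id, site))
```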
This also brings into line a question area I've been working on for over 2 years with object stores - what features & functions should live in the application layer, and what should live within the infrastructure (storage service) layer? Which areas of metadata are actually information in their own right, and which are embedded in the application logic? There appear to be no clear rules or guidelines.

If you like, this could be seen as an argument between IaaS & PaaS - and for sure the only sensible answer for a company right now is IaaS; PaaS exposes far more of the logic, taxonomy, behaviours, trends and metadata layers to the PaaS provider than is healthy.

There is also an additional point of interest re metadata - as we move from system metadata into application & object metadata, should we consider privacy and encryption of the metadata itself (assuming that the objects will always be protected appropriately)? I can see how this would be a requirement in some multi-tenancy environments and for some metadata elements...

Lastly some more questions :-
  1. How do you cover the topics of backup/recovery of the various metadata elements?
  2. How do you cope with bulk import / export of the various levels of metadata and their logical / context relationships?
  3. What standards will emerge for metadata schema definitions and attributes?
  4. What standards will emerge for policy script language & descriptors that manipulate within the storage systems? (think how to describe an SLA in a programmatic language)
  5. Can security authorisation & permission tokens exist and be enforced in a separate context and control domain to the identities?
Naturally we're not covering here any of the 'internal' metadata used by the storage system to locate objects, to handle the multiple instances of the same object within an 'object storage service' (resilience, replication etc), or to enable sharding / RSE encoding of objects etc.

Now I'm off for some lunch, I'm hungry and fancy some seafood for some reason ;)

Cheers

Ian



Sunday, 16 August 2009

Storage Security

In this post I'm going to discuss a small area of storage security, specifically the privacy side of the security coin, more specifically array data erase. Right, so in my view security is a very dangerous area for people to wade into, especially storage people.

However when we do dare to wade into this area, my feeling is that storage people often either :-
  1. Totally ignore the topic and hope (the religious Ostrich strategy)
  2. Simply don't understand the topic, have no idea how to assimilate the vast myriad of actual or hyped 'requirements' that impact storage from a security aspect, or frankly don't know who to trust in this area (other than @beaker obviously)
  3. Select and place security technologies in the wrong areas in the mistaken belief that this will help them
Security is always an expensive topic in terms of investment, process and discipline - and generally I'd argue that often a bad security design, or technology, is actually worse (& more expensive) than no security.

However one interesting security technology that I do think has a use in the storage array area is the TCG SSC Opal standards work, which should offer another option in the 'data erase' sector.

With it already often taking "well over a week" to securely erase a current 100TB array, just how long do you think it will take to secure erase a 2PB disk array using current methods, and at what cost? For those companies disposing of, or refreshing, 10s->100s of arrays every year, this is a major & expensive pain.

My understanding of one element of TCG SSC Opal is that each individual disk uses standards-based encryption (AES-128 or AES-256) to encrypt all data stored on that disk. Further, it supports multiple storage ranges, each with its own authentication and encryption key. The range start, range length, read/write locks, and the user read/write access control for each range are all configurable by the administrator. Thus to 'erase' the data, only the keys need to be revoked and destroyed within the drive.
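To illustrate why this is so fast, here's a conceptual sketch of crypto-erase in ordinary Python (assuming the PyCryptodome AES module) - a real Opal drive does this in firmware, per configured range; this just models the core idea that destroying the key is equivalent to erasing the data :-

```python
import os
from Crypto.Cipher import AES   # pip install pycryptodome

class EncryptedRange(object):
    """One band of a self-encrypting disk, with its own AES-256 key."""
    def __init__(self):
        self.key = os.urandom(32)
        self.blocks = {}

    def write(self, lba, data16):   # data16: one 16-byte block
        iv = os.urandom(16)
        self.blocks[lba] = iv + AES.new(self.key, AES.MODE_CBC, iv).encrypt(data16)

    def read(self, lba):
        iv, ct = self.blocks[lba][:16], self.blocks[lba][16:]
        return AES.new(self.key, AES.MODE_CBC, iv).decrypt(ct)

    def crypto_erase(self):
        self.key = None             # revoke the key: no overwrite pass needed

r = EncryptedRange()
r.write(0, b"sixteen byte blk")
print(r.read(0))                    # plaintext comes back while the key exists
r.crypto_erase()                    # instant 'erase' - the ciphertext remains
                                    # on disk but is now computationally noise
```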

Problems addressed
  • Failed / failing disks - Allowing data on failing disks to be 'erased' rapidly as part of the disk swap process.
  • Technology refresh & array disposal - clearly before an array and its disks can be exited from a company the data on the disks must be rendered inaccessible, incurring considerable cost and time. Sometimes this results in physical destruction of the array and disks, preventing any possible credit / resale value.
  • Array relocation - increasingly it's a requirement to secure erase an array prior to moving it to an alternate location within the same company. Again incurring additional cost and time delays for the relocation.
  • Lack of standards - sure, there's the U.S. Department of Defense 5220.22-M specification document, but this isn't an international standard, and it is open to interpretation.
Benefits
  • Standards - this will provide an industry-standard method against which vendors & technologies can be measured and operational practices audited. It should also help reduce the FUD from technology sales in this area.
  • Scalability - unlike other 'in band' encryption technologies, this solution scales linearly and independently with each disk in the system, introducing no bottlenecks or SPOFs.
  • Time to erase - this is a major pain as array capacities grow, particularly in migrations where the data on the array must be securely erased prior to decommissioning, extending the duration that power/cooling etc is needed. Anything that improves this on-site timeline is a significant benefit.
  • Reliability - a strange one I know, but a fair % of enterprises do not allow their failing disks to be returned to the manufacturer under support or warranty, preferring instead to physically destroy such disks and pay for new ones - thus denying the manufacturer the RCA process and not contributing to continual improvement. If the data on the disk is useless (according to an agreed standard, and at no cost/time impact to the customer) then these disks may go back into the RMA process, to the benefit of all.
  • Security - by providing an easy-to-use mechanism, the technology should see increased adoption and hence an overall improvement in the security of data in this area.
  • Cost reduction - clearly anything that saves time reduces some elements of cost, but this should also make a fair dent in additional technology sales and professional services costs. Similarly, it should reduce the need to physically destroy disks & arrays during refresh projects, and thus expand the resale market / opportunity.
The questions I want to know answers to though are :-
  • Why haven't I been hearing about this from array manufacturers in their 'long range' roadmaps?
  • When will we see the first array manufacturer supporting this? (the disk manufacturers are already doing so and shipping products)
  • What will be the cost uplift for this technology in disk & array?
  • When will the first customers mandate this in their storage RFx requirements?
Clearly this is only going to help for 'data at rest', on newer disk drive models & arrays, and not for portable media etc - but it's a step in the right direction.

Yes there is a need for more 'intelligence' in the disk drive firmware to ensure that latency & throughput levels are maintained. Yes there is work on the array needed for mngt control interfaces and KMS relationships etc. But I want to know more and get answers to my questions above :)

Some links for further reading on TCG SSC Opal :-
http://www.trustedcomputinggroup.org
http://www.trustedcomputinggroup.org/developers/storage
http://www.trustedcomputinggroup.org/developers/storage/specifications
http://www.trustedcomputinggroup.org/developers/storage/faq
http://www.trustedcomputinggroup.org/files/static_page_files/B1105605-1D09-3519-AD6FD7F6056B2309/Opal_SSC_FAQ_final_Jan_27_4_.pdf