December 30, 2009 -- Ever since I’ve been involved with storage, the topic of solid-state disk (SSD) drives has come up in passing from time to time. But my first real hint at how disruptive SSDs were going to be came when I was attending C-Drive, Compellent's annual user conference, in the spring of 2008.
One of the speakers, a Compellent user, was lamenting how difficult it is to optimize a disk array for database performance, and describing how Compellent’s storage system had helped him in that area by striping data across the array. One of the Compellent presenters then asked the user if he would consider using SSDs to improve performance, and the user responded positively, which I think caught the presenter off-guard.
The presenter then turned to the audience of end-users and asked them how many would consider implementing SSD technology. Well over 50% of them raised their hands, which again appeared to catch the presenter by surprise. At that point I knew SSD technology was on the cusp of becoming the next big thing in storage, and it was not long after that conference that Compellent announced support for SSDs.
Heading into 2010, SSDs are gaining end-user interest, as well as vendor hype. Already they’re being hailed as the end of high-performance disk drives, and some observers predict that SSDs will eventually replace all disk drives - possibly as soon as the end of this coming decade.
Whether or not that comes to pass remains to be seen. SSD technology is still maturing, and prices have to come down to the point where most businesses can justify and afford deployment.
There are three notable ways that SSDs were positioned for use in the enterprise during 2009, making SSDs my third major trend of 2009. [To read about the other two trends, see “2009: The Beginning of the Corporate Love Affair with Cloud Storage” and “Deduplication is the Big Success Story of 2009.”]
First, most SSDs are being initially deployed on external networked storage systems. Vendors such as Compellent, Dot Hill, EMC, HDS, Pillar Data Systems and others have added support for SSDs to their storage systems or can virtualize external SSD systems (e.g., NetApp's support for Texas Memory Systems’ RamSan 500).
Installing SSD drives, from providers such as STEC or Pliant Technologies, in existing storage systems was certainly one of the fastest and easiest ways to bring SSDs to market in 2009. However, I’m already questioning how long this trend of putting SSDs into storage systems will last before companies figure out that they’re not getting the full range of anticipated benefits.
This scenario may take two or three years to play out, but what I eventually see emerging as the problem with putting SSDs into storage systems is the storage network itself.
Most networked storage infrastructures in use today are Ethernet, Fibre Channel, or a mix of both. Unfortunately, neither technology - even in its latest version - currently offers sufficient bandwidth to deliver on the full performance potential of SSDs.
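To put rough numbers on that bandwidth claim, here is a back-of-the-envelope sketch. The per-drive and per-link throughput figures below are my own illustrative assumptions, not vendor specifications, but they show how quickly even a single shelf of SSDs can outrun one storage link.

```python
# Back-of-envelope comparison: aggregate SSD throughput vs. one storage link.
# All throughput figures are illustrative assumptions, not measured numbers.

MB_PER_S_PER_SSD = 250   # assumed sustained throughput of one enterprise SSD
NUM_SSDS = 8             # assumed number of SSDs in a single array or shelf

# Approximate usable throughput of common storage links, in MB/s (assumed).
LINKS_MB_PER_S = {
    "8Gb Fibre Channel": 800,
    "10Gb Ethernet (iSCSI/FCoE)": 1000,
    "InfiniBand QDR 4x": 3200,
}

aggregate = MB_PER_S_PER_SSD * NUM_SSDS
print(f"Aggregate SSD throughput: {aggregate} MB/s")

for link, usable in LINKS_MB_PER_S.items():
    print(f"{link}: ~{usable} MB/s usable -> SSDs exceed it by {aggregate / usable:.1f}x")
```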
One technology that seems best positioned to break this bottleneck is InfiniBand. The problem is that no one outside of the high performance computing (HPC) space seems ready to commit to this protocol or even take a serious look at it. However, the forthcoming network bandwidth logjam is already recognized within storage circles and, in talking with CTOs of a number of storage providers, some are already testing InfiniBand with their storage systems.
Second, SSDs are showing up in servers. In a reversal of the trend of the past decade, storage is moving back inside servers. This trend is being largely (and I would say almost exclusively) driven by Fusion-io. While I have written a number of blogs in the past about Fusion-io's ioDrives and their disruptive nature, that’s only part of the story. (Fusion-io puts SSDs on PCI-Express cards that are installed inside a server to achieve higher performance than even high-end storage systems can deliver.)
The rest of the story has to do with the elimination of cost and complexity. Many storage networks are still neither easy nor intuitive to manage, especially in high-end storage environments.
Fusion-io eliminates the need, and the associated expense, of deploying a storage network at all. While the ioDrive reintroduces the old problem of being unable to easily share unused storage capacity with other servers, its design is so compelling (especially when viewed in the context of cloud computing and cloud storage architectures) that the ioDrive will be a force in the coming years - maybe more so than any other SSD architecture.
Third, SSDs are moving into the storage network. This approach is currently being championed by Dataram with its XcelaSAN product. What makes this approach so different from the other SSD architectures is that it’s not inside the server or the storage system, but resides in the storage network between the servers and back-end storage.
The argument that Dataram makes for this implementation of SSD is that approximately 5% of the data on a storage system is active at any one time, yet most storage systems have only a fraction of that much cache. For instance, a disk array holding 20TB of data would have roughly 1TB of active data and, in theory, should have about 1TB of cache in order to provide the read and write performance that high-end applications need.
A quick check will reveal that many midrange storage systems are lucky to offer a fraction of that, with most supporting in the range of 3GB to 5GB of cache. Dataram argues that by putting its XcelaSAN in front of an existing midrange storage system, it can supply that missing cache capacity, turning a midrange storage system into a high-end solution at a fraction of the cost.
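Here is a minimal sketch of that sizing math, assuming the 5% active-data rule of thumb cited above; the capacity and cache figures are illustrative examples, not numbers from Dataram.

```python
# Sketch of the cache-sizing argument using the 5% "active data" rule of thumb.
# Capacity and cache figures are illustrative assumptions.

def working_set_tb(total_capacity_tb, active_fraction=0.05):
    """Estimated active working set (TB) that ideally fits in cache."""
    return total_capacity_tb * active_fraction

total_capacity_tb = 20.0          # example midrange array
controller_cache_tb = 4 / 1024.0  # ~4GB of controller cache, expressed in TB

working_set = working_set_tb(total_capacity_tb)
print(f"Active working set: ~{working_set:.1f} TB")
print(f"Controller cache:   ~{controller_cache_tb * 1024:.0f} GB")
print(f"Working set is roughly {working_set / controller_cache_tb:.0f}x the cache size")
```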
Not a bad idea, but there are some hurdles to overcome. First, Dataram needs to convince users that placing its device inline between the server and back-end storage is a good idea.
Another hurdle the company faces is from midrange storage system providers. Depending on how Dataram positions its product, why should vendors such as Compellent, Pillar and other midrange providers agree to play second fiddle to Dataram and sit behind its solution, especially if they are offering SSD in their own systems? Further, SSD is becoming lucrative, which means that the disk-array providers would be giving up the big margins to Dataram.
Another potential obstacle Dataram faces is from traditional storage networking providers such as Brocade and Cisco. These guys never take kindly to anyone intruding on what they view as their turf, so if Dataram garners any momentum at all, expect these heavyweights to step in and try to snuff it out. So while I like both Dataram's architecture and its argument for why deploying SSD in the network makes sense, convincing the channel to sell it, end users to buy it, midrange storage system providers to endorse it, and storage networking providers to leave it alone is going to be an uphill challenge.
All this said, SSD gained solid momentum and mind share among end-users in 2009 and is poised to emerge as a major trend in 2010. However, which of these architectures becomes the predominant one remains to be seen. In the near term (2010 to 2011) I’m placing my bets on storage system solutions, but until technologies such as those from Fusion-io reach critical mass, and others from vendors such as Dataram are tested in the market, the real impact of SSD is yet to come.
To read more of Jerome Wendt’s blogs, visit the DCIG web site.
Sunday, December 20, 2009
Who are the (perceived) leaders in storage virtualization?
December 21, 2009 – IT Brand Pulse recently surveyed 146 IT professionals, asking them which vendors they perceived as leaders in storage virtualization. In addition to the overall leader, other categories included price, performance, reliability, service and support, and innovation.
IT Brand Pulse gave the survey participants 12 vendor choices in each category: 3PAR, DataCore, Dell, EMC, FalconStor, Hitachi, HP, IBM, LSI, NetApp, Symantec and VMware.
And the winners are . . .
EMC took top honors as the perceived leader in the overall storage virtualization market, and also took the top spot in the reliability and service/support categories, while placing second in the innovation and performance categories.
NetApp took the top spots in innovation and performance and, strangely enough, placed second in all other categories.
Not surprisingly, Dell was #1 in the price category and, surprisingly, was #3 in the market leader and performance categories.
VMware was #3 in price and innovation, while IBM placed third in reliability and service/support.
The other seven vendors each got less than 10% of the vote in all of the categories.
Here are the win, place, show vendors in each category, with the percentage of end-user votes they received.
Market leader: EMC (26%), NetApp (19.9%), Dell (12.3%)
Price: Dell (27.4%), NetApp (22.6%), VMware (17.1%)
Performance: NetApp (24.7%), EMC (21.2%), Dell (11%)
Reliability: EMC (28.1%), NetApp (15.8%), IBM (12.3%)
Service and support: EMC (28.8%), NetApp (19.9%), IBM (11.6%)
Innovation: NetApp (25.3%), EMC (21.9%), VMware (11%)
Frank Berry, IT Brand Pulse’s CEO and senior analyst – and infostor.com’s newest blogger – recently blogged about storage virtualization. I like his lead:
“They say humans recognize the smell of chili, but dogs can detect the smell of different spices in the chili. Similarly, humans recognize storage virtualization, but industry experts see clear distinctions between the different types of storage virtualization.”
Frank goes on to outline the first two phases of storage virtualization, and provides a glimpse into the emerging third phase (heterogeneous storage virtualization in the cloud). And he also gives his view on which vendor was first to market with storage virtualization. Hint: It was in 1970.
Read Frank’s blog here.
Monday, December 14, 2009
Can Citrix, Microsoft put a dent in VMware? (and is storage the hammer?)
December 14, 2009 – According to a recent Fortune 1000 end-user survey conducted by TheInfoPro research firm, users are increasingly considering alternatives, or additions, to VMware.
In TheInfoPro’s survey, just over 75% of the respondents currently use VMware. However, nearly two-thirds of the companies have tested a hypervisor other than VMware, with Microsoft and Citrix cited most often, followed by Red Hat. Of those who have tested a VMware alternative, 27% plan to use the alternative, while an additional 20% say they “may” use the alternative.
On the other hand, when VMware users were asked if they would switch to an alternative, only 2% cited firm plans and an additional 9% were considering it. Bob Gill, TheInfoPro’s managing director of server and virtualization research, concludes that “The analysis reveals that VMware users aren’t switching away from VMware, but [many] are embracing competing technologies in heterogeneous deployments.”
One possible scenario is that VMware will retain its hegemony in production environments, while vendors such as Citrix and Microsoft make inroads through deployments in development and testing scenarios.
In comparing the relative strengths and weaknesses of the various virtual server platform alternatives, one often-overlooked area is storage – not third-party storage optimized for virtual environments but, rather, the native storage tools that come from the virtualization vendors themselves.
Could storage be the hammer that enables vendors such as Citrix and Microsoft to put a dent in VMware? Well, it’s only part of the overall puzzle, but it’s an increasingly important one as server/storage administrators rapidly grow their virtual environments, winding up with virtual server “sprawl” and the associated storage challenges.
I recently read a Solution Profile (white paper) written by the Taneja Group that takes an in-depth look at Citrix’s StorageLink, which is part of Citrix Essentials. Taneja analysts contrast two approaches to storage in the context of virtual servers: those that attempt to replicate, and those that integrate with, the enterprise storage infrastructure. The former approach is referred to as ‘monolithic,’ and the latter is referred to as an ‘assimilated’ approach (characterized by Citrix StorageLink).
If you’re grappling with the storage issues in your virtualization environment, and/or considering virtual server alternatives, check out the Taneja Group’s “Multiplying Virtualization Value by the Full Power of Enterprise Storage with Citrix StorageLink.”
Tuesday, December 8, 2009
We’re looking for end-user bloggers
December 8, 2009 -- InfoStor.com is looking for end users, as well as channel professionals (integrators and VARs), to guest blog on our site. Our goal is to develop a greater sense of community among infostor.com visitors.
You can blog as frequently or infrequently as you like, with either opinion pieces or, better yet, success stories (case studies). Share your thoughts, insights and solutions with other storage professionals in a colleague community where you can gain experience through the experience of other storage specialists.
Contributing will also call attention to your, and your company’s, areas of expertise and innovation.
If you're interested in participating, contact me at daves@pennwell.com or give me a call at (818) 484-5645 to discuss.
And follow us on Twitter at InfoStorOnline
And on LinkedIn at the InfoStor Group.
-Dave
Monday, December 7, 2009
Cloud storage predictions for 2010
December 7, 2009 – As much as we might rail against the terminology itself, and despite the rampant hype around the technology, cloud storage is here to stay. And at least for a while, the seemingly hundreds of cloud storage providers can survive (pending the inevitable shakeout), jockeying and jousting for position and differentiation.
As cloud storage garners the attention of both large and small IT organizations, technological advances will come rapidly next year.
I recently spoke to Sajai Krishnan, ParaScale’s CEO (and fellow skiing enthusiast) about what to look for in cloud storage in 2010. Here are his predictions and comments.
Virtualization will drive adoption of private clouds in the enterprise. Virtualization has driven huge efficiencies in organizations but the weak link today is the storage infrastructure behind virtualized servers. The need to eliminate the SAN bottleneck and automate provisioning, configuration, management and recovery across the compute and storage tiers will drive enterprises to begin to adopt cloud storage.
“The SAN performance issue is the controller,” says Krishnan. “VMware has a best practices for storage page that starts by saying you need to figure out what the I/O requirements are for each of your applications. Any storage administrator would love to have that information, but admins often don’t have a clue about the I/O requirements of their VMized applications.”
“You have controller bottlenecks. How many disks are you going to stack up behind the controller as you scale out? It’s much more manageable when you can buy cheap servers like popcorn and just keep scaling out [in a cloud architecture].”
Service providers will start to unify cloud computing and cloud storage. In 2009 service providers began expanding cloud storage offerings. The cost and management efficiencies provided a much-needed storage alternative for consumers, SMBs and enterprises. To further drive efficiencies, cost savings and management ease of use, service providers will begin executing VMs directly on cloud storage platforms. Unifying cloud compute and storage infrastructure will also provide a platform for service providers to build new value-added services.
“With large cloud storage implementations you discover that 90% of the data is cold, and you have a lot of CPUs idle,” Krishnan explains. “Some people are looking at running hypervisors and VMs on the storage nodes.”
The middle tier emerges. The strategic importance of a low-cost, self-managing tier that provides a platform for analysis and integrated applications will emerge in organizations with large stores of file data. This middle tier will give administrators the opportunity to automate storage management and optimize for performance and cost; support large-scale analysis while eliminating related data migration and administrative tasks; and enable “cloud bursting,” the seamless ability for service providers to offer spillover capacity and compute to enterprises.
“The middle tier will expand significantly next year,” says Krishnan. “Traditionally, you have the processing tier and the archive tier. But now you’ll need a smart middle tier that’s going to keep the 90% of data. The archive tier is ok, but it’s very inflexible and could be proprietary. The middle tier is priced like the archive tier, it’s persistent, and it’s ‘burstable.’”
Krishnan says that ‘cloud bursting’ refers to the ability to burst into the public cloud for certain periods of time, rather than using the public cloud as the permanent repository for your data (although if you Google the term you’ll wind up with altogether different discussions, and clips from the film The Men Who Stare at Goats).
Commodity hardware displaces proprietary storage. “Just as Linux displaced expensive server gear with its attractive commodity footprint, Linux-based cloud storage will displace expensive legacy storage for the same reason: it gives the user choice and it’s inexpensive, highly scalable and easy to manage,” says Krishnan.
Opex, not capex, will emerge as the most important criterion driving storage purchases. “Management and operating costs will figure much more prominently in storage decisions in 2010. Maintenance costs on existing gear will be under heavy review with the emergence of commodity-based hardware storage options. Customers will look for a storage platform that is self-managed – including the ability to monitor, configure, manage and heal itself,” according to Krishnan.
Cloud becomes an action verb. “We’ve already seen ‘cloud’ taken to new heights as an overused adjective and noun. In 2010, marketers will outdo themselves by clouding the landscape with more product names and descriptions. Admittedly, this prediction is somewhat tongue-in-cheek, but unfortunately fairly close to the mark,” says Krishnan.
(ParaScale is a cloud storage software provider that recently released the 2.0 version of its software. See “ParaScale beefs up cloud storage platform.” )
Wednesday, December 2, 2009
Flexibility is a prerequisite to avoiding storage cloud lock-in
December 2, 2009 - Note: This post is a guest blog from Jerome Wendt, president and lead analyst with DCIG Inc. and a former IT professional. DCIG provides analysis for hardware and software companies in the data storage and electronically stored information (ESI) industries.
Flexibility is a prerequisite to avoiding storage cloud lock-in
By Jerome M. Wendt
Google searches for "cloud storage" started to take off in late 2007, according to a Google Trends report, and have only increased since. But as organizations move from searching for and reading about cloud storage to actually selecting cloud storage solutions, they need to ensure that those solutions offer flexibility in order to avoid public storage cloud lock-in.
Cloud storage is breaking down into two distinct camps: private and public storage clouds. Private storage clouds were discussed in a previous DCIG blog in terms of the attributes that they possess and what is motivating organizations to adopt them.
Public storage clouds are a different animal. They align more closely with the common definition of "cloud storage," but are developing their own set of characteristics that should influence which public storage cloud offering an organization selects. One such feature that an organization may not initially consider in the evaluation process is the flexibility of the public storage cloud solution.
Understanding a public storage cloud's flexibility is not as intuitive as it may sound. Most organizations assume they will fall into the category of being either a public storage cloud consumer or a provider. A public storage cloud consumer obtains storage on demand from a public storage cloud provider. Conversely, a public cloud storage provider may only offer cloud-based storage services to subscribing clients.
A few organizations may fall into a third category that both subscribes to and provides a public storage cloud. This category is primarily reserved for those few enterprise organizations, such as telcos, that may have the infrastructure, IT staff and business model to support offering a public storage cloud that serves both internal and external clients.
However, a fourth category also exists that organizations may fall into sooner rather than later. For example, an organization starts out as a public cloud storage consumer but eventually decides to become a public storage cloud provider itself, driven by data inaccessibility, increasing costs, or inadequate capacity (network or storage) from its current provider. At that point, the organization has a very real desire to go it alone, only to find out that it is locked in because the technology its provider uses is not available for it to purchase.
Already this issue is surfacing among organizations that have subscribed to online backup offerings -- the forerunners of the current public storage cloud trend. The challenge that early consumers of online backup services are encountering is that as they store more data with a provider, their costs inevitably go up.
So while using an online backup service is initially more economical than doing backup in-house, as an organization stores more backup data with a provider it eventually reaches the point where the annual cost of the monthly service exceeds what it would cost to provide the service itself. At that point the organization's options are extremely limited, since it cannot purchase the online backup software. It is for this reason that some online backup providers are making their software available for purchase -- not just to service providers but to enterprises as well -- so organizations have the option to host online backup services themselves.
Organizations should expect to eventually face a similar decision when storing their data with a public storage cloud provider. While they may initially want to store data with a public storage cloud provider for ease of startup, if data availability, increased monthly storage costs, or inadequate performance becomes an issue, having the option to bring their public storage cloud in-house can start to look very appealing. However, that may only be a possibility if the underlying public cloud storage solution that the provider uses is available for purchase by the organization.
This is already happening to some degree. For example, Symantec is designing its new FileStore product so that either enterprises or service providers that wish to offer cloud storage services can procure it. This gives organizations the flexibility to first use public cloud storage from a provider that runs FileStore as the underlying technology, while retaining the option to purchase FileStore directly from Symantec and implement it in-house should they ever decide to do so. (Note: FileStore in its first release is intended for use only as a private storage cloud, though Symantec plans to offer a public storage cloud option for FileStore in the future.)
Public storage clouds provide organizations with some exciting new options for lowering their upfront storage costs while also meeting immediate storage needs. However, public storage cloud technology is not a panacea and organizations should carefully weigh their options as they seek a provider for this service. As part of the decision, organizations need to determine if the flexibility to purchase the underlying technology that powers their provider's public cloud storage offering will eventually become a requirement. Based on what we have already seen in the online backup space, having this flexibility is an absolute necessity.
Jerome Wendt is the president and lead analyst with DCIG Inc.
Tuesday, December 1, 2009
How many terabytes do you manage?
December 1, 2009 – Forrester Research analyst Andrew Reichman recently wrote a report that provides a lot of advice on how to use key performance indicators (KPIs) to measure the efficiency of your storage operations/personnel, noting that building an effective storage environment is a balancing act between performance, reliability and efficiency.
Cruise to How Efficient is Your Storage Environment to read the full report ($499), which was co-authored by Forrester analysts Stephanie Balaouras and Margaret Ryan, but one area of the report that jumped out at me was the section on the TB/FTE metric. (FTE stands for either full-time employee or full-time equivalent.)
Basically, TB/FTE is a measure of the quantity of primary storage per staff member. To get the number, you divide the total number of raw TBs by the total number of permanent, contract, and professional-services employees engaged in managing storage.
Forrester’s Reichman notes that “a few years ago the accepted best practice benchmark for TB/FTE hovered around 35 to 40, but since then most storage environments have grown significantly without adding staff.” Of course, that’s due partly to the IT mandate, or mantra, “to do more with less.” According to Reichman, the best practice benchmark for TB/FTE today is approximately 100.
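If you want to run the numbers for your own shop, here is a minimal sketch of the calculation; the capacity and headcount figures are made-up examples, not numbers from the Forrester report.

```python
# TB/FTE: raw terabytes of primary storage per full-time equivalent managing storage.
# The capacity and headcount below are made-up example figures.

def tb_per_fte(raw_tb, fte_count):
    """Raw TB under management divided by storage staff (permanent + contract + services)."""
    return raw_tb / fte_count

raw_tb = 450.0        # total raw capacity under management, in TB
storage_ftes = 4.5    # e.g., four full-time staff plus a half-time contractor

print(f"TB/FTE = {tb_per_fte(raw_tb, storage_ftes):.0f}")  # ~100, near the current benchmark
```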
Assuming that you calculate metrics such as TB/FTE (and I realize that most firms don’t), drop me an email at daves@pennwell.com and let me know what the TB/FTE metric is at your shop. Of course, I’ll keep the responses anonymous, but I’d like to get a feel for how many terabytes are actually under management per employee for a future post or article.