December 30, 2009 -- Ever since I’ve been involved with storage, the topic of solid-state disk (SSD) drives has come up in passing from time to time. But my first real hint at how disruptive SSDs were going to be came when I was attending C-Drive, Compellent's annual user conference, in the spring of 2008.
One of the speakers, a Compellent user, was lamenting how difficult it was to optimize a disk array for database performance and how Compellent’s storage system had helped him in that area by striping data across the array. One of the Compellent presenters then asked the user if he would consider using SSDs to improve performance and the user responded positively, which I think caught the presenter off-guard.
The presenter then turned to the audience of end-users and asked them how many would consider implementing SSD technology. Well over 50% of them raised their hands, which again appeared to catch the presenter by surprise. At that point I knew SSD technology was on the cusp of becoming the next big thing in storage, and it was not long after that conference that Compellent announced support for SSDs.
Heading into 2010, SSDs are gaining end-user interest, as well as vendor hype. Already they’re being hailed as the end of high-performance disk drives, and some observers predict that SSDs will eventually replace all disk drives - possibly as soon as the end of the coming decade.
Whether or not that comes to pass remains to be seen. SSD technology is still maturing, and prices have to come down to the point where most businesses can justify and afford deployment.
There are three notable ways that SSDs were positioned for use in the enterprise during 2009, making SSDs my third major trend of 2009. [To read about the other two trends, see “2009: The Beginning of the Corporate Love Affair with Cloud Storage” and “Deduplication is the Big Success Story of 2009.” ]
First, most SSDs are being initially deployed on external networked storage systems. Vendors such as Compellent, Dot Hill, EMC, HDS, Pillar Data Systems and others have added support for SSDs to their storage systems or can virtualize external SSD systems (e.g., NetApp's support for Texas Memory Systems’ RamSan 500).
Installing SSD drives, from providers such as STEC or Pliant Technologies, in existing storage systems was certainly one of the fastest and easiest ways to bring SSDs to market in 2009. However, I’m already questioning how long this trend of putting SSDs into storage systems will last before companies figure out that they’re not getting the full range of anticipated benefits.
This scenario may take two or three years to play out, but what I eventually see emerging as the problem with putting SSDs into storage systems is the storage network itself.
Most networked storage infrastructures in use today are Ethernet, Fibre Channel, or a mix of both. Unfortunately, neither one of these technologies - even the latest versions - currently offers sufficient bandwidth to deliver on SSD's potential.
One technology that seems best positioned to break this bottleneck is InfiniBand. The problem is that no one outside of the high performance computing (HPC) space seems ready to commit to this protocol or even take a serious look at it. However, the forthcoming network bandwidth logjam is already recognized within storage circles and, in talking with CTOs of a number of storage providers, some are already testing InfiniBand with their storage systems.
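To put rough numbers on that bottleneck, here’s a back-of-the-envelope sketch in Python. The per-SSD throughput figure and the link bandwidths are assumed, round numbers used purely for illustration, not measurements or vendor specs, but they show why a handful of SSDs can swamp a single Fibre Channel or Ethernet link while InfiniBand offers considerably more headroom:

```python
# Back-of-the-envelope: how many SSDs does it take to saturate one front-end link?
# All bandwidth figures below are rough, assumed values for illustration only.

LINK_MB_PER_SEC = {
    "8Gb Fibre Channel": 800,     # approx. usable throughput
    "10Gb Ethernet": 1250,        # raw line rate
    "QDR InfiniBand (4x)": 3200,  # approx. usable throughput
}

SSD_MB_PER_SEC = 250              # assumed sustained throughput of one enterprise SSD

for link, bandwidth in LINK_MB_PER_SEC.items():
    drives_to_saturate = bandwidth / SSD_MB_PER_SEC
    print(f"{link:22s}: ~{drives_to_saturate:.0f} SSDs saturate the link")
```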
Second, SSDs are showing up in servers. In a reversal of the direction the industry has taken over the last decade, storage is moving back inside servers again. This shift is being largely (and I would say almost exclusively) driven by Fusion-io. While I have written a number of blogs about Fusion-io's ioDrives in the past, and their disruptive nature, that’s only part of the story. (Fusion-io puts SSDs on PCI Express cards that are placed inside a server to deliver higher performance than high-end storage systems can achieve.)
The rest of the story has to do with the elimination of cost and complexity. Many storage networks are still neither easy nor intuitive to manage, especially in high-end storage environments.
Fusion-io eliminates the need, and the associated expense, to deploy any storage network at all. While the ioDrive reintroduces the old problem of not being able to easily share unused, excess storage capacity with other servers, its design is so compelling (especially when viewed in the context of cloud computing and cloud storage architectures) that the ioDrive will be a force in the coming years - maybe more so than any other SSD architecture.
Third, SSDs are moving into the storage network. This approach is currently being championed by Dataram with its XcelaSAN product. What makes this approach so different from the other SSD architectures is that it’s not inside the server or the storage system, but resides in the storage network between the servers and back-end storage.
The argument that Dataram makes for this implementation of SSD is that only about 5% of the data on a storage system is active at any one time, yet most storage systems have only a fraction of the cache needed to hold that working set. For instance, a disk array holding 20TB of data has roughly 1TB of active data and, in theory, should have about 1TB of cache in order to provide the read and write performance that high-end applications need.
A quick check will reveal that many midrange storage systems are lucky to offer a tiny fraction of that, with most supporting in the range of 3GB to 5GB of cache. Dataram argues that by putting its XcelaSAN in front of an existing midrange storage system, it can supply that missing cache and turn a midrange storage system into a high-end solution at a fraction of the cost.
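To make the arithmetic behind Dataram's pitch concrete, here’s a minimal sketch in Python. The array capacity and cache figures are illustrative assumptions drawn from the rough numbers above, not specifications for any particular product:

```python
# Rough cache-sizing sketch based on the "~5% of data is active" rule of thumb.
# The figures below are illustrative assumptions, not vendor specifications.

ACTIVE_DATA_FRACTION = 0.05          # ~5% of stored data is active at any one time

def theoretical_cache_tb(total_capacity_tb: float) -> float:
    """Cache (in TB) needed to hold the active working set of an array."""
    return total_capacity_tb * ACTIVE_DATA_FRACTION

array_capacity_tb = 20.0             # a 20TB midrange disk array (assumed)
actual_cache_gb = 4.0                # typical midrange cache: ~3GB to 5GB (assumed)

needed_tb = theoretical_cache_tb(array_capacity_tb)    # 1.0 TB
shortfall = needed_tb * 1024 / actual_cache_gb         # ~256x gap

print(f"Working-set cache needed: {needed_tb:.1f} TB")
print(f"Cache actually on board:  {actual_cache_gb:.0f} GB")
print(f"Gap: roughly {shortfall:.0f}x")
```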
Not a bad idea, but there are some hurdles to overcome. First, Dataram needs to convince users that placing its device inline between the server and back-end storage is a good idea.
Another hurdle the company faces is from midrange storage system providers. Depending on how Dataram positions its product, why should vendors such as Compellent, Pillar and other midrange providers agree to play second fiddle to Dataram and sit behind its solution, especially if they are offering SSD in their own systems? Further, SSD is becoming lucrative, which means that the disk-array providers would be giving up the big margins to Dataram.
Another potential obstacle Dataram faces is from traditional network storage providers such as Brocade and Cisco. These guys never take kindly to anyone intruding on what they view as their turf, so if Dataram garners any momentum at all, expect these heavyweights to step in and try to snuff it out. So while I like both Dataram's architecture and argument as to why deploying SSD in the network makes sense, convincing the channel to sell it, end-users to buy it, midrange storage system providers to endorse it, and storage networking providers to leave them alone is going to be an uphill challenge.
All this said, SSD gained solid momentum and mind share among end-users in 2009 and is poised to emerge as a major trend in 2010. However, which of these architectures becomes the predominant one remains to be seen. In the near term (2010 to 2011) I’m placing my bets on storage system solutions, but until technologies such as those from Fusion-io reach critical mass, and others from vendors such as Dataram are tested in the market, the real impact of SSD is yet to come.
To read more of Jerome Wendt’s blogs, visit the DCIG web site.
Sunday, December 20, 2009
Who are the (perceived) leaders in storage virtualization?
December 21, 2009 – IT Brand Pulse recently surveyed 146 IT professionals, asking them which vendors they perceived as leaders in storage virtualization. In addition to the overall leader, other categories included price, performance, reliability, service and support, and innovation.
IT Brand Pulse gave the survey participants 12 vendor choices in each category: 3PAR, DataCore, Dell, EMC, FalconStor, Hitachi, HP, IBM, LSI, NetApp, Symantec and VMware.
And the winners are . . .
EMC took top honors as the perceived leader in the overall storage virtualization market, and also took the top spot in the reliability and service/support categories, while placing second in the innovation and performance categories.
NetApp took the top spots in innovation and performance and, strangely enough, placed second in all other categories.
Not surprisingly, Dell was #1 in the price category and, surprisingly, was #3 in the market leader and performance categories.
VMware was #3 in price and innovation, while IBM placed third in reliability and service/support.
The other seven vendors each got less than 10% of the vote in all of the categories.
Here are the win, place, show vendors in each category, with the percentage of end-user votes they received.
Market leader: EMC (26%), NetApp (19.9%), Dell (12.3%)
Price: Dell (27.4%), NetApp (22.6%), VMware (17.1%)
Performance: NetApp (24.7%), EMC (21.2%), Dell (11%)
Reliability: EMC (28.1%), NetApp (15.8%), IBM (12.3%)
Service and support: EMC (28.8%), NetApp (19.9%), IBM (11.6%)
Innovation: NetApp (25.3%), EMC (21.9%), VMware (11%)
Frank Berry, IT Brand Pulse’s CEO and senior analyst – and infostor.com’s newest blogger – recently blogged about storage virtualization. I like his lead:
“They say humans recognize the smell of chili, but dogs can detect the smell of different spices in the chili. Similarly, humans recognize storage virtualization, but industry experts see clear distinctions between the different types of storage virtualization.”
Frank goes on to outline the first two phases of storage virtualization, and provides a glimpse into the emerging third phase (heterogeneous storage virtualization in the cloud). And he also gives his view on which vendor was first to market with storage virtualization. Hint: It was in 1970.
Read Frank’s blog here.
Monday, December 14, 2009
Can Citrix, Microsoft put a dent in VMware? (and is storage the hammer?)
December 14, 2009 – According to a recent Fortune 1000 end-user survey conducted by TheInfoPro research firm, users are increasingly considering alternatives, or additions, to VMware.
In TheInfoPro’s survey, just over 75% of the respondents are using VMware today. However, nearly two-thirds of the companies have tested a hypervisor other than VMware, with Microsoft and Citrix cited most often, followed by Red Hat. Of those who have tested a VMware alternative, 27% plan to use the alternative, while an additional 20% say they “may” use it.
On the other hand, when VMware users were asked if they would switch to an alternative, only 2% cited firm plans and an additional 9% were considering it. Bob Gill, TheInfoPro’s managing director of server and virtualization research, concludes that “The analysis reveals that VMware users aren’t switching away from VMware, but [many] are embracing competing technologies in heterogeneous deployments.”
One possible scenario is that VMware will retain its hegemony in production environments, while vendors such as Citrix and Microsoft make inroads through deployments in development and testing scenarios.
In comparing the relative strengths and weaknesses of the various virtual server platform alternatives, one often-overlooked area is storage – not third-party storage optimized for virtual environments but, rather, the native storage tools that come from the virtualization vendors themselves.
Could storage be the hammer that enables vendors such as Citrix and Microsoft to put a dent in VMware? Well, it’s only part of the overall puzzle, but it’s an increasingly important one as server/storage administrators rapidly grow their virtual environments, winding up with virtual server “sprawl” and the associated storage challenges.
I recently read a Solution Profile (white paper) written by the Taneja Group that takes an in-depth look at Citrix’s StorageLink, which is part of Citrix Essentials. Taneja analysts contrast two approaches to storage in the context of virtual servers: those that attempt to replicate, and those that integrate with, the enterprise storage infrastructure. The former approach is referred to as ‘monolithic,’ and the latter is referred to as an ‘assimilated’ approach (characterized by Citrix StorageLink).
If you’re grappling with the storage issues in your virtualization environment, and/or considering virtual server alternatives, check out the Taneja Group’s “Multiplying Virtualization Value by the Full Power of Enterprise Storage with Citrix StorageLink.”
Tuesday, December 8, 2009
We’re looking for end-user bloggers
December 8, 2009 -- InfoStor.com is looking for end users, as well as channel professionals (integrators and VARs), to guest blog on our site. Our goal is to develop a greater sense of community among infostor.com visitors.
You can blog as frequently or infrequently as you like, with either opinion pieces or, better yet, success stories (case studies). Share your thoughts, insights and solutions with other storage professionals in a colleague community where you can gain experience through the experience of other storage specialists.
Contributing will also call attention to your, and your company’s, areas of expertise and innovation.
If you're interested in participating, contact me at daves@pennwell.com or give me a call at (818) 484-5645 to discuss.
And follow us on Twitter at InfoStorOnline
And on LinkedIn at the InfoStor Group.
-Dave
Monday, December 7, 2009
Cloud storage predictions for 2010
December 7, 2009 – As much as we might rail against the terminology itself, and despite the rampant hype around the technology, cloud storage is here to stay. And at least for a while, what seem like hundreds of cloud storage providers can survive (pending the inevitable shakeout), jockeying and jousting for position and differentiation.
As cloud storage garners the attention of both large and small IT organizations, technological advances will come rapidly next year.
I recently spoke to Sajai Krishnan, ParaScale’s CEO (and fellow skiing enthusiast) about what to look for in cloud storage in 2010. Here are his predictions and comments.
Virtualization will drive adoption of private clouds in the enterprise. Virtualization has driven huge efficiencies in organizations but the weak link today is the storage infrastructure behind virtualized servers. The need to eliminate the SAN bottleneck and automate provisioning, configuration, management and recovery across the compute and storage tiers will drive enterprises to begin to adopt cloud storage.
“The SAN performance issue is the controller,” says Krishnan. “VMware has a best practices for storage page that starts by saying you need to figure out what the I/O requirements are for each of your applications. Any storage administrator would love to have that information, but admins often don’t have a clue about the I/O requirements of their VMized applications.”
“You have controller bottlenecks. How many disks are you going to stack up behind the controller as you scale out? It’s much more manageable when you can buy cheap servers like popcorn and just keep scaling out [in a cloud architecture].”
Service providers will start to unify cloud computing and cloud storage. In 2009 service providers began expanding cloud storage offerings. The cost and management efficiencies provided a much-needed storage alternative for consumers, SMBs and enterprises. To further drive efficiencies, cost savings and management ease of use, service providers will begin executing VMs directly on cloud storage platforms. Unifying cloud compute and storage infrastructure will also provide a platform for service providers to build new value-added services.
“With large cloud storage implementations you discover that 90% of the data is cold, and you have a lot of CPUs idle,” Krishnan explains. “Some people are looking at running hypervisors and VMs on the storage nodes.”
The middle tier emerges. The strategic importance of a low-cost, self-managing tier that provides a platform for analysis and integrated applications will emerge in organizations with large stores of file data. This middle tier will provide opportunity for administrators to automate storage management and optimize for performance and cost; support large-scale analysis, while eliminating related data migration and administrative tasks; and enable “cloud bursting,” the seamless ability for service providers to offer spillover capacity and compute to enterprises.
“The middle tier will expand significantly next year,” says Krishnan. “Traditionally, you have the processing tier and the archive tier. But now you’ll need a smart middle tier that’s going to keep the 90% of data. The archive tier is ok, but it’s very inflexible and could be proprietary. The middle tier is priced like the archive tier, it’s persistent, and it’s ‘burstable.’”
Krishnan says that ‘cloud bursting’ refers to the ability to burst into the public cloud for certain periods of time, but not use the public cloud as the repository where you keep data on a permanent basis (although if you Google the term you’ll wind up with altogether different discussions and clips from The Men Who Stare at Goats film.)
Commodity hardware displaces proprietary storage. “Just as Linux displaced expensive server gear with its attractive commodity footprint, Linux-based cloud storage will displace expensive legacy storage for the same reason: it gives the user choice and it’s inexpensive, highly scalable and easy to manage,” says Krishnan.
Opex, not capex, will emerge as the most important criterion driving storage purchases. “Management and operating costs will figure much more prominently in storage decisions in 2010. Maintenance costs on existing gear will be under heavy review with the emergence of commodity-based hardware storage options. Customers will look for a storage platform that is self-managed – including the ability to monitor, configure, manage and heal itself,” according to Krishnan.
Cloud becomes an action verb. “We’ve already seen ‘cloud’ taken to new heights as an overused adjective and noun. In 2010, marketers will outdo themselves by clouding the landscape with more product names and descriptions. Admittedly, this prediction is somewhat tongue-in-cheek, but unfortunately fairly close to the mark,” says Krishnan.
(ParaScale is a cloud storage software provider that recently released the 2.0 version of its software. See “ParaScale beefs up cloud storage platform.” )
Wednesday, December 2, 2009
Flexibility is a prerequisite to avoiding storage cloud lock-in
December 2, 2009 - Note: This post is a guest blog from Jerome Wendt, president and lead analyst with DCIG Inc. and a former IT professional. DCIG provides analysis for hardware and software companies in the data storage and electronically stored information (ESI) industries.
Flexibility is a prerequisite to avoiding storage cloud lock-in
By Jerome M. Wendt
Google searches for "cloud storage" started to take off in late 2007 according to a Google trends report and have only increased since. But as organizations move from searching for and reading about cloud storage to actually selecting cloud storage solutions, they need to ensure that those solutions offers flexibility in order to avoid public storage cloud lock-in.
Cloud storage is breaking down into two distinct camps: private and public storage clouds. Private storage clouds were discussed in a previous DCIG blog in terms of the attributes that they possess and what is motivating organizations to adopt them.
Public storage clouds are a different animal. They align more closely with the common definition of "cloud storage," but are developing their own set of characteristics that should influence which public storage cloud offering an organization selects. One such feature that an organization may not initially consider in the evaluation process is the flexibility of the public storage cloud solution.
Understanding a public storage cloud's flexibility is not as intuitive as it may sound. Most organizations assume they will fall into the category of being either a public storage cloud consumer or a provider. A public storage cloud consumer obtains storage on demand from a public storage cloud provider. Conversely, a public cloud storage provider may only offer cloud-based storage services to subscribing clients.
A few organizations may fall into a third category that both subscribes to and provides a public storage cloud. This category is primarily reserved for those few enterprise organizations, such as telcos, that may have the infrastructure, IT staff and business model to support offering a public storage cloud that serves both internal and external clients.
However, a fourth category also exists that organizations may fall into sooner rather than later. For example, an organization starts out as a public cloud storage consumer but, driven by data inaccessibility, rising costs or inadequate capacity (network or storage) from its current provider, eventually decides to become a public storage cloud provider itself. At that point the organization has a very real desire to go it alone, only to find out that it is locked in because the technology its provider uses is not available for purchase.
Already this issue is surfacing among organizations that have subscribed to online backup offerings -- the forerunners of the current public storage cloud trend. The challenge that early consumers of online backup services are encountering is that as they store more data with a provider, their costs inevitably go up.
So while using an online backup service is initially more economical than doing backup in-house, as organizations store more backup data with a provider they eventually reach the point where the annual cost of the monthly service exceeds what it would cost to provide the service themselves. At that point their options are extremely limited, since they cannot purchase the online backup software. It is for this reason that some online backup providers are making their software available for purchase -- not just to service providers but to enterprises as well -- so organizations have the option to host online backup services themselves.
Organizations should expect to eventually face a similar decision when storing their data with a public storage cloud provider. While they may initially want to store data with a public storage cloud provider for ease of startup, if data availability, increased monthly storage costs, or inadequate performance becomes an issue, having the option to bring their public storage cloud in-house can start to look very appealing. However, that may only be a possibility if the underlying public cloud storage solution that the provider uses is available for purchase by the organization.
This is already happening to some degree. For example, Symantec is designing its new FileStore product so that either enterprises or service providers that wish to offer cloud storage services can procure it. This gives organizations the flexibility to first use public cloud storage from a provider that runs FileStore as the underlying technology, while retaining the option to purchase FileStore directly from Symantec and implement it in-house should they ever decide to do so. (Note: FileStore in its first release is intended for use only as a private storage cloud, though Symantec plans to offer a public storage cloud option for FileStore in the future.)
Public storage clouds provide organizations with some exciting new options for lowering their upfront storage costs while also meeting immediate storage needs. However, public storage cloud technology is not a panacea and organizations should carefully weigh their options as they seek a provider for this service. As part of the decision, organizations need to determine if the flexibility to purchase the underlying technology that powers their provider's public cloud storage offering will eventually become a requirement. Based on what we have already seen in the online backup space, having this flexibility is an absolute necessity.
Jerome Wendt is the president and lead analyst with DCIG Inc.
Tuesday, December 1, 2009
How many terabytes do you manage?
December 1, 2009 – Forrester Research analyst Andrew Reichman recently wrote a report that provides a lot of advice on how to use key performance indicators (KPIs) to measure the efficiency of your storage operations/personnel, noting that building an effective storage environment is a balancing act between performance, reliability and efficiency.
Cruise to How Efficient is Your Storage Environment to read the full report ($499), which was co-authored by Forrester analysts Stephanie Balaouras and Margaret Ryan, but one area of the report that jumped out at me was the section on the TB/FTE metric. (FTE stands for either full-time employee or full-time equivalent.)
Basically, TB/FTE is a measure of the quantity of primary storage per staff member. To get the number, you divide the total number of raw TBs by the total number of permanent, contract, and professional-services employees engaged in managing storage.
Forrester’s Reichman notes that “a few years ago the accepted best practice benchmark for TB/FTE hovered around 35 to 40, but since then most storage environments have grown significantly without adding staff.” Of course, that’s due partly to the IT mandate, or mantra, “to do more with less.” According to Reichman, the best practice benchmark for TB/FTE today is approximately 100.
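If you want to run the numbers for your own shop, the calculation is trivial. Here’s a minimal sketch in Python; the capacity and staffing figures are hypothetical, and the 100 TB/FTE benchmark is simply the figure Reichman cites:

```python
# TB/FTE: raw terabytes of primary storage divided by the headcount
# (permanent, contract, and professional-services staff) managing it.
# The capacity and staffing figures below are hypothetical.

BEST_PRACTICE_TB_PER_FTE = 100       # Forrester's current benchmark, per Reichman

def tb_per_fte(raw_capacity_tb: float, storage_ftes: float) -> float:
    """Terabytes of raw primary storage managed per full-time equivalent."""
    return raw_capacity_tb / storage_ftes

shop_capacity_tb = 450.0             # raw TB under management (hypothetical)
shop_ftes = 6.0                      # staff managing that storage (hypothetical)

ratio = tb_per_fte(shop_capacity_tb, shop_ftes)    # 75 TB/FTE
print(f"TB/FTE: {ratio:.0f} (best practice benchmark: {BEST_PRACTICE_TB_PER_FTE})")
```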
Assuming that you calculate metrics such as TB/FTE (and I realize that most firms don’t), drop me an email at daves@pennwell.com and let me know what the TB/FTE metric is at your shop. Of course, I’ll keep the responses anonymous, but I’d like to get a feel for how many terabytes are actually under management per employee for a future post or article.
Tuesday, November 24, 2009
Brocade gives thanks
November 24, 2009 – Brocade mildly surprised financial analysts this week when it reported results for its fourth fiscal quarter and full fiscal year that were, for the most part, good news. Highlights: Quarterly revenues increased 31% year-over-year to $521.8 million, and annual revenues increased 33% year-over-year to more than $1.95 billion.
CEO Mike Klayko noted that Brocade had exceeded the Street’s consensus non-GAAP EPS estimates for the seventeenth consecutive quarter.
In the fourth quarter, Brocade shipped about one million SAN ports.
Comparing Q4 2009 to Q4 2008, OEM revenues were down from 88% to 65% of total revenues, while channel/direct revenues jumped from 12% to 35% of total revenues. On a related note: storage-specific revenues, as a percent of total revenues, fell from 84% to 58%, while revenues from Ethernet products accounted for 25% of the total in Q4 2009 compared to 0% in the pre-Foundry-acquisition Q4 2008.
Klayko had plenty to say, but his comment about a possible acquisition was what made the news. “We’re not actively shopping ourselves. That’s false.” I suppose that depends on what “actively” means, but the financial community seems to think that an acquisition isn’t about to happen now that HP opted for 3Com, which squelched speculation that HP would scoop up Brocade. Since the HP-3Com announcement, Brocade shares have been down more than 10%. They closed at $7.10 today. The 52-week range was $2.05 to $9.84.
I’m sure stockholders were hoping for (betting on) an acquisition, but Brocade seems to be in great shape on its own. Cisco isn’t giving them much of a run for the money in the FC switch space; Brocade is gaining a toehold in the FC HBA space; it’s gaining share in the Ethernet market; and the company will be well-positioned in the FCoE/CEE field by the time that battle hits full stride. In other words, when end users actually start deploying FCoE.
For more info from Brocade:
Slides
Prepared Remarks
Summary Sheet
Wednesday, November 18, 2009
SSDs: STEC on a roll
November 18, 2009 – I don’t invest in storage-specific vendors’ stocks, but if I did I’d be taking a close look at the solid-state disk (SSD) drive arena; in particular, a vendor that seems to be the early leader in this space, at least as measured in terms of OEM design wins: STEC.
This company is on a roll, having racked up design wins with most of the major disk array vendors, including Compellent, EMC, Fujitsu, HP, Hitachi Data Systems, IBM, LSI/Engenio, Sun and others.
The company hosted an Analyst Day earlier this week in NYC, where it announced/previewed a number of new products, including a third-generation ZeusIOPS drive with multi-level cell (MLC) technology (in qualification now), a SATA version of its Mach SSD (2Q10), and Fibre Channel and SAS SSDs based on ASICs. (The Fibre Channel versions of the Zeus SSDs were previously based on field-programmable gate arrays, or FPGAs).
At the Analyst Day conference, STEC officials identified the company’s primary competitors in the enterprise SSD space as Hitachi/Intel (mid-2010), Pliant (2010), Toshiba/Fujitsu (2011/12), Fusion-io (on servers) and, eventually, disk drive manufacturers such as Seagate and Western Digital.
“With six- to nine-month+ qualification cycles, the top slots at OEMs will be filled early, making it a horse race to see who can be second to STEC at most OEMs,” wrote Needham & Co. analyst Richard Kugele in a research note on STEC’s Analyst Day presentations. “ . . .we (and STEC) expect second sourcing to be at the company level rather than the product level, with specific SKUs devoted to certain suppliers due to the lack of [SSD] standards and interoperability.”
Not surprisingly, financial analysts have extremely optimistic stock price expectations for STEC.
For more information on solid-state disk drives, visit InfoStor’s SSD Topic Center.
On a somewhat related note, Adaptec and Intel will be hosting a webinar tomorrow (Thursday, Nov. 19) at 2 p.m. ET “to discuss how data center managers, system integrators, and storage and server OEMs can seamlessly integrate SSDs to reduce IT costs, reduce power consumption and minimize maintenance costs and physical space requirements, all with maximum I/O performance.”
You can register for the webinar here.
Monday, November 16, 2009
Emulex, QLogic jockey for position
November 16, 2009 – The Dell’Oro Group market research firm recently released its Q3 2009 SAN Report, which includes market share figures for SAN product categories such as Fibre Channel HBAs and converged network adapters (CNAs) based on the Fibre Channel over Ethernet (FCoE) standard.
As they always do, Emulex and QLogic almost immediately released press releases claiming market share gains and/or leadership. Nice piece of PR one-upmanship. In each case, the companies just emphasize the slice of those markets that they happen to have good news about.
For example, Emulex reported that it is the leader in FCoE CNAs, according to the Dell’Oro report, with revenue growth of 160% over the previous quarter. The company also highlighted the fact that it gained 4% market share in the 8Gbps Fibre Channel HBA market.
QLogic reported that it’s the leader in the overall (4Gbps and 8Gbps) Fibre Channel HBA market, with a 54.4% share in Q3. The company boosted its lead over Emulex by 1.9 percentage points, for a lead of 17.9 points. In addition, according to the press release:
“On the 8Gb Fibre Channel adapter front, QLogic increased its revenue share by 3.6 percentage points compared to the previous quarter, increasing its revenue share lead to 56.6 percent for Q3. In the category of 8Gb Fibre Channel mezzanine cards, QLogic continued to dominate with 78 percent of revenue share for Q3 compared to the nearest competitor's 22 percent share. In the broader category of all mezzanine adapters (4Gb and 8Gb), QLogic increased revenue share by 1.6 percentage points to 72.4 percent and held a 44.8 percentage point lead for the quarter.”
On the FCoE CNA front, these two rivals had better ramp revenues, shipments and market share, because in the first half of next year NIC giants Broadcom and Intel will put both feet in the FCoE market and this will no longer be a Storage Vendors Only skirmish.
An Industry Brief released by IT Brand Pulse notes that CNA port volume shipments increased 133% in Q3, to a total of 9,800 ports, and revenue increased 148% to about $4 million.
In a blog post on FCoE CNAs, IT Brand Pulse CEO and senior analyst Frank Berry says that, “We could see a quantum increase in CNA volume very quickly if Broadcom and Intel position their 10Gb CNAs to displace some or all of the 10Gb NICs they have been shipping for years and that are now ramping at over 50% per quarter.”
For more of Frank’s opinions on the CNA market, see “Radar Picks Up CNA Growth In Q3.”
Tuesday, November 10, 2009
Who are the leaders in InfiniBand?
November 10, 2009 – With the Supercomputing 09 conference kicking off next week, the spotlight will be on InfiniBand, which has become the interconnect of choice for high performance computing (HPC) clusters. Despite InfiniBand’s success in the HPC market, there are still relatively few vendors, and only three are perceived as leaders, according to a recent survey conducted by IT Brand Pulse.
According to the survey, Mellanox is (by a long shot) perceived as the overall leader in InfiniBand networking and InfiniBand adapters, and Voltaire is perceived as the leader in InfiniBand switches. QLogic was also cited in each of the three categories.
Specifically, Mellanox was cited as the leader in InfiniBand networking by 82.4% of the survey respondents, followed by Voltaire (11.8%) and QLogic (5.9%).
More than 91% said that Mellanox was the leader in InfiniBand adapters, followed by QLogic at 8%.
And 66.4% of the HPC users cited Voltaire as the market leader in InfiniBand switches, followed by QLogic at 27.7% and Mellanox at 5.9%.
IT Brand Pulse surveys the end-user community on an ongoing basis, and provides reports based on the survey results. Highlights of the survey results are posted on infostor.com.
IT Brand Pulse is currently surveying end users regarding their perceptions of vendors in the data deduplication, iSCSI storage and adapters, and tape library markets. The survey takes less than 5 minutes and participants have a chance to win an Apple iPhone 3GS from IT Brand Pulse. If you want to participate in these surveys (and I highly encourage you to do so), click here.
Monday, October 26, 2009
VARs rank vendors
October 26, 2009 – In my last post I recapped the results of a channel survey conducted by Robert W. Baird & Co., concluding that the majority of resellers are optimistic about their revenue prospects in the fourth quarter. In fact, it was the most optimistic response since Baird’s Q2 2008 channel survey, indicating that a turnaround in IT spending is underway (see “VARs upbeat about Q4” ).
Baird analysts also surveyed resellers about the performance of vendors in a variety of areas. When asked which vendors were above plan, on plan or below plan, VMware topped the list, followed by NetApp, Cisco, Symantec and Hitachi. Among major vendors, IBM was ranked lowest, with many more VARs reporting that sales of IBM gear were below plan than above plan or on plan.
In terms of “channel friendliness,” LeftHand and NetApp led the pack, with CommVault, HP and Hitachi rounding out the top five. At the bottom of the list (although still posting scores of better than 5 out of a possible 10) were Isilon, IBM, EMC, and EqualLogic.
When asked which vendors’ products they would sell more of in 2010, VARs cited VMware, EMC, Cisco, NetApp and HP most often, with IBM in last place. (It was EMC’s strongest showing in this category since Baird’s Q3 2008 channel survey.)
When asked how they expected sales of various storage vendors’ products to trend over the next 3 to 6 months, Data Domain topped the list, followed by NetApp, EMC, LeftHand, CommVault and Hitachi. In negative territory (more VARs saying sales were declining vs. improving) were HP, Dell, IBM and Sun.
Baird analysts also asked VARs to rank vendors by specific technology. Here are the leaders in various categories, in decreasing order:
Fibre Channel SANs: EMC, Hitachi, HP, IBM, Compellent, NetApp
NAS: NetApp (by a long shot), EMC, HP, IBM
iSCSI SANs: NetApp, EqualLogic, LeftHand, HP, EMC, IBM
In one final note on the emerging technology front: Baird analysts queried VARs about end-user interest in FCoE, and 74% of the respondents indicated that there is “some interest, mostly evaluating”; 15% said that demand was “ramping moderately”; and 2% said that there was “strong demand” for FCoE.
Wednesday, October 21, 2009
VARs upbeat about Q4
October 21, 2009 – Indicating that a turnaround in IT spending is imminent, or underway, a recent survey of the channel conducted by Robert W. Baird & Co. shows that VARs are upbeat about fourth quarter prospects. In fact, it’s the most upbeat response that Baird has seen since its Q2 2008 survey.
The results are based on a survey of 47 enterprise resellers with total annual revenue of $8.2 billion and average revenue of $198 million per year.
In terms of Q4 expectations vs. Q3 reality, 32% of the resellers were positive (expecting increased revenues), 54% were neutral, and only 12% were negative regarding Q4 performance.
Large VARs (more than $100 million in revenues) were much more positive than VARs with less than $100 million in revenues. For example, 47% of the large VARs expected increased revenues in the fourth quarter, 32% were neutral, and only 9% were negative. (The rest said it was too early to tell.)
For perspective, in the third quarter 46% of the VARs were below plan, 39% were on plan, and only 15% were above plan.
Baird also surveyed resellers about which storage technologies were in highest demand. Topping the list were cost-saving technologies such as data deduplication and thin provisioning. VARs cited Data Domain as the leader in data deduplication, and 3PAR as the leader in thin provisioning.
In my next blog, I’ll take a closer look at the channel’s views on specific vendors. In other words, the apparent winners and losers going into 2010.
Friday, October 16, 2009
Check out the Channel Chargers
October 16, 2009 – I spent most of this week at the Storage Networking World (SNW) show in Phoenix. For the past couple years I’ve been advocating that SNW management consider (a) collapsing the show into one event per year and (b) getting more storage channel professionals (VARs and integrators) to attend and participate in the show.
I’ve changed my mind on (a). Assuming reports of a turnaround in IT spending have not been greatly exaggerated, I think the storage industry needs, and can sustain, two SNWs per year. Although attendance, the number of exhibitors, and product introductions were way down compared to the show’s heyday, it’s still a vibrant forum for the exchange of information between vendors, users and the channel.
And there did seem to be more channel presence at this week’s show.
In addition to meeting with a boatload of vendors, industry analysts and some end users, I had an interesting dinner with a startup that doesn’t fit into any of those categories: the Channel Chargers. Since they’re based in San Diego, their company name is even cooler than it seems.
Basically, Channel Chargers provides an active link between vendors and resellers/integrators, developing channel strategies and connecting vendors with VARs. Other outfits do this, but the Channel Chargers management team has deeper channel experience, with a strong focus on storage.
If you’re a vendor (particularly a startup) that needs to establish or expand relationships in the channel, or a VAR looking for some interesting new products to sell, check out the Channel Chargers.
Back to SNW: There weren’t many products introduced at the show, but for a quick rundown of announcements see “Product highlights from SNW.”
As for the Most Interesting Product, I’d give that award to WhipTail Technologies, a startup specializing in solid-state disk (SSD) appliances.
WhipTail introduced the Racerunner version of its SSD appliances, which include Exar’s Hifn BitWackr data deduplication and compression cards and software. The configuration provides an inline (as opposed to post-processing) capacity optimization appliance for primary storage. The SSD appliances are available in 1.5TB, 3TB and 6TB versions.
James Candelaria, WhipTail’s CTO, claims a minimum 10:1 deduplication ratio in virtual server environments, and on average 2:1 to 4:1 optimization ratios in non-virtual server environments (although deduplication ratios vary widely depending on data types). Factoring in the overhead associated with compression and deduplication, Candelaria cites performance of 60,000 I/Os per second (IOPS) on reads and 25,000 to 30,000 IOPS on writes, although the company has not completed its testing of the Racerunner appliances with the BitWackr cards. The deduplication/compression cards are offered at no extra charge.
Bottom line: When you combine compression and deduplication with the falling prices of SSDs, you approach price parity between SSDs and HDDs.
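To see why, here’s a minimal sketch of the arithmetic, assuming illustrative per-GB prices and reduction ratios (these are my placeholder numbers, not WhipTail’s or anyone else’s published pricing):

def effective_cost_per_gb(raw_cost_per_gb, reduction_ratio):
    """Effective $/GB once compression/deduplication shrinks what you actually store."""
    return raw_cost_per_gb / reduction_ratio

ssd_raw = 10.00   # hypothetical SSD price, $/GB
hdd_raw = 1.00    # hypothetical enterprise HDD price, $/GB

print(effective_cost_per_gb(ssd_raw, 4))    # 2.5  -- SSD at a 4:1 reduction ratio
print(effective_cost_per_gb(hdd_raw, 1))    # 1.0  -- HDD with no data reduction
print(effective_cost_per_gb(ssd_raw, 10))   # 1.0  -- SSD at the claimed 10:1 virtual-server ratio

Change the assumptions and the crossover point moves, but the principle holds: every point of data reduction effectively divides the SSD’s $/GB.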
WhipTail also launched its VIPER channel partner program.
Overall, at SNW I sensed a level of optimism that hasn’t been present at the show for the last two years or so, both in terms of the general storage business and the show itself. I hope they can maintain the 2x/year format.
Monday, October 12, 2009
Brocade on the block
October 12, 2009 – The rumors are swirling that Brocade has quietly put itself up for sale, and the buzz will probably pick up at the Storage Networking World show in Phoenix this week. It all started with an article in the Wall Street Journal, “Network Specialist Brocade Up for Sale."
I don’t think Brocade will be acquired, at least not in the next year. The company’s in great shape from a financial and market-share perspective, and its value will continue to grow. Why sell now?
When these acquisition rumors start flying, I like to check out what the financial analysts have to say. And two of my favorites are Paul Mansky at Canaccord Adams and Glenn Hanus (and colleagues) at Needham & Company. Both analysts focus on the storage industry.
Here’s a clip from Paul Mansky’s notes on the subject:
“We agree Brocade boasts strong strategic positioning . . . but we take issue with the prevailing argument the company is a near-term acquisition target. The independent OEM-centric business model is central to the company's ability to maintain its storage footprint during the pending protocol transitions, while the ‘anybody but Cisco’ story in Ethernet switching has not even begun to ramp. Ownership by a single OEM would hasten the demise of Fibre Channel (60% of revs) while limiting the Ethernet switch TAM expansion story. In our view, a sale today is like punting on second down.”
Or tossing a Hail Mary on first possession, first down.
More Mansky: “To the degree that there is an appetite to flesh out or build Ethernet switch product lines or gain access to closely held Fibre Channel technology, we view there to be several less expensive candidates. In Ethernet switching, this would include 3Com, Extreme and Force10. In Fibre Channel, we view QLogic as an eager seller of its switching product line -- possibly for as little as $300-350 million.”
I agree: There are much cheaper ways to get what Brocade has to offer, even if it means acquiring two companies.
Speculation centers on HP and Oracle as the most likely acquirers, but Needham’s Hanus puts a number of other vendors in the potential mix, including an “A” list of HP, IBM and Juniper, with Oracle and Dell as remote possibilities.
Commenting on the most likely suitor, Hanus (with colleagues Rich Kugele and Greg Mesniaeff) writes:
“Historically, HP has had a strong relationship with BRCD in the fibre channel arena. It is speculated that HP will eventually at least resell/OEM BRCD's Foundry related gear as HP defends itself against Cisco getting into the server space and resells less Cisco IP networking gear. HP has its Procurve switch line as well. If BRCD were to get acquired by IBM, it would be a big problem for HP as BRCD has 70% or so of the fibre channel switch market. Cisco is the distant #2 player with Qlogic #3. Obviously, HP does not want to source FC switching from IBM or Cisco. If HP owned it, a challenge could be some reduction in BRCD revenue base as HP competitors IBM/Dell (current large BRCD partners) would try and move away. Also HP does not have a reputation for paying up for M&A targets. Overall though, we see HP as one of the leading contenders if BRCD decides to sell.”
I’ll be at the Storage Networking World conference for the next couple days, and if anything interesting happens – regarding Brocade or otherwise – I’ll let you know.
Monday, October 5, 2009
iSCSI + NAS for virtual servers?
October 8, 2009 – When end users first started the mass migration to virtual servers, the majority did so with direct-attached storage (DAS) as the underlying storage architecture. They quickly found out that virtualization required networked storage, and the incumbent SAN protocol – Fibre Channel – became the front runner in the race toward dominance in server virtualization storage architectures.
Fibre Channel SANs are still the dominant storage architectures in virtual operating environments, but iSCSI – and also NAS – are coming on strong.
In an infostor.com QuickVote poll (conducted last year), we asked virtual server users what their primary storage configuration was. More than half (53%) cited Fibre Channel SANs, 28% were relying primarily on iSCSI, 10% on NAS, and only 9% on DAS. Since then, I’m sure the percentages for iSCSI and NAS have shifted upward.
That survey didn’t ask whether users were combining different storage architectures to support their burgeoning virtual server implementations. However, combining SANs and NAS appears to be a strong trend.
In an end-user survey, Enterprise Strategy Group (ESG) posed the following question:
Does your organization have plans to consolidate NAS and SAN storage resources into a unified storage architecture that supports both file-based NAS and block-based SAN storage?
With the caveat that the question was not posed in a virtual server context, it’s interesting to note that two-thirds of the respondents (67%) planned to pursue a “unified storage” (SAN+NAS) strategy. In fact, 18% were already underway – and the ESG survey was conducted late last year.
Among those that are moving to unified storage architectures, 41% plan to rely primarily on NAS gateways to front-end SANs; 18% plan to rely primarily on unified NAS-SAN storage systems; and 40% plan to use both approaches.
If you’re among the growing group of users that are deploying iSCSI + NAS architectures – particularly if you have virtual servers – check out our Webcast, “Unified Storage for Virtual Servers: iSCSI SANs and NAS.” If you missed the "live" version earlier this week, you can view the archived version here.
ESG analyst Terri McClure presents interesting data from ESG’s end-user surveys and discusses trends in the context of unified storage and virtual servers. And John McArthur, president and co-founder of Walden Technology Partners, discusses solutions that enable you to combine iSCSI SANs and NAS to optimize efficiencies in virtual server environments – at a low cost and with minimal management overhead.
Choosing the right storage architecture for your virtual server environment is not an either-or decision.
Tuesday, September 29, 2009
How to manage 2PB+
September 30, 2009 – I dialed in to a wikibon.org Peer Incite teleconference yesterday. I like these things because they typically feature end users, often representing very large IT facilities.
In yesterday’s meeting, that would be the California Institute of Technology (Caltech), which, among many other things, is the academic home of NASA’s Jet Propulsion Laboratory (JPL).
One facility at Caltech hosts 2.3PB to 2.5PB of data, according to Eugean Hacopians, a senior systems engineer at Caltech and the speaker on the wikibon.org Peer Incite gathering. Since the facility’s files are very small, that 2.5PB translates into about two trillion files, according to Hacopians. Translated another way, the facility has about five million files per TB of storage capacity. (In its Infrared Processing and Analysis Center, or IPAC, the applications are primarily astronomy imaging related to space projects.)
You can listen to Hacopians’ hour-long chat here, but a few things jumped out at me while I was listening.
Instead of using a traditional SAN, Hacopians uses what he refers to as building blocks. In the file-serving area (as opposed to its compute servers and database servers), a typical building block consists of a Sun server (with two 4Gbps Fibre Channel HBAs and one 4Gbps Fibre Channel switch from QLogic) attached to SATA-based disk subsystems. About 99% of the capacity is on SATABeast disk arrays from Nexsan, and up to three SATABeast arrays can be attached to each file server.
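As a rough way to picture the scale of that building-block approach, here’s a quick sizing sketch; the per-array capacity is a hypothetical placeholder, since the talk didn’t spell out the exact SATABeast configurations:

import math

TOTAL_CAPACITY_TB = 2500   # roughly 2.5PB at the Caltech facility
ARRAYS_PER_SERVER = 3      # up to three SATABeast arrays per file server
TB_PER_ARRAY = 40          # hypothetical usable TB per array -- a placeholder, not a spec

arrays_needed = math.ceil(TOTAL_CAPACITY_TB / TB_PER_ARRAY)
servers_needed = math.ceil(arrays_needed / ARRAYS_PER_SERVER)

print(f"~{arrays_needed} arrays across ~{servers_needed} building blocks")
# => ~63 arrays across ~21 file-server building blocks under these assumptions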
“A large, shared SAN would have created more hassles and headaches than the building block approach,” says Hacopians. “A SAN would have introduced 3x to 5x more complexity.” That’s in part because the Caltech facility has a lot of different projects (with 10 to 14 projects going on simultaneously), which poses problems from a charge-back and accounting standpoint, according to Hacopians.
To cut energy costs (which are high when you have more than 2,500 spinning disks) Hacopians’ Caltech facility takes advantage of Nexsan’s autoMAID (massive array of idle disks) technology, which offers three levels of disk spin-down modes. (Caltech uses two of the three modes to maximize the performance/savings tradeoffs.)
Although some IT sites are leery of disk spin-down technology, Hacopians says that Caltech has not had any negative issues with Nexsan’s autoMAID technology.
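For a sense of what spin-down can be worth, here’s a ballpark sketch; every input (per-drive wattage, idle time, electricity rate) is an assumption on my part, not a figure from Caltech or Nexsan:

DRIVES = 2500            # spinning disks at the facility
ACTIVE_WATTS = 12.0      # assumed draw per active SATA drive
IDLE_WATTS = 5.0         # assumed draw in a spun-down/idle MAID mode
IDLE_FRACTION = 0.5      # assume drives spend half their time idled
PRICE_PER_KWH = 0.12     # assumed electricity rate, $/kWh
HOURS_PER_YEAR = 24 * 365

avg_watts = (1 - IDLE_FRACTION) * ACTIVE_WATTS + IDLE_FRACTION * IDLE_WATTS
saved_kwh = DRIVES * (ACTIVE_WATTS - avg_watts) / 1000 * HOURS_PER_YEAR
print(f"~${saved_kwh * PRICE_PER_KWH:,.0f} per year saved, before cooling savings")
# => roughly $9,200/year under these assumptions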
So how many storage pros does it take to manage almost 2.5PB of storage capacity in a building block architecture? Until about two years ago, Hacopians managed about 1.5PB on his own. Today, he has help from two other people, each spending about one-fourth of their time on storage management.
Wednesday, September 23, 2009
What is Dedupe 2.0?
September 28, 2009 – Dedupe 2.0 is a term used primarily by Permabit, and it involves extending the benefits of data deduplication across the entire storage environment – not just the traditional application of deduplication on secondary (backup/archive) devices but on primary storage devices as well.
And, yes, it has applicability to cloud storage, whether it’s internal/private or external/hosted. (Then again, what doesn’t have applicability to cloud storage these days?)
If reducing your overall storage costs by 10x or 20x via across-the-board deduplication and capacity optimization sounds appealing, you might want to view our recent Webcast, “Dedupe 2.0: The Benefits of Optimizing Primary and Secondary Storage.” It originally aired last week, but is accessible here.
Jeff Boles, a senior analyst with the Taneja Group research and consulting firm, addresses the IT challenges and issues and Mike Ivanov, vice president of marketing at Permabit, explores solutions.
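If you’re wondering where a 10x or 20x figure could come from, here’s a small sketch of how tier-by-tier reduction rolls up; the capacity split and ratios are illustrative assumptions, not Permabit’s numbers:

tiers = {
    # tier: (raw TB, assumed data-reduction ratio)
    "primary":   (100, 4),    # e.g., 4:1 on primary storage
    "secondary": (400, 20),   # e.g., 20:1 on backup/archive data
}

raw_total = sum(tb for tb, _ in tiers.values())
stored_total = sum(tb / ratio for tb, ratio in tiers.values())

print(f"{raw_total} TB raw -> {stored_total:.0f} TB stored "
      f"({raw_total / stored_total:.1f}:1 overall)")
# => 500 TB raw -> 45 TB stored (11.1:1 overall) under these assumptions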
Monday, September 21, 2009
Will network and storage teams merge?
September 21, 2009 – Server virtualization brings up an old question: Within IT, should the network and storage groups combine? The question is valid because virtualization has created an unprecedented inter-dependency within IT teams – and not only between storage and networking, but also the server and applications teams.
The question isn’t new. It first arose when NAS hit the scene, and came up again when iSCSI gained momentum. But for the most part, NAS and iSCSI are still the domain of the storage professionals.
The question is moot at most SMBs, because the same person or small team is responsible for virtually all IT disciplines. But at larger enterprises, merging IT groups is a stickier issue.
I don’t think virtualization alone will lead to a merger of network and storage groups, and a recent survey of Fortune 1000 firms conducted by TheInfoPro (TIP) seems to, partially, support that opinion.
According to the TIP survey, although more than half (54%) of the respondents said that server virtualization has had a ‘significant’ or ‘major’ impact on addressing storage needs, 78% of the respondents said they do not expect storage and networking teams to combine.
In addition, most (77%) of the respondents said they do not have a separate virtualization group, and 60% said their organization sees major operational benefit in having a separate data management (storage) group.
Still, that seems to leave a lot of IT pros on the fence regarding the merger of network and storage groups.
But if NAS, iSCSI and virtualization don’t drive the various IT groups together, one emerging technology could be the tipping point: Fibre Channel over Ethernet (FCoE) and so-called 'converged networks.'
FCoE, together with the other storage/networking technologies, could cause interesting internal IT battles. And if it doesn’t lead to an actual merger of IT teams, it will most certainly lead to the need for an unprecedented level of cooperation between them.
For another take on TheInfoPro’s study, see Kevin Komiega’s blog post, “Come together? Not now . . .in IT”
Wednesday, September 16, 2009
Cloud computing and the US Open
September 16, 2009 – I logged plenty of TV hours watching U.S. Open tennis (not to mention NFL season openers) over the last few weeks, all in an attempt to gain some respite from the IT world. However, I was subjected to IBM’s cloud computing ad what seemed like dozens of times during the Open. (Big Blue seems to be sparing football fans from the same torture.)
A guy comes on the screen and asks, “What is cloud computing?,” piquing my interest, albeit to a very minor degree.
So, with a huge international TV audience at its disposal, IBM’s response is: “A cloud is a workload-optimized service management platform enabling new consumption and delivery models.”
Just as I’m about to say “What?” a young girl comes on the screen and says, “It’s what?”
The remainder of the ad doesn’t de-obfuscate cloud computing any further. (In fact, it makes it cloudier.)
To the degree that tennis fans were in the mood for a good definition of cloud computing, they’re just as confused about it now as they were with the Serena Williams foot-fault call and ensuing diatribe.
I have no idea what the TV networks get for 30-second spots during the U.S. Open, but that didn’t seem to be money well spent.
But it could have been worse: I could have been subjected to TV spots about cloud storage. (Yeah, right, like cloud storage providers have that kind of cash sitting around.)
In any case, if you are interested in cloud storage, check out the recently-posted “Cloud storage opportunities and challenges” by Margalla Communications analyst Saqib Jang. Saqib delves into end-user and cloud-provider requirements in the areas of scalability, privacy, data protection, manageability and security.
You can also find some fresh videos, blogs and articles on cloud computing and cloud storage at Glasshouse Technologies’ Web site.
Friday, September 4, 2009
Subscribe to our new newsletter
September 10, 2009 – InfoStor recently expanded the content in our weekly e-newsletters by about 75%. In addition to the top five news stories, we added blog posts from yours truly and senior editor Kevin Komiega, in-depth feature articles from our staff and independent consultants and analysts, links to Topic Centers focused on specific technologies, and links to recommended white papers and Webcasts.
To subscribe to the newsletter, which is delivered weekly to your email inbox, click here.
If you’re not a subscriber you missed out on the following content in the most recent newsletter:
Latest News
Dell taps Brocade for FCoE, CEE gear
Brocade adds VM visibility to SAN management software
FalconStor extends virtual appliance line for VMware
Storage highlights from VMworld
Latest Blogs
Dave's Blog:
VMware and Cisco and EMC, oh my
Rumors swirled at VMworld regarding the imminent announcement of a deep, formal partnership between VMware, Cisco and EMC (aka VCE).
Kevin's Blog:
EMC scoops up FastScale Technology, Kazeon
The EMC acquisition machine has been busy. . .
Featured Articles
SAN management for virtual servers
Sorting out SSD strengths and weaknesses
Why you need virtual infrastructure optimization
Consider data reduction for primary storage
Market prepped for 6Gbps SAS
Featured Topic Center: Virtualization
For in-depth features and breaking news on virtualization from a storage perspective, visit InfoStor's Virtualization Topic Center.
White Papers and Webcasts
D2D2T Backup Architectures and the Impact of Data Deduplication
Backup and Recovery: The Benefits of Multiple Deduplication Policies
Optimizing Performance and Maximizing Investment in Tape Storage Systems
Dedupe 2.0: The Benefits of Optimizing Primary and Secondary Storage
Tuesday, September 1, 2009
VMware and Cisco and EMC, oh my
September 1, 2009 – There’s a rumor swirling at this week’s VMworld show regarding the imminent announcement of a deep, formal partnership between VMware, Cisco and EMC (aka VCE).
The troika already has a loose relationship, but scuttlebutt suggests a much more formidable triumvirate. The announcement might come at VMworld this week, although next week may be more likely.
Paul Mansky, a principal in equity research, data center infrastructure, at Canaccord Adams, issued a note this week that hints at what the three-way relationship might entail.
Citing industry sources, Mansky says the deal will revolve around Cisco’s Unified Computing System (UCS) platform and, of course, EMC storage systems (with an emphasis on Atmos?) and VMware in bundled configurations that address cloud computing (internal or external) environments. The venture is also expected to include joint testing and marketing, as well as a road show in September or October.
Interestingly, sources also say that EMC plans to compensate both EMC and Cisco field reps for 100% of the value of the combined solution sale.
If anybody doubted whether Cisco’s UCS would gain traction, this deal would put those notions to rest.
Tuesday, August 25, 2009
IT spending 2009: A glass half . . .
August 25, 2009 -- According to recently released mid-year research from Computer Economics, 45% of IT organizations are expected to increase IT spending in 2009 vs. 2008. (A figure below 50% is typical in a recession.) And 30% are cutting IT spending. The remaining organizations expect IT spending to be about the same as last year.
Silver lining: The 45% of organizations increasing IT spending is well above the 36% recorded in 2002, after the 2001 recession. That’s because the 2001 bust was led by the technology sector.
In terms of spending vs. budget: 49% of IT execs expect to spend less than what is allocated in their budget; 9% expect to spend more, and the remaining 42% expect to be in line with budget.
On a completely unrelated note . . .
VMworld kicks off next week at The Moscone Center in San Francisco, and there will be no signs of a slump in IT spending at this show. Although the focus is obviously on virtual servers, there will be a lot of emphasis on storage issues in the sessions, and the list of storage vendors exhibiting at the show is huge – maybe more than will be exhibiting at Storage Networking World in October.
We expect a lot of storage-related product and business announcements at VMworld, and we’ll start covering them on Monday. Check the Industry News and Analysis section on infostor.com for daily postings.
Friday, August 14, 2009
Emulex vs. QLogic: Who's on first?
August 14, 2009 – The Dell’Oro Group market research firm recently released market figures for SANs, including Fibre Channel HBAs. Emulex and QLogic immediately put out positive press releases, each claiming market share gains.
Emulex’s release
QLogic’s release
How can these arch rivals both have gained market share? Through the miracle of spin. It depends on how you slice and dice the numbers. For the record, here are the results direct from the Dell’Oro Group:
Manufacturer revenue share (%): SAN HBAs, worldwide Fibre Channel total

            2Q08     1Q09     2Q09
Emulex      37.6%    37.2%    38.2%
QLogic      53.7%    56.2%    54.2%
(The “rest” of the HBA market includes vendors such as Apple, Atto, Brocade and LSI.)
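Out of curiosity, here’s what that “rest” of the market works out to, computed straight from the Dell’Oro figures above:

shares = {  # manufacturer revenue share (%), from the table above
    "2Q08": {"Emulex": 37.6, "QLogic": 53.7},
    "1Q09": {"Emulex": 37.2, "QLogic": 56.2},
    "2Q09": {"Emulex": 38.2, "QLogic": 54.2},
}

for quarter, vendors in shares.items():
    rest = 100.0 - sum(vendors.values())
    print(f"{quarter}: everyone else = {rest:.1f}%")
# => 8.7%, 6.6% and 7.6%, respectively -- a mid-to-high single-digit slice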
I’m betting that this one-upmanship spin will get way out of control when research firms start tracking the market for FCoE-based converged network adapters (CNAs).
Monday, August 10, 2009
Moving beyond SNW (guest blog)
August 10, 2009 – For a couple of years now the industry has been debating how Storage Networking World (SNW) can regain some of its former glory and relevance. In part because physical trade shows have been dwindling in the face of the macroeconomic climate and the slump in IT spending, exhibitors have been questioning the ROI of shows such as SNW. And they haven’t been pleased with the quantity and quality of end-user attendees either.
For some time I’ve been advocating one change for SNW: Actively pursue involvement from the channel (e.g., distributors, VARs and integrators). The design of the show, with the combination of fairly technical sessions and vendor exhibits, is geared more to the needs of the channel than it is for end users anyway. And most storage hardware/software goes through the channel.
I recently chatted with Mike Alvarado, an independent IT consultant and a former member of the Storage Networking Industry Association (SNIA) Board of Directors. (SNIA is a co-organizer, with Computerworld, of SNW.) Mike shares my opinion that SNW should reinvent itself with a focus on the channel.
Here’s Mike’s take:
Moving Beyond SNW
Storage Networking World (SNW) developed in a time that desperately called for better vendor cooperation to reduce inhibitors to adopting network storage.
It is time to declare victory and move on to the next important task.
What is behind my conviction? First, the storage networking channel is very large and growing; it deserves its own dedicated venue.
Second, the emphasis at SNW on end users has detracted from giving solution implementers the dedicated focus and support they need to succeed. Vendors may do this for their own channel partners, but SNIA is still the only forum for cross-vendor and cross-reseller/integrator interaction, collaboration and validation.
Third, end users would be more confident in storage networking technology with even more visible cooperation and coordination between these two parties (vendors and the channel). End users may still choose to attend the successor show but the focus should always remain on the work between resellers/integrators and their vendors.
Resellers and integrators are a vital network storage industry segment. I have seen many great conversations take place between vendors and these partners at different SNWs; those exchanges represent opportunity to drive great value for our industry. I believe if SNW focused on optimizing the interaction between vendors and resellers/integrators, it would pay large dividends. Calling the show something else or founding a new show with different sponsorship would be needed, but whatever it takes the sooner this happens the better.
Evolutionary success is based on successful adaptation to new circumstances. It is time to move forward as an industry once again.
--Mike Alvarado, IT consultant and former member of the SNIA Board of Directors.
michael_j_alvarado@earthlink.net
Thursday, August 6, 2009
Toigo hosts C-4 Summit
August 6, 2009 – IT veteran, consultant, columnist and author Jon Toigo, in conjunction with Toigo Partners International and the Data Management Institute, will host the C-4 Summit Cyberspace Edition tomorrow (Friday, August 7). “C-4” stands for cost containment, compliance, continuity and carbon footprint reduction.
The summit is not focused solely on storage, but a look at some of the presenters indicates that there will be some good storage content. A short list of expert presenters includes representatives from vendors such as CA, DataCore Software, Digital Reef, FalconStor Software, Panasonic, Xsigo and Xiotech.
For more information, visit the C-4 Summit site. And here’s the press release:
C-4 Summit Cyberspace Edition Launches
Online Community Established to Help Companies Develop Strategies for Business IT C-4 Issues: Cost Containment, Compliance, Continuity and Carbon Footprint Reduction
DUNEDIN, Fla., August 6, 2009 – If you are like most business IT planners today, you are probably struggling to find ways to contain costs, ensure compliance with regulatory and legal mandates, provide continuity for business critical services, and manage energy demand in your data center. Tactical measures won’t do the job, and neither will stove-piping applications and data in a single-vendor technology stack – you need a C-4 Strategy.
Help has arrived in the form of an online summit and open community called the C-4 Project. The project kicks off Friday with a C-4 Summit Cyberspace Edition at www.c4project.org.
The C-4 Summit features video interviews and presentations by 17 speakers representing both business IT vendors and consumers, who offer their views on business IT strategy going forward. The event is free of charge and serves also to kick off a C-4 Community portal where business and IT planners, vendors and integrators can come together to discuss technology, strategy and best practices going forward.
The C-4 Summit defines the components of a sustainable C-4 Strategy as
• purpose-built infrastructure and virtualization
• unified infrastructure management via Web Services, and
• intelligent information management.
Perspectives are offered by technology experts drawn from both brand name and start-up firms including CA, DataCore Software, Digital Reef, FalconStor Software, Panasonic Corporation, Xsigo Systems, and Xiotech Corporation. A case study is provided by Mike Gayle, IT Director for Calvary Chapel of Fort Lauderdale, FL. Judi Britt, President of e-STORM, a Central Ohio IT user group, gives a report on the challenges confronting her members today.
The C-4 Project is an initiative of Toigo Partners International and the Data Management Institute. Jon Toigo keynotes the event and chairs the project. Toigo is a 25-year IT veteran, consultant, trade press columnist and author of 15 books on business and IT.
Originally scheduled as a live event to take place in Tampa Bay, FL in May, the Summit was re-cast as an online event in response to delegate concerns about travel budgets. To ensure a dialog, on-line question and answer sessions will be scheduled with speakers in September.
C-4 Summit videos will be re-broadcast by several on-line trade press publications, including Enterprise Systems (esj.com), commencing next month, and new videos will be added on an on-going basis.
Come join the C-4 Community.
About Toigo Partners International’s Data Management Institute: DMI is a membership organization that aims to establish a professional identity for those who administer, manage and design data storage infrastructure for their organizations. It provides a location for end users to share their problems and solutions in day-to-day storage operations and compare experiences with specific storage products implemented in their shops. The institute offers a training and certification program designed to educate and empower IT workers who specialize in data management and storage administration. Toigo is also a prominent columnist for several online and print magazines covering IT, including SearchStorage.com, Enterprise Systems Journal online (esj.com), and Storage Networking World online, where his commentary on data storage technology and data protection is read by nearly 500,000 subscribers monthly.
CONTACT:
Jon Toigo
Toigo Partners International
727-736-5367
jtoigo@toigopartners.com.
Monday, August 3, 2009
MLC for enterprise SSDs: The controller angle
August 3, 2009 -- According to conventional wisdom, enterprise-class SSDs require single-level cell (SLC) technology, as opposed to the lower-cost multi-level cell (MLC) technology – both of which fall under the NAND flash memory category. That’s because, generally speaking, SLC has performance, reliability and endurance (longevity) advantages. However, SLC media is much more expensive than MLC media. (In terms of raw media, the cost difference is about 4x.) As such, a variety of vendors are working on ways to combine the advantages of SLC with the low cost of MLC, in many cases using MLC media with advanced software and/or hardware.
In my last post (see “WhipTail: Software solves MLC SSD issues”) I looked at WhipTail Technologies’ software-based approach to this issue. Other SSD vendors, such as STEC, are addressing the issue primarily from the controller angle (which, of course, is very much a software issue as it involves firmware and algorithms).
I recently spoke to Scott Shadley, senior product manager at STEC. He identifies the following as the key areas that will require advancements if MLC is going to succeed in enterprise-class SSDs.
--Existing ECC algorithms in SSD devices need to be improved, because ECC requirements for MLC are much more stringent than for SLC.
--Vendors need to mitigate the wear issues with MLC because, according to the manufacturers’ data sheets, SLC can withstand roughly 10x more write/erase cycles than MLC. Many SSD vendors use wear-leveling techniques to address this problem, but “the goal is to limit, or minimize, writes in general, which wear leveling does not address,” says Shadley.
“Vendors have to come up with new ways for the controller to handle, or manipulate, incoming data so that the SSD is only writing a minimal amount of times, either by controller caching, external caching, and/or algorithms that manipulate the data to minimize the amount of data stored on the media,” he explains. (A rough sketch of this write-minimizing idea appears after Shadley’s last point below.)
--Shadley also notes that, because MLC is slower than SLC, enterprise-class SSDs based on MLC technology will require faster processors, more flash channels, better ways of accessing the flash on those channels, and the ability to take advantage of the new NAND interfaces. “The goal is to minimize the performance and cost differences between MLC and SLC,” he says.
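To make that concrete, here is a bare-bones sketch (mine, in Python -- not STEC’s firmware) of the write-minimizing idea Shadley describes: repeated writes to the same logical block are absorbed in a RAM buffer, and only the newest copy of each block is actually programmed to the flash. All of the names and numbers are hypothetical.

# Illustrative only: a RAM write buffer that coalesces repeated writes to the
# same logical block so that only the latest copy reaches the flash media.
class CoalescingWriteBuffer:
    def __init__(self, flush_threshold=64):
        self.pending = {}                  # logical block address -> latest data
        self.flush_threshold = flush_threshold
        self.flash_programs = 0            # count of actual media writes

    def write(self, lba, data):
        # Overwrites of a hot block are absorbed in RAM; no flash program yet.
        self.pending[lba] = data
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # Program only the newest copy of each buffered block to the media.
        self.flash_programs += len(self.pending)
        self.pending.clear()

buf = CoalescingWriteBuffer()
for i in range(1000):
    buf.write(i % 16, b"x" * 4096)         # 1,000 host writes to 16 hot blocks
buf.flush()
print(buf.flash_programs)                  # only 16 media programs

In a real controller this happens in firmware, typically with power-loss protection on the buffer, but the effect is the same: the flash sees far fewer program cycles than the host issues writes.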
I expect to see some announcements in this space at next week’s Flash Memory Summit, August 11—13 at the Santa Clara Convention Center.
For more information on SSDs, see InfoStor’s SSD Topic Center.
Wednesday, July 29, 2009
WhipTail: Software solves MLC SSD issues
July 29, 2009 – One of the more promising developments in the solid-state disk (SSD) drive space is the potential use of low-cost multi-level cell (MLC) NAND flash memory in enterprise applications and arrays, vs. the high-cost – but more reliable and durable – single-level cell (SLC) technology, as I mentioned in my previous post (see “Intel slashes SSD prices” ). This can basically be accomplished in two ways: via software or via controller enhancements.
Relative newcomer WhipTail Technologies is an example of a vendor that’s using software techniques to overcome some of the inherent limitations of MLC flash memory; namely, write amplification issues that limit the ability of NAND to perform random writes in an effective manner (a performance issue), and wear-out issues (a reliability, or durability, or endurance problem).
To address the performance part of the equation, WhipTail uses buffering (not caching) techniques, in which writes are aggregated into a buffer that’s sized to the erase block of the NAND, according to WhipTail CTO James Candelaria, who claims that this technique enables performance close to the performance specs of the NAND media.
Specifically, the company claims performance of more than 100,000 I/Os per second (IOPS) of sustained, random I/O with 4KB block sizes and a 70/30 read/write split. Other performance specs include a latency of 0.1 milliseconds, and bandwidth of 1.7GBps (internal to the chassis).
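A quick back-of-the-envelope check on those claims: 100,000 IOPS at 4KB per I/O works out to roughly 400MBps of sustained throughput, which fits comfortably within the 1.7GBps of internal bandwidth the company cites -- so the IOPS and bandwidth figures are at least internally consistent.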
The other major problem with MLC flash is wear-out. For example, SLC is rated at about 100,000 cycles per cell, while MLC is rated at only 10,000 cycles/cell before the cell becomes unreliable (and that may go down to 2,000 to 4,000 writes/cell with smaller die sizes).
To address the wear-out issue, WhipTail uses a technique called linearization, which essentially entails writing forward across the disk and not revisiting blocks until the entire array has been utilized. This not only decreases wear on the media, but also increases performance. Working in conjunction with linearization, a defrag process ensures that there is always a minimum amount of free space. This technique also works in conjunction with the drive’s wear-leveling algorithms.
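Here is my own minimal sketch, in Python, of what a linearized, append-forward layout looks like in principle. It is not WhipTail’s code -- just the general idea as I understand it from Candelaria’s description -- and it assumes fixed 4KB writes for simplicity.

# Illustrative only: writes are buffered into erase-block-sized segments and
# programmed forward across the device; no erase block is revisited until the
# write head has swept the entire device once.
ERASE_BLOCK = 128 * 4096        # one erase block = 128 x 4KB pages (made-up size)
DEVICE_BLOCKS = 4096            # erase blocks on the device (made-up size)

class LinearizedLog:
    def __init__(self):
        self.head = 0                   # next erase block to be programmed
        self.segment = bytearray()      # writes buffered until a full erase block
        self.mapping = {}               # logical address -> (erase block, offset)

    def write(self, logical_addr, data_4k):
        # A rewrite simply remaps the logical address to the new location; the
        # stale copy is reclaimed later by the defrag process described above.
        self.mapping[logical_addr] = (self.head, len(self.segment))
        self.segment += data_4k
        if len(self.segment) == ERASE_BLOCK:
            self._program_full_block()

    def _program_full_block(self):
        # One sequential, full-block program, then advance the write head.
        self.head = (self.head + 1) % DEVICE_BLOCKS
        self.segment = bytearray()

The payoff is that random host writes become sequential full-block programs at the media, and every cell ages at roughly the same rate.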
The company’s internal tests indicate that if you rewrite an entire array once a day, the device will last seven years (or longer than most other components in the storage hierarchy).
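Those endurance figures pass a rough sanity check: one full rewrite of the array per day for seven years is about 7 x 365 ≈ 2,555 program/erase cycles per cell, which falls within the 2,000 to 4,000 cycles/cell range cited above for smaller-geometry MLC and well under the nominal 10,000-cycle rating -- assuming the linearization and defrag processes keep write amplification close to 1.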
You can get the details on these two techniques on WhipTail’s web site, as well as details on their products, but what about pricing?
Candelaria contends that WhipTail “provides tier-0 [SSD] performance at the price of tier-1 arrays.”
Well, a 1.5TB WhipTail SSD array is priced at $46,000 retail; a 3TB version at $75,600; and the new 6TB configuration, introduced this month, at $122,500.
Summit, NJ-based WhipTail was spun out of TheAdmins, a reseller, early this year and has been working on its SSD technology since late 2007. Its first product went GA in February. The company sells through resellers, with eight VARs signed up so far.
For more information on SSDs, see InfoStor’s SSD Topic Center.
And if you’re really interested in solid-state technology, consider attending the Flash Memory Summit, August 11—13 at the Santa Clara Convention Center.
Thursday, July 23, 2009
Intel slashes SSD prices
July 23, 2009 – There are still some issues that need to be ironed out with solid-state disk (SSD) drives (e.g., reliability and endurance), but the biggest problem -- and gating factor to adoption -- has been the outrageous price of these devices.
One way to reduce prices is to use the less expensive multi-level cell (MLC) NAND flash technology, as opposed to the more expensive, reliable and durable single-level cell (SLC) technology. But at least for enterprise-class applications, that requires improvements in either controller and/or software technology (which I’ll blog about in an upcoming post).
Another way to reduce SSD prices is to go with a different manufacturing process. That’s what Intel announced this week for its X25-M (Mainstream) line of SSDs, which are admittedly designed primarily for desktops and laptops as opposed to enterprise arrays and applications.
Intel claims a 60% price reduction due to moving from a 50-nanometer manufacturing process to a 34nm process (smaller die size), and a quick price check seems to bear out those claims.
For example, the 80GB X25-M SSD is channel-priced at $225 in 1,000-unit quantities, a 62% reduction from the original price of $595 a year ago. And the 160GB version is priced at $440, down from $945 when it was first introduced. Both of those SSDs come in a 2.5-inch form factor, with a 1.8-inch version, the X18-M, due in August or September.
Intel claims performance of “the same or better” compared to the 50nm predecessors, citing up to 6,600 I/Os per second (IOPS) on 4KB write operations, and up to 35,000 IOPS on read operations. The company also claims a 25% reduction in latency, to 65 microseconds.
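Running the numbers from those list prices: the 80GB drive’s drop from $595 to $225 is indeed about 62%, while the 160GB drive’s drop from $945 to $440 works out to roughly 53%, so Intel’s headline 60% figure is in the right ballpark. That puts the new parts at roughly $2.81/GB (80GB) and $2.75/GB (160GB). And on the performance side, 6,600 write IOPS at 4KB translates to only about 27MBps of random-write throughput (versus roughly 140MBps for the reads, if those are also 4KB) -- a reminder that these drives are aimed at desktops and laptops rather than enterprise arrays.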
Calculated on a cost-per-GB basis, SSDs are still way more expensive than traditional spinning disk drives, but SSD price wars should come as good news for users with the need for speed.
For more info on Intel’s SSDs, click here.
For general information and news, visit InfoStor’s SSD Topic Center.
Monday, July 20, 2009
VARs see improvements in Q3
July 20, 2009 – Robert W. Baird & Co. recently completed its quarterly survey of enterprise VARs, and although Q2 results were flat there is some optimism regarding the second half of this year – particularly regarding technologies such as, not surprisingly, data deduplication and thin provisioning.
The firm surveyed 47 IT resellers with total annual revenue of $11.4 billion and average revenues of $261 million per year.
Results for the second quarter were split, with 43% of the server/storage VARs below plan, 44% on plan, and 13% above plan. However, there were signs of optimism (albeit guarded) for the rest of the year, with more than 75% of the survey participants expecting Q3 to be flat or up and the remaining VARs reporting either limited visibility or expectations that Q3 will be worse than Q2.
Specifically, 54% of the survey respondents expect the third quarter to be the same as the second quarter in terms of revenue; 22% expect it to be more positive; 12% expect it to be more negative than Q2; and 12% said that it was too early to tell.
In terms of technology, as in the previous few surveys, the hottest revenue growth opportunities lie in cost-saving, infrastructure-optimization technologies. In the storage sector, that means data deduplication and thin provisioning. (OK, maybe EMC’s acquisition of Data Domain was worth $2.1 billion, although I still say it wasn’t. See “EMC out-trumps NetApp, or not.”) VARs also cited solid-state disk (SSD) drives as a growth technology.
In a more general sense, storage and virtualization are expected to be the strongest areas for IT spending, while PCs and servers are expected to remain relatively weak.
Server virtualization is increasingly seen as a “must have” technology, with 68% of the Baird survey respondents saying that server virtualization is in strong demand and the remaining 32% saying that it is ramping moderately. In a related finding, virtual desktop infrastructure (VDI) is gaining momentum, with 62% of the resellers noting that VDI is either ramping moderately or experiencing strong growth – up from 39% in Baird’s Q1 survey.
Interestingly, in terms of vendor strength Baird analysts note that heavyweights such as EMC, Sun, HP, Dell and IBM lagged vendors of lower-cost, more innovative technologies, such as Compellent, LeftHand and Data Domain. Dell and IBM were particularly weak in Q2, with 83% and 69% of the VARs below plan, respectively. Conversely, Cisco and Compellent had notable sequential improvements in the reseller rankings, according to Baird analysts.
Finally, VARs ranked NetApp and LeftHand as “the most channel friendly vendors.”
Monday, July 13, 2009
EMC out-trumps NetApp, or not
July 13, 2009 – Now that the bidding battle for Data Domain is over, with EMC set to lay out a whopping $2.1 billion, the question is: What’s next?
To get the opinions of analysts such as the Enterprise Strategy Group’s Steve Duplessie and Wikibon.org’s Dave Vellante, check out senior editor Kevin Komiega’s blog post, “Is Data Domain a good fit for EMC?”
I draw two conclusions from this saga:
--EMC paid way too much
--NetApp did the right thing
I thought the original bid of $1.5 billion was too high, but $2.1 billion for a data deduplication vendor?? With some vendors giving away deduplication for free (most notably, EMC and NetApp) users’ expectations re the cost of deduplication are going down. I’m told that a Data Domain implementation can get real costly real quickly, but even EMC won’t be able to keep those margins up over the long run. As such, EMC’s ROI for Data Domain appears questionable. And that’s a lot of money to pay just to keep a technology out of a competitor’s hands. EMC may appear to be victorious, but it’s a Pyrrhic victory at best.
NetApp officials did the right thing by ditching their egos and walking away from the bidding war. In fact, you could almost argue that NetApp is the victor in this battle.
So what’s next for NetApp? The conventional wisdom is that the company must make acquisitions – particularly on the software front – to round out its IT stack and stay competitive with EMC, IBM, HP, etc. And if you cruise the blogs you’ll find that acquisition speculation tends to focus on vendors such as CommVault, FalconStor, etc. and primary storage optimization vendors such as Ocarina. That assumes that NetApp is as dizzy over dedupe as it appears to be.
My guess is that NetApp will resume its acquisition attempts, but not in the deduplication arena. I think we’re in for some more surprises, and probably in the near future.
And in unrelated news . . .
Also last week, Broadcom appears to have dropped its hostile takeover bid for Emulex after getting rebuffed yet again on its sweetened offer. Something tells me this one isn’t over yet. Broadcom needs Fibre Channel – or at least Fibre Channel over Ethernet – technology, and Emulex isn’t the only Fibre Channel expert in the OC.
Wednesday, July 8, 2009
The drawbacks to data reduction
July 8, 2009 – Data reduction, or capacity optimization, has succeeded in the backup/archive space (i.e., secondary storage), but applying data reduction techniques such as deduplication and/or compression to primary storage is a horse of a different color. This is why the leading vendors in data deduplication for secondary storage (e.g., Data Domain, EMC, IBM, FalconStor, etc.) are not the same players as we find in the market for data reduction on primary storage.
A lot of articles have been written about primary storage optimization (as the Taneja Group consulting firm refers to it), but most of them focus on the advantages while ignoring the ‘gotchas’ associated with the technology. InfoStor (me, in particular) has been guilty of this (see “Consider data reduction for primary storage”).
In that article, I focused on the advantages of data reduction for primary storage, and introduced the key players (NetApp, EMC, Ocarina, Storwize, Hifn/Exar, and greenBytes) and their different approaches to capacity optimization. But I didn’t get into the drawbacks.
In a recent blog post, Wikibon.org president and founder Dave Vellante drills into the drawbacks associated with data reduction on primary storage (which Wikibon refers to broadly as “online or primary data compression”).
Vellante divides the market into three approaches:
--“Data deduplication light” approaches such as those used by NetApp and EMC
--Host-managed data reduction (e.g., Ocarina Networks)
--In-line data compression (e.g., Storwize)
All of these approaches have the same benefits (reduced capacity and costs), but each has a few drawbacks. Recommended reading: “Pitfalls of compressing online storage.”
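For readers who want to see the basic mechanics, here is a toy sketch in Python of the two core techniques behind all of these products -- block-level deduplication and inline compression -- along with a hint at the overhead that Vellante digs into (every write now pays a hashing/compression cost plus an index lookup). It is purely illustrative; none of the vendors above necessarily implements it this way.

# Illustrative only: fixed-block deduplication (store each unique block once,
# keyed by a hash) combined with inline compression of each unique block.
import hashlib
import zlib

BLOCK_SIZE = 4096

class DedupCompressStore:
    def __init__(self):
        self.blocks = {}        # block hash -> compressed block
        self.files = {}         # file name -> list of block hashes

    def write(self, name, data):
        digests = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha1(block).digest()
            if digest not in self.blocks:                   # dedup check
                self.blocks[digest] = zlib.compress(block)  # inline compression
            digests.append(digest)
        self.files[name] = digests

    def stored_bytes(self):
        return sum(len(b) for b in self.blocks.values())

store = DedupCompressStore()
payload = (b"the quick brown fox " * 205)[:BLOCK_SIZE] * 100   # redundant data
store.write("vm1.vmdk", payload)
store.write("vm2.vmdk", payload)           # identical data adds nothing new
print(len(payload) * 2, "logical bytes ->", store.stored_bytes(), "stored bytes")

A production system also has to worry about hash collisions, metadata protection, and the decompression penalty on reads, which is the kind of overhead the ‘pitfalls’ discussion gets into.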
Thursday, July 2, 2009
Nirvanix brings storage from the moon to the cloud
July 3, 2009 – I’ve read a few interesting case studies about cloud storage (and a lot more non-interesting ones) but, for your July 4th reading pleasure, this one from Nirvanix gets my vote as the most interesting application of cloud storage. And you gotta love Nirvanix president and CEO Jim Zierick’s quote.
Here’s the press release:
NIRVANIX BRINGS STORAGE FROM THE MOON TO THE CLOUD WITH SUCCESSFUL LAUNCH OF LUNAR RECONNAISSANCE ORBITER
Comprehensive imagery data from onboard cameras providing deeper understanding of the moon and its environment to be copied to CloudNAS-based solution
SAN DIEGO – June 29, 2009 – An Atlas V 401 rocket carrying two lunar satellites launched from Cape Canaveral Air Force Station in Florida at 5:32 p.m. EDT on June 18th in what is being described as America’s first step to the lasting return to the moon. One of the satellites, the Lunar Reconnaissance Orbiter (LRO), will begin to provide high-definition imagery of the moon once in orbit with a copy of all data stored on the Nirvanix Storage Delivery Network™ via CloudNAS®, a software based gateway to secure enterprise cloud storage.
After a four-day trip, the LRO will begin orbiting the moon, spending at least a year in a low polar orbit collecting detailed information about the lunar environment that will help in future robotic and human missions to the moon. Images from the Lunar Reconnaissance Orbiter Camera will be transmitted from the satellite to a project team at Arizona State University for systematic processing, replicated to secondary high-performance storage in a separate building at ASU and then replicated to the Nirvanix Storage Delivery Network (SDN™). Nirvanix provides a method for storing a tertiary copy of the data offsite by installing CloudNAS and writing a copy directly from the data-receiving servers. ASU and NASA have already transferred multiple TBs of original Apollo mission imagery to the Nirvanix CloudNAS-based solution.
“While this project may be one small step for NASA’s program to extend human presence in the solar system, it definitely represents a giant leap in cloud storage’s ability to provide a reliable, scalable and accessible alternative to tape for long-term retention of enterprise-class data,” said Jim Zierick, President and CEO of Nirvanix. “The tertiary copy of images from the LRO Camera stored on the Nirvanix CloudNAS is online and accessible within seconds and the project managers at ASU do not need to worry about managing offsite storage, allowing them to focus on the more important mission at hand. We are pleased to be part of such a historic project and value our contribution to finding a deeper understanding of the moon and its environment.”
Nirvanix CloudNAS is a fast, secure and easy way to gain access to the benefits of Cloud Storage. As the world's first software-only NAS solution accessible via CIFS or NFS, CloudNAS offers enhanced secure data transfers to any of Nirvanix's globally distributed storage nodes using integrated AES 256-bit encryption and SSL options. Through the Nirvanix CloudNAS, organizations have access to unlimited storage via the Nirvanix Storage Delivery Network with the ability to turn any server on their network into a gateway to the cloud accessible by many existing applications and processes.
For more news and in-depth features on cloud-based storage, see InfoStor’s cloud storage Topic Center.
Monday, June 29, 2009
Will (should) NetApp get acquired?
June 29, 2009 – While we wait for the outcomes of the proposed acquisitions of Data Domain (by EMC or NetApp) and Emulex (by Broadcom), why not turn back to the longest-running acquisition speculation in the storage industry: NetApp.
For some reason, rumors about a potential acquisition of NetApp have been around for almost as long as NetApp has been around. I never gave them (the rumors, not NetApp) much credibility because (a) I don’t see which vendor would benefit sufficiently from a (roughly estimated) $8 billion acquisition of the company and (b) NetApp looks like a real winner in standalone mode.
But I recently read a blog post from Wikibon.org president and founder Dave Vellante that calls these suppositions into question (even though Dave’s conclusion is the same as mine: That NetApp will not get acquired).
Although NetApp is looking pretty good right now, as data centers collapse (in a positive sense) around converged networks, the eventual winners will be the vendors that have the deepest penetration into the whole IT stack; say, Cisco, EMC, HP, IBM and/or Oracle-Sun. Will a pure-play storage vendor be able to prosper in that scenario?
Here are a couple snippets from Dave’s blog, and a link to the full post:
“NetApp is a $3.4 billion company with 8,000 employees and a good balance sheet. But it’s a ‘tweener’ in the IT sector. Not huge like HP and IBM, but much larger than smaller pure plays like 3PAR and Compellent. In his keynote, Warmenhoven raised the question that he said he’s frequently asked: “Who is going to buy NetApp?” His answer is essentially “no one,” because no company wants to own or can afford to own NetApp.
…
The question we have is how much further can NetApp go? Are we witnessing a trend similar to the minicomputer days where the likes of Prime Computer, Wang Labs and Data General, while highfliers in their day, were big but not attractive enough growth prospects to be acquired (notwithstanding DG’s smart move to re-invent the company as a storage player and subsequently sell to EMC)?
Click here for the full post.