I’ve been doing some background research on continuous data protection (CDP) for an upcoming Special Report, and I’m starting to wonder whether the term – not the technology – will survive.
For one, we still have the “true CDP” (recovery to any point in time) vs. “near CDP” (snapshots on steroids) argument confusing things. I spoke to a number of vendors, and they all went to great lengths to explain why the distinction is irrelevant, and then they went on to explain why their approach was (or in some cases, wasn’t) true CDP.
Maybe it is meaningless. The fact is that very few companies really need true CDP. It just depends on how much granularity you need in your recovery point objective (RPO), and for many companies snapshots are more than adequate.
In addition, many vendors now offer a spectrum of recovery points, ranging from periodic snapshots to any-point-in-time (APIT) recovery: Users can choose based on their specific RPO requirements, capacity issues, cost, service level agreements (SLAs), “application-consistent” vs. “crash-consistent” requirements, the importance of the data being protected, etc.
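To make that RPO trade-off concrete, here is a minimal, hypothetical sketch (in Python, not tied to any vendor's product) comparing the worst-case data loss you accept with periodic snapshots vs. continuous journaling; the function names, intervals, and lag values are purely illustrative assumptions.

```python
# A minimal, hypothetical sketch of the RPO trade-off (illustrative only,
# not based on any vendor's product): with periodic snapshots, worst-case
# data loss is the snapshot interval; with continuous journaling ("true CDP"),
# exposure is bounded only by any journal/replication lag.
from datetime import timedelta

def worst_case_loss_snapshots(interval_minutes: float) -> timedelta:
    """A failure just before the next snapshot loses everything since the last one."""
    return timedelta(minutes=interval_minutes)

def worst_case_loss_cdp(journal_lag_seconds: float = 0.0) -> timedelta:
    """Any-point-in-time recovery; exposure is only the journaling lag, if any."""
    return timedelta(seconds=journal_lag_seconds)

if __name__ == "__main__":
    for interval in (240, 60, 15):  # snapshots every 4 hours, hourly, every 15 minutes
        print(f"snapshots every {interval:>3} min -> worst-case loss: {worst_case_loss_snapshots(interval)}")
    print(f"true CDP, 5-second journal lag -> worst-case loss: {worst_case_loss_cdp(5)}")
```

The point isn't the code, of course; it's that "how much data can I afford to lose?" is the number that should drive the snapshot-vs.-CDP decision.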
Further, many traditional backup software vendors have integrated some level of CDP functionality into their backup suites, or at least offer it as an option. This blurs the lines between CDP and backup and, in fact, standalone CDP applications/platforms are becoming rare.
As such, it’s possible that the CDP term will eventually fade away, despite the fact that the Storage Networking Industry Association’s Data Management Forum has a special interest group dedicated to CDP.
As Fadi Albatal, FalconStor Software’s director of marketing, says, “The argument [over true CDP vs. near CDP] is irrelevant. It all depends on your SLA, RPO and RTO requirements. It’s a matter of what is acceptable risk and/or data loss.”
Or as John Ferraro, InMage Systems’ president and CEO says, “We don’t sell CDP; we focus on disaster recovery.”
Monday, December 22, 2008
Wednesday, December 17, 2008
Storage spending according to TIP
Amid the general doom and gloom in the IT industry, the storage segment is showing some signs of surprising resiliency. Then again . . .
On the down side, a recent storage spending survey of Fortune 1000 managers, conducted by research firm TheInfoPro, shows that storage budgets will decrease by an average of 14% in 2009.
In addition, 68% of the companies expect to spend less in 2009 vs. 2008, with only 21% of all respondents planning to spend more than $10 million (remember, all respondents were from Fortune 1000 firms) – which is about a 10% drop from 2008.
Other key findings from TheInfoPro's survey:
--32% of the respondents spent less in 2008 than they had originally budgeted.
--32% expect to spend the same or more in 2009 vs. 2008 – down from 81% in the 2008 vs. 2007 period.
The main culprit, of course, is the macroeconomic malaise, but TheInfoPro CEO (and fellow Boston College alumnus) Ken Male also notes that “Excess capacity and the procurement of low-cost tiers of storage have also contributed to the considerably lower Q4 ‘budget flush’ that we’re witnessing compared to previous years.”
On the positive side, Gartner’s Q3 2008 revenue stats for the external controller-based disk storage market (aka disk arrays) are somewhat upbeat: revenue totaled $4.3 billion in the third quarter, a 10% increase over Q3 2007. However, worldwide revenue declined 4.5% from Q2 to Q3 of 2008.
For those keeping score, the leader board in the disk array market (with market shares in the third quarter) goes like this: EMC (26.1%), IBM (13%), HP (11.3%), Hitachi Data Systems (9.8%), Dell (8.9%), NetApp (8%), Sun (4.3%), Fujitsu (2.5%), and “others” (16.1%).
In another gloomy end-of-year report, investment analysts at Needham & Company lead off with this: “After a year of acting as grief counselors as much as research analysts, we welcome the arrival of a new year. The bad news is that we expect the first half of 2009 (and in particular, Q1) to see a dramatic decline in IT spending.” The Needham analysts are predicting a 2% to 3% overall increase in IT spending in 2009, noting that it will be heavily weighted toward the second half of the year.
Monday, December 1, 2008
2008: The Year of . . .
It’s customary to take a year-end look back at the key trends and technologies that shaped, or re-shaped, our industry. In storage, it’s usually difficult to zero in on just one or two topics, but in the case of 2008 it was relatively easy.
In my view, two technologies dominated this year: virtualization and data de-duplication.
In the case of virtualization, I’m referring to server virtualization and its impact on storage, rather than storage virtualization per se (although the two are careening toward convergence). Server virtualization is arguably the hottest trend in the overall IT market, and virtually all storage vendors are hustling to take advantage of this trend by providing integration hooks and even brand new products that help end users maximize the benefits of server virtualization.
With Microsoft’s introduction of Hyper-V this year, which will ignite competition with VMware and other platforms, adoption of virtualization will continue to surge, and the storage implications – from HBAs to backup software – will take center stage for storage administrators in 2009.
But I’d have to give top honors for the dominant storage technology of 2008 to data de-duplication.
The rapid rise of data de-duplication is no surprise: From the end-user perspective, it’s a no-brainer. There aren’t any cogent arguments against reducing your capacity requirements and costs, particularly in these belt-tightening times.
But the thing I really like about data de-duplication is that it’s one of those technologies that show how end users win in the end. Data de-dupe works against the profit motive of vendors (at least the hardware vendors) because, like thin provisioning, it enables users to significantly reduce the amount of storage they need to buy – or at least defer purchases. And let’s give credit to the early pioneers of data de-duplication, such as Data Domain, for delivering a technology so useful that users, in turn, forced the big vendors to offer something that was not necessarily in the vendors’ near-term best interests.
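For readers who want a feel for why de-dupe shrinks capacity so dramatically, here is a toy sketch – purely illustrative, not modeled on Data Domain or any other product, and using assumed names and a fixed 4KB chunk size – of block-level de-duplication: chunk the data, fingerprint each chunk, and store only the chunks you haven't already seen.

```python
# A minimal, illustrative sketch of block-level de-duplication (not any
# particular vendor's implementation): split data into fixed-size chunks,
# hash each chunk, and store only the chunks that haven't been seen before.
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking; real products often use variable-size chunks

def dedupe(data: bytes, store: dict[str, bytes]) -> list[str]:
    """Return the chunk fingerprints ("recipe") for data, adding only new chunks to store."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:      # store each unique chunk only once
            store[digest] = chunk
        recipe.append(digest)        # the recipe lets us reassemble the original data later
    return recipe

if __name__ == "__main__":
    store: dict[str, bytes] = {}
    backup1 = b"A" * 8192 + b"B" * 4096   # full backup
    backup2 = b"A" * 8192 + b"C" * 4096   # next backup, mostly unchanged
    dedupe(backup1, store)
    dedupe(backup2, store)
    raw = len(backup1) + len(backup2)
    stored = sum(len(c) for c in store.values())
    print(f"raw: {raw} bytes, stored after de-dupe: {stored} bytes")
```

Real implementations typically use variable-size chunking, collision handling, and compression on top, but the principle is the same: repeated backup data is stored once and referenced many times.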
Here are some recent InfoStor articles on data de-duplication:
Users weigh de-dupe options
Users react to de-duplication deals
EMC refreshes data protection, de-dupe software