Tuesday, December 21, 2010

And the Top 6 storage software vendors are . . .

IDC recently released its Q3 2010 report on the storage software market, and there weren’t any changes on the leader board: the top six vendors held the same rankings in Q3 as in Q2. EMC is still in the top spot with a 24.4% market share on revenue of $768 million in the third quarter.

Pure-play software vendor Symantec held on to its #2 ranking with a 16.5% share on revenue of $518 million, followed by IBM (13.4%) with revenue of $421 million and NetApp (8.4%) with $263 million.

The race is a bit tighter for fifth and sixth place, with CA (3.3%) pulling in $104 million and edging out #6 HP (3.2%, $99 million).

As was the case in the disk array market in the third quarter, the real winner appears to be NetApp, which experienced a growth rate of 19.8% in Q3 2010 vs. Q3 2009. EMC had the second highest growth rate at 13.9%. HP was the only vendor to have negative growth (-9.4%).

The overall storage software market racked up $3.1 billion in revenue, for a growth rate of 8.7% over the same quarter a year ago and a 6.3% boost over the previous quarter, according to Laura DuBois, IDC’s program vice president, storage software.

In terms of storage market segment growth, the top three were storage infrastructure (+37.3% year-over-year), archiving (+12%) and data protection and recovery (+10.7%).

DuBois notes that the big boost in storage infrastructure can be attributed largely to increased spending on automated storage tiering. Other segments of the storage software market include replication, storage management, device management, and file systems.

For more info, see IDC’s press release: “Storage Software Market Continues on Its Growth Trajectory in the Third Quarter”

Related blog post: “Disk arrays: NetApp, HP duke it out for #3 spot”

Friday, December 17, 2010

Broadcom claims 2 million+ IOPS on converged controller

December 17, 2010 – Broadcom this week announced a 10GbE converged (TCP, iSCSI, FCoE, RDMA) controller that the company claims can exceed two million I/Os per second (IOPS). The BCM578X0 chip family is designed for converged network adapters and LAN-On-Motherboard (LOM) implementations.

“It’s the first quad-port, fully converged 10GbE controller, and it can perform at full line rate on all four ports, or 40Gbps bidirectionally,” claims Robert Lusinsky, director of product marketing in Broadcom’s Ethernet Controller Group.

One caveat: Although Broadcom is currently sampling the chip to OEMs, and claims to have some design wins with Tier-1 vendors, production shipments of the chip aren’t expected until the third quarter of 2011.

Broadcom cites Intel, Emulex and QLogic as its primary competitors in the 10GbE converged networking space. Lusinsky says that Broadcom’s key differentiators vs. some of its competitors are port count, chip size and performance.

To date, the highest performance claims in the converged adapter market have been around one million IOPS.

As always, however, performance claims should be taken with a grain – or large block – of salt, and they depend on a wide variety of factors, including CPU utilization.

For server vendors needing to cram more and more chips onto their motherboards, Broadcom points to the size of the quad-port BCM57840: four ports on a 23mmx23mm (0.82 square inches) device. That compares, for example, to Intel’s dual-port 25mmx25mm 82599 processor.
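For a rough sense of the board-space argument, the cited package sizes work out as follows. This is a back-of-the-envelope sketch using only the dimensions quoted above; real board savings also depend on routing, keep-out areas and supporting components:

```python
def area_per_port_mm2(side_mm: float, ports: int) -> float:
    """Footprint of a square package, divided by its port count."""
    return (side_mm ** 2) / ports

# Quad-port 23mm x 23mm Broadcom part vs. dual-port 25mm x 25mm Intel 82599
broadcom = area_per_port_mm2(23, 4)  # 529 mm^2 total -> ~132 mm^2 per port
intel = area_per_port_mm2(25, 2)     # 625 mm^2 total -> ~313 mm^2 per port

print(f"Broadcom: {broadcom:.1f} mm^2/port, Intel: {intel:.1f} mm^2/port")
```

By this measure, the quad-port chip consumes well under half the board area per port.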

Broadcom’s converged controllers are available in three configurations: the dual-port BCM57800 and BCM57810, and the four-port BCM57840.

Other features of Broadcom’s 40-nanometer converged controller include support for PCIe 3.0, Energy Efficient Ethernet, and virtualization technologies such as SR-IOV, NIC partitioning and Virtual Embedded Bridge (VEB). The chips provide full hardware offload for iSCSI, FCoE, TCP and RDMA.

For more info, see the company’s press release.

Monday, December 13, 2010

Dell’s Compellent bid approaches $1 billion

December 13, 2010 – Dell this morning said it has reached a definitive agreement to acquire Compellent for $27.75 per share, or 25 cents more than Dell offered last Thursday. The $27.75 per share offer translates into about $960 million, or $820 million net of Compellent’s cash.

As I mentioned in my earlier post on this saga, the weird thing about it is that Dell’s offer is still significantly below what Compellent had been trading at. Compellent’s stock at one time hit $34 a share. So this is what the financial people call a “take-under.”

In fact, as I write this Compellent’s stock is trading at $27.92, which is still above Dell’s bid price. That would suggest that investors expect the price to go higher, although almost nobody expects a bidding war to ensue at this stage of the game.

Although most of the financial community seems to think that this is a done deal, it’s important to note that Compellent’s shareholders have yet to approve the deal (although both the Dell and Compellent boards have approved the terms of the deal). And it’s likely that lawsuits against Compellent’s board will be filed.

Since this story may not be over yet, it may be premature to speculate on how Dell will position the Compellent line vs. the Dell/EqualLogic line, but I presume that Dell would position Compellent’s arrays “above” the EqualLogic iSCSI arrays and primarily in the Fibre Channel SAN space.

Whatever the positioning between Compellent and EqualLogic, it’s certainly doubtful that the Dell-EMC reseller relationship will last the two years left on the agreement (although Dell officials this morning said that they will continue to offer the EMC options).

According to a note by Stifel Nicolaus analyst Aaron C. Rakers: “For its October 2010 quarter, Dell generated storage revenue of $543 million (3.5% of total; ~3.8% if we include Compellent’s most recent results), which was up only 7% yr/yr (vs. industry growth rate at +19% yr/yr) and down 13% sequentially. Dell had reported that its EqualLogic (iSCSI SAN) revenue grew 66% yr/yr, which leaves us to estimate ~$165 million in revenue (~30% of Dell’s total storage revenue). IDC recently estimated that Dell had a ~9.1% revenue share in the total external disk storage market, which includes the Dell/EMC relationship; Dell having a ~34% estimated revenue share in the fast-growing iSCSI SAN market with EqualLogic.”

Related blog post:

Dell closing in on Compellent acquisition (last Thursday)

Friday, December 10, 2010

Musings on the future of data dedupe

December 10, 2010 – I recently chatted with a few vendors in the data deduplication space. As conversations often do at this time of year, the talk turned toward the future of data deduplication. Here are a few snippets.

“Tier 1 storage vendors will move past point solutions for deduplication next year,” says Tom Cook, CEO at Permabit. “They’re working toward end-to-end deduplication, across SAN, NAS, unified [block and file], nearline and backup.”


“When that happens, once their customers ingest data and get it into a deduplicated state, they’ll never have to re-hydrate that data throughout its lifecycle. The data will stay deduplicated through processes such as replication and backup. That’s a huge savings in workflow, footprint and bandwidth,” says Cook.

“Today, the big vendors use a variety of point solutions, but they’d like to use a single data optimization product across all their platforms, whether it’s block or file, primary or secondary. End-to-end deduplication will creep into the market in 2011 and 2012,” Cook predicts. (Permabit sells deduplication software – dubbed Albireo – to OEMs.)

Personally, I don’t think that single-solution, end-to-end deduplication will happen that quickly, in part because of the huge investments that the Tier 1 vendors have made in their “point solutions,” but we’ll see.

Dennis Rolland, director of advanced technology at Sepaton, has some predictions that are similar to Cook’s, as well as some differing opinions regarding trends in the data deduplication market.

“Dedupe will be required in more places going forward, including primary storage in addition to nearline storage, and end users will have to cut down on how many dedupe solutions they have because of the complexity in managing many disparate solutions,” says Rolland, “but we’ll probably still have distinct solutions for primary and nearline storage deduplication.”

Rolland thinks that the emphasis on deduplication benefits such as capacity, footprint and cost savings is shifting. “Dedupe enables low-bandwidth replication, which in turn enables companies to economically deploy DR [disaster recovery] sites,” he says.

Rolland also links two technologies that will no doubt make my list of The Hottest Storage Technologies for 2011 (assuming I get around to making such a list): data deduplication and cloud storage.

“Dedupe is an enabler for cloud storage,” says Rolland. “It makes it practical to deploy cloud storage because you’re sending, say, 10x less data over the WAN. That has significant implications for deploying cloud-based DR.”
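Rolland’s 10x figure translates directly into replication time over a WAN link. Here’s a minimal sketch; the 500GB nightly change set and 100Mbps link are illustrative assumptions, not figures from the article:

```python
def replication_hours(data_gb: float, link_mbps: float,
                      dedupe_ratio: float = 1.0) -> float:
    """Hours to ship data_gb over a WAN link, after dedupe reduction."""
    effective_gb = data_gb / dedupe_ratio  # data actually sent
    gigabits = effective_gb * 8            # GB -> Gbit
    return gigabits * 1000 / link_mbps / 3600  # Gbit -> Mbit -> sec -> hours

raw = replication_hours(500, 100)          # 500GB nightly change, 100Mbps WAN
deduped = replication_hours(500, 100, 10)  # same data at 10:1 dedupe

print(f"without dedupe: {raw:.1f} h; with 10:1 dedupe: {deduped:.1f} h")
```

At 10:1, an overnight replication window that would otherwise overflow into the business day shrinks to about an hour, which is the economic case for dedupe-enabled DR.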

(Sepaton bundles data deduplication software with its virtual tape libraries, or VTLs.)

Meanwhile, Quantum released the results of an end-user survey this week that suggests U.S. companies could save $6 billion annually in file restore costs by adopting deduplication.

According to the survey of 300 IT professionals, respondents spend an average of 131 hours annually on file restore activities, with 65% restoring files at least once a week. Based on the average wage for IT professionals in the US ($31.55 per hour according to PayScale.com), that equates to an annual cost of $9.5 billion across US businesses. However, Quantum’s survey also found that the companies that are most efficient at file restoration predominantly use deduplication and can complete restores in approximately one-third the average time of all respondents. So, according to Quantum’s press release: “If the broader US market was to achieve similar data restore efficiencies, the potential annual savings for US businesses would be approximately $6 billion.”
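Working backward from the survey’s figures shows how the headline numbers hang together (a sketch; the implied size of the US IT workforce is back-calculated, not stated by Quantum):

```python
HOURS_PER_YEAR = 131   # average time spent on file restores, per the survey
WAGE = 31.55           # USD/hour for US IT professionals, per PayScale.com
US_TOTAL = 9.5e9       # Quantum's aggregate US restore cost

cost_per_worker = HOURS_PER_YEAR * WAGE       # ~$4,133 per IT pro per year
implied_workers = US_TOTAL / cost_per_worker  # ~2.3 million IT pros (implied)

# Efficient (dedupe-using) shops restore in ~1/3 the average time,
# so the potential saving is roughly two-thirds of the total:
savings = US_TOTAL * (1 - 1/3)                # ~$6.3 billion, i.e. "~$6 billion"

print(f"implied IT workforce: {implied_workers/1e6:.1f}M")
print(f"potential savings: ${savings/1e9:.1f}B")
```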

This survey seems a bit misleading to me because it’s not really focused on the advantages of data deduplication per se in a file restore context but, rather, the advantages of disk-based backup/recovery.

Steve Whitner, Quantum’s product marketing manager for DXi, explains: “If you back up to regular [non-deduplicated] disk and you have a need for DR, you have to get that data to another site and you can’t keep data on conventional disk for very long – maybe a few days or a week. So the real issue is not the speed of restore; it’s the fact that companies can now store a month or two of deduplicated backup data on disk.”

You be the judge. Here’s Quantum’s press release and here are some supporting slides from the survey results.

One thing is clear: In 2011, the focus will shift from deduplication for nearline/secondary storage to deduplication for primary storage. Witness two of this year’s biggest storage acquisitions: Dell buying Ocarina Networks and IBM acquiring Storwize. (Storwize’s technology is now in the IBM Real-time Compression business unit.)

Related blog posts:

What is progressive deduplication?

Data deduplication: Permabit finds success with OEM model

Thursday, December 9, 2010

Dell closing in on Compellent acquisition

December 9, 2010 – Putting to rest weeks of speculation, Dell announced today that it is in “advanced discussions” to buy Compellent for $27.50 per share, or approximately $876 million. The weird thing about this is that the $27.50 offer is about 18% less than Compellent’s closing price yesterday. That’s a rarity in the high-tech acquisition space, and the financial folks refer to it as a “take-under.”

It looks like I will be proved wrong. Although many financial analysts predicted that, after losing the battle with HP for 3PAR, Dell would set its sights on Compellent, I predicted that Dell would turn away from the overpriced storage stocks and make a few acquisitions in non-storage IT sectors.

As I write this, Compellent is trading at $29.37 per share. That would suggest that investors think that (a) a bidding war will ensue and/or (b) negotiations will drive the price up.

I doubt that a bidding war will ensue, mainly because none of the cash-rich storage giants really need Compellent’s technology (although at least one financial analyst predicted that NetApp might jump into the ring). Making a bidding war even less likely: Dell has more than $13 billion in cash, and even though it backed out of the 3PAR bidding there’s no way Compellent will go for much more than $1 billion vs. the $2.4 billion that 3PAR commanded.

Compellent’s stock has almost doubled over the past couple months due to acquisition rumors. And back in August (prior to the start of the 3PAR bidding contest) the stock was trading at about $12 a share.

If this deal goes through, it will be interesting to see how Dell positions Compellent’s products relative to Dell’s (enormously successful) EqualLogic line. There’s certainly a good deal of overlap there. But what will be more interesting is what happens to the Dell-EMC relationship, or lack thereof.

Both Dell and Compellent officials said that there was no assurance that the acquisition deal will be finalized, and they don’t plan to comment further until the deal is either consummated or goes south.

Tuesday, December 7, 2010

How bad is SSD performance degradation?

December 7, 2010 -- “It's a fairly well known fact that solid-state disk (SSD) performance can suffer over time. This was quite common in early SSDs, but newer controllers have helped reduce this problem through a variety of techniques. In part one of this two-part look at SSDs, we examine the origins of the performance problems and some potential solutions.”

That’s how Enterprise Storage Forum contributor Jeffrey Layton begins his exhaustive two-part series on SSD performance degradation.

For years (decades, actually), the focus with SSDs was on the devices’ exorbitant prices. Then the attention shifted to reliability, or endurance, issues. But SSD and controller manufacturers have made great strides in those areas over the past couple of years.

Now, the focus may be turning to performance degradation over time.

Layton’s articles are the best I’ve read on this subject. However, a warning: The articles are very long, very technical, and very detailed. But if you (a) are using, or considering using, SSDs (b) have sufficient technical credentials and (c) have a lot of time, I strongly recommend reading the articles.

Part 1 examines the issues that cause performance degradation in SSDs, and looks at some of the solutions (or “workarounds,” because all SSD solutions seem to involve trade-offs) that vendors have implemented.

Part 2 looks at technologies/issues such as write amplification, over-provisioning and the TRIM command, and then delves into some very in-depth testing of Intel’s X25-E SSD in ‘before’ and ‘after’ stress test scenarios. Check it out on Enterprise Storage Forum:

Fixing SSD Performance Degradation, Part 1

Fixing SSD Performance Degradation, Part 2
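As a taste of the concepts Layton covers, write amplification is simply the ratio of what the flash actually writes to what the host asked it to write. A minimal illustrative sketch (the sample numbers are hypothetical, not measurements from the articles):

```python
def write_amplification(host_writes_gb: float, nand_writes_gb: float) -> float:
    """WA = data physically written to flash / data the host requested."""
    return nand_writes_gb / host_writes_gb

# On a fresh drive with sequential writes, the controller writes roughly
# what the host sends, so WA is near the ideal of 1.0:
print(write_amplification(100, 100))

# On a nearly full drive with no TRIM, garbage collection must copy
# still-valid pages before erasing blocks, inflating NAND writes:
print(write_amplification(100, 350))
```

Over-provisioning and TRIM both attack the second case: spare capacity gives the garbage collector room to work, and TRIM tells the controller which pages are dead so it stops copying them.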

Friday, December 3, 2010

Disk arrays: NetApp, HP duke it out for #3 spot

December 3, 2010 – In more good news for the rebounding storage industry, revenues from external disk systems grew 19% in Q3 2010 vs. Q3 2009, topping the $5 billion mark, according to a report from IDC. Revenues for the total (external and internal) disk systems market grew to almost $7 billion, representing an 18.5% year-over-year growth rate.

Total capacity shipped grew 65.2%.

In the external array market, EMC held on to its #1 spot by a wide margin, with $1.35 billion in Q3 revenue and a 26.1% market share. IBM was a distant second with $667 million in revenue and a 12.9% market share.

But the real race is for the #3 position, where NetApp and HP are in a virtual dead heat. (Even dead heats are virtual these days.) NetApp had an 11.6% market share in Q3, followed closely by HP with an 11.1% slice. IDC considers it to be a statistical tie when less than a one percent revenue difference separates two vendors.

Dell finished fifth, with a 9.1% market share on revenue of $471 million.

All of the top five vendors had healthy, double-digit revenue growth (ranging from 11.3% for HP to 28.3% for EMC), but it was NetApp that busted the charts with a whopping 54.9% growth rate.

Looking at the leader board trends over the past few quarters, it would seem safe to say that NetApp has blown past HP and is closing in on Big Blue, except for HP’s 3PAR acquisition. With HP’s marketing muscle behind the 3PAR product line, revenue could crank up pretty quickly. For now, however, 3PAR had a market share of only 0.83% in the third quarter. (Isilon’s slice was 0.75%.)

Other highlights from the IDC report: The NAS market was the fastest-growing segment of the overall storage systems market, posting 49.8% growth in Q3 2010 vs. Q3 2009. EMC led the NAS market with a 46.6% share, followed by NetApp with a 28.9% share.

The iSCSI segment of the overall market also did well, posting 41.4% revenue growth, with Dell/EqualLogic in the top spot (33.8% share) followed by EMC and HP in a tie for second place.

For more details, read the IDC press release: “External Disk Storage Systems Market Records Fourth-Highest Quarterly Revenue in Third Quarter”

Thursday, December 2, 2010

Who says tape is dead?

December 2, 2010 – People have been predicting the termination of tape almost as long as they’ve been predicting the demise of mainframes. But both technologies keep hanging in there.

In the third quarter, tape media manufacturers pulled in almost $203 million in revenue, which is expected to actually increase slightly in the next quarter to $205 million, according to the Santa Clara Consulting Group (SCCG). And that’s just from tape cartridges; those figures don’t include revenue from tape drives and libraries.

All segments (formats) of the tape market are declining rapidly, with one exception: the LTO format.

LTO tape cartridges accounted for more than 86% ($175 million) of total cartridge revenue in the third quarter, on shipments of 6.2 million units, according to SCCG. Sales of LTO-5 (the latest generation, introduced early this year) cartridges were up 100% in Q3 vs. Q2, and accounted for 5% of unit shipments and 15% of revenues. LTO-4 cartridges accounted for 48% of unit shipments and 46% of total LTO revenues. Even the LTO-3 segment grew in the third quarter (up 3%).

One reason that users stick with tape is that it’s still way cheaper than disk. Those LTO-5 cartridges cost only four cents per GB (assuming 2:1 compression).

According to a study conducted by The Clipper Group research and consulting firm, the cost ratio of storing 1TB of data on SATA disk vs. LTO-4 tape is 23:1. Even if you compare the costs of archiving data on a VTL with data deduplication vs. tape, the cost ratio is 5:1, according to the Clipper Group study.

And for the energy-conscious: Tape is 290X less expensive than disk in terms of energy costs. (The Clipper Group study was conducted two years ago, but with cost reductions in both disk and tape those ratios are probably still accurate.)

Sure, accessing data from tape is painfully slow, but successive generations of LTO have typically doubled the transfer rate. (Exception: LTO-5 provided only a nominal increase over LTO-4 – 280MBps vs. 240MBps – but LTO-6 is expected to almost double the transfer rate to 525MBps.) The LTO-6 figure assumes 2.5:1 compression, vs. today’s average compression ratio of 2:1; the higher ratio will come from the use of larger compression history buffers, according to Bruce Master, an LTO Program representative and senior program manager of tape storage systems at IBM.

And LTO-6 will boost native cartridge capacity to 3.2TB, or 8TB in compressed mode.
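A quick sanity check on the LTO-6 projections: the compressed figures follow directly from the native specs and the assumed 2.5:1 compression ratio. (The 210MBps native transfer rate below is back-calculated from the 525MBps compressed figure; it isn’t stated above.)

```python
COMPRESSION = 2.5  # assumed LTO-6 ratio, vs. 2:1 typical today

native_capacity_tb = 3.2  # native LTO-6 cartridge capacity
native_rate_mbps = 210    # MB/s; back-calculated from 525MBps compressed

print(native_capacity_tb * COMPRESSION)  # 8.0 TB compressed
print(native_rate_mbps * COMPRESSION)    # 525.0 MB/s compressed
```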

The LTO Program vendors (HP, IBM and Quantum) claim that more than 3.3 million LTO tape drives, and more than 150 million LTO cartridges, have been shipped.

In addition to the usual capacity and speed improvements, LTO-5 includes some nifty features such as media partitioning (which enables users to create two partitions on the cartridge, one for content and one for an index) and the Linear Tape File System (which is a free download that leverages dual partitioning and provides file system access at the operating system level).

The LTO Program celebrated its 10th anniversary last month.