Wednesday, July 29, 2009

WhipTail: Software solves MLC SSD issues

July 29, 2009 – One of the more promising developments in the solid-state disk (SSD) drive space is the potential use of low-cost multi-level cell (MLC) NAND flash memory in enterprise applications and arrays, as opposed to the high-cost – but more reliable and durable – single-level cell (SLC) technology, as I mentioned in my previous post (see “Intel slashes SSD prices”). This can be accomplished in basically two ways: via software or via controller enhancements.

Relative newcomer WhipTail Technologies is an example of a vendor that’s using software techniques to overcome some of the inherent limitations of MLC flash memory: namely, write amplification, which limits NAND’s ability to perform random writes effectively (a performance issue), and wear-out (a reliability, or endurance, problem).

To address the performance part of the equation, WhipTail uses buffering (not caching) techniques, in which writes are aggregated into a buffer that’s sized to the erase block of the NAND, according to WhipTail CTO James Candelaria. He claims that this technique enables performance close to the rated specs of the NAND media.
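The buffering idea can be sketched in a few lines of purely illustrative Python; the 512KB erase-block size and the flush callback here are assumptions for the example, not details of WhipTail’s implementation:

```python
# Illustrative sketch only (not WhipTail's actual code): coalesce small
# random writes into a buffer sized to the NAND erase block, then flush
# whole blocks sequentially, avoiding partial-block rewrites.

ERASE_BLOCK = 512 * 1024  # hypothetical erase-block size in bytes

class WriteBuffer:
    """Aggregate incoming writes; emit only erase-block-sized chunks."""

    def __init__(self, flush):
        self.flush = flush          # callable that persists one full erase block
        self.pending = bytearray()  # bytes accumulated since the last flush

    def write(self, data: bytes):
        self.pending.extend(data)
        # Flush every complete erase block; partial data stays buffered.
        while len(self.pending) >= ERASE_BLOCK:
            self.flush(bytes(self.pending[:ERASE_BLOCK]))
            del self.pending[:ERASE_BLOCK]

flushed = []
buf = WriteBuffer(flushed.append)
for _ in range(300):
    buf.write(b"x" * 4096)  # 300 random 4KB writes
# 300 x 4KB = 1,200KB: two full 512KB blocks flushed, 176KB still buffered
```

The point of the sketch is that the NAND only ever sees full, aligned erase blocks, which is what sidesteps the write amplification penalty of small random writes.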

Specifically, the company claims performance of more than 100,000 I/Os per second (IOPS) of sustained, random I/O with 4KB block sizes and a 70/30 read/write split. Other performance specs include a latency of 0.1 milliseconds, and bandwidth of 1.7GBps (internal to the chassis).

The other major problem with MLC flash is wear-out. For example, SLC is rated at about 100,000 cycles per cell, while MLC is rated at only 10,000 cycles per cell before the cell becomes unreliable (and that figure may drop to 2,000 to 4,000 cycles per cell as die sizes shrink).

To address the wear-out issue, WhipTail uses a technique called linearization, which essentially entails writing forward across the disk and not revisiting blocks until the entire array has been utilized. This not only decreases wear on the media, but also increases performance. Working in conjunction with linearization, a defrag process ensures that there is always a minimum amount of free space, and both techniques complement the drives’ built-in wear-leveling algorithms.
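A toy model of the linearization idea (the block count here is an assumption; a real array has millions of blocks) shows how writing strictly forward spreads erase cycles evenly:

```python
# Illustrative sketch: always write to the next block in sequence,
# revisiting a block only after every other block has been used once.

NUM_BLOCKS = 8  # hypothetical; real arrays have millions of blocks

class LinearAllocator:
    def __init__(self):
        self.next_block = 0
        self.wear = [0] * NUM_BLOCKS  # program/erase count per block

    def allocate(self):
        blk = self.next_block
        self.wear[blk] += 1
        # Advance strictly forward; wrap only after a full pass.
        self.next_block = (self.next_block + 1) % NUM_BLOCKS
        return blk

alloc = LinearAllocator()
blocks = [alloc.allocate() for _ in range(20)]
# Wear stays even: no block's count differs from another's by more than 1.
```

Because no block is revisited until the whole array has been cycled through, worst-case wear equals average wear, which is exactly what you want from cells with a limited cycle budget.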

The company’s internal tests indicate that if you rewrite an entire array once a day, the device will last seven years (or longer than most other components in the storage hierarchy).
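The back-of-the-envelope math behind that kind of claim is simple: divide the rated program/erase cycles by full-array rewrites per day. The 2,500-cycle figure below is an assumed midpoint of the 2,000-4,000 range cited above, not a WhipTail spec:

```python
# Rough endurance estimate (not WhipTail's internal model): rated
# program/erase cycles divided by full-array rewrites per day.

def lifetime_years(pe_cycles, full_rewrites_per_day=1):
    return pe_cycles / full_rewrites_per_day / 365

# ~2,500 cycles per cell at one full rewrite per day works out to about
# 6.8 years, in the ballpark of the seven-year claim.
print(round(lifetime_years(2500), 1))   # 6.8
print(round(lifetime_years(10000), 1))  # 27.4 for 10,000-cycle MLC
```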

You can get the details on these two techniques, as well as details on its products, on WhipTail’s web site. But what about pricing?

Candelaria contends that WhipTail “provides tier-0 [SSD] performance at the price of tier-1 arrays.”

Well, a 1.5TB WhipTail SSD array is priced at $46,000 retail; a 3TB version at $75,600; and the new 6TB configuration, introduced this month, at $122,500.
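For comparison shoppers, the cost per gigabyte works out as follows (assuming decimal gigabytes):

```python
# Cost-per-gigabyte check on the WhipTail list prices above.
configs_gb_to_price = {1500: 46000, 3000: 75600, 6000: 122500}
per_gb = {gb: price / gb for gb, price in configs_gb_to_price.items()}
for gb, cost in per_gb.items():
    print(f"{gb}GB: ${cost:.2f}/GB")
# Larger configurations are cheaper per GB: about $30.67, $25.20 and $20.42.
```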

Summit, NJ-based WhipTail was spun out of TheAdmins, a reseller, early this year and has been working on its SSD technology since late 2007. Its first product went GA in February. The company sells through resellers, with eight VARs signed up so far.

For more information on SSDs, see InfoStor’s SSD Topic Center.

And if you’re really interested in solid-state technology, consider attending the Flash Memory Summit, August 11-13 at the Santa Clara Convention Center.

Thursday, July 23, 2009

Intel slashes SSD prices

July 23, 2009 – There are still some issues that need to be ironed out with solid-state disk (SSD) drives (e.g., reliability and endurance), but the biggest problem -- and gating factor to adoption -- has been the outrageous price of these devices.

One way to reduce prices is to use the less expensive multi-level cell (MLC) NAND flash technology, as opposed to the more expensive – but more reliable and durable – single-level cell (SLC) technology. But at least for enterprise-class applications, that requires improvements in controller and/or software technology (which I’ll blog about in an upcoming post).

Another way to reduce SSD prices is to go with a different manufacturing process. That’s what Intel announced this week for its X25-M (Mainstream) line of SSDs, which are admittedly designed primarily for desktops and laptops as opposed to enterprise arrays and applications.

Intel claims a 60% price reduction due to moving from a 50-nanometer manufacturing process to a 34nm process (smaller die size), and a quick price check seems to bear out those claims.

For example, the 80GB X25-M SSD is channel-priced at $225 in 1,000-unit quantities, a 62% reduction from the original price of $595 a year ago. And the 160GB version is priced at $440, down from $945 when it was first introduced. Both of those SSDs come in a 2.5-inch form factor, with a 1.8-inch version, the X18-M, due in August or September.
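The claimed reductions are easy to verify from those numbers (the 160GB cut works out to about 53%, a bit less than the headline 60%):

```python
# Sanity-checking the price cuts from the figures in the post.
def pct_drop(old, new):
    return (old - new) / old * 100

print(round(pct_drop(595, 225)))  # 62 -- the 80GB X25-M's reduction
print(round(pct_drop(945, 440)))  # 53 -- the 160GB model's reduction
```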

Intel claims performance of “the same or better” compared to the 50nm predecessors, citing up to 6,600 I/Os per second (IOPS) on 4KB write operations, and up to 35,000 IOPS on read operations. The company also claims a 25% reduction in latency, to 65 microseconds.

Calculated on a cost-per-GB basis, SSDs are still way more expensive than traditional spinning disk drives, but SSD price wars should come as good news for users with the need for speed.

For more info on Intel’s SSDs, click here.

For general information and news, visit InfoStor’s SSD Topic Center.

Monday, July 20, 2009

VARs see improvements in Q3

July 20, 2009 – Robert W. Baird & Co. recently completed its quarterly survey of enterprise VARs, and although Q2 results were flat there is some optimism regarding the second half of this year – particularly regarding technologies such as, not surprisingly, data deduplication and thin provisioning.

The firm surveyed 47 IT resellers with total annual revenue of $11.4 billion and average revenues of $261 million per year.

Results for the second quarter were split, with 43% of the server/storage VARs below plan, 44% on plan, and 13% above plan. However, there were signs of optimism (albeit guarded) for the rest of the year, with more than 75% of the survey participants expecting Q3 to be flat or up and the remaining VARs reporting either limited visibility or expectations that Q3 will be worse than Q2.

Specifically, 54% of the survey respondents expect the third quarter to be the same as the second quarter in terms of revenue; 22% expect it to be more positive; 12% expect it to be more negative than Q2; and 12% said that it was too early to tell.

In terms of technology, as in the previous few surveys, the hottest revenue growth opportunities lie in cost-saving, infrastructure-optimization technologies. In the storage sector, that means data deduplication and thin provisioning. (OK, maybe EMC’s acquisition of Data Domain was worth $2.1 billion, although I still say it wasn’t. See “EMC out-trumps NetApp, or not.”) VARs also cited solid-state disk (SSD) drives as a growth technology.

In a more general sense, storage and virtualization are expected to be the strongest areas for IT spending, while PCs and servers are expected to remain relatively weak.

Server virtualization is increasingly seen as a “must have” technology, with 68% of the Baird survey respondents saying that server virtualization is in strong demand and the remaining 32% saying that it is ramping moderately. In a related finding, virtual desktop infrastructure (VDI) is gaining momentum, with 62% of the resellers noting that VDI is either ramping moderately or experiencing strong growth – up from 39% in Baird’s Q1 survey.

Interestingly, in terms of vendor strength Baird analysts note that heavyweights such as EMC, Sun, HP, Dell and IBM lagged vendors of lower-cost, more innovative technologies, such as Compellent, LeftHand and Data Domain. Dell and IBM were particularly weak in Q2, with 83% and 69% of the VARs below plan, respectively. Conversely, Cisco and Compellent had notable sequential improvements in the reseller rankings, according to Baird analysts.

Finally, VARs ranked NetApp and LeftHand as “the most channel friendly vendors.”

Monday, July 13, 2009

EMC out-trumps NetApp, or not

July 13, 2009 – Now that the bidding battle for Data Domain is over, with EMC set to lay out a whopping $2.1 billion, the question is: What’s next?

To get the opinions of analysts such as the Enterprise Strategy Group’s Steve Duplessie and Wikibon’s Dave Vellante, check out senior editor Kevin Komiega’s blog post, “Is Data Domain a good fit for EMC?”

I draw two conclusions from this saga:

--EMC paid way too much
--NetApp did the right thing

I thought the original bid of $1.5 billion was too high, but $2.1 billion for a data deduplication vendor? With some vendors giving away deduplication for free (most notably, EMC and NetApp), users’ expectations regarding the cost of deduplication are going down. I’m told that a Data Domain implementation can get real costly real quickly, but even EMC won’t be able to keep those margins up over the long run. As such, EMC’s ROI for Data Domain appears questionable. And that’s a lot of money to pay just to keep a technology out of a competitor’s hands. EMC may appear to be victorious, but it’s a Pyrrhic victory at best.

NetApp officials did the right thing by ditching their egos and walking away from the bidding war. In fact, you could almost argue that NetApp is the victor in this battle.

So what’s next for NetApp? The conventional wisdom is that the company must make acquisitions – particularly on the software front – to round out its IT stack and stay competitive with EMC, IBM, HP, etc. And if you cruise the blogs you’ll find that acquisition speculation tends to focus on vendors such as CommVault, FalconStor, etc. and primary storage optimization vendors such as Ocarina. That assumes that NetApp is as dizzy over dedupe as it appears to be.

My guess is that NetApp will resume its acquisition attempts, but not in the deduplication arena. I think we’re in for some more surprises, and probably in the near future.

And in unrelated news . . .

Also last week, Broadcom appears to have dropped its hostile takeover bid for Emulex after getting rebuffed yet again on its sweetened offer. Something tells me this one isn’t over yet. Broadcom needs Fibre Channel – or at least Fibre Channel over Ethernet – technology, and Emulex isn’t the only Fibre Channel expert in the OC.

Wednesday, July 8, 2009

The drawbacks to data reduction

July 8, 2009 – Data reduction, or capacity optimization, has succeeded in the backup/archive space (i.e., secondary storage), but applying data reduction techniques such as deduplication and/or compression to primary storage is a horse of a different color. This is why the leading vendors in data deduplication for secondary storage (e.g., Data Domain, EMC, IBM, FalconStor, etc.) are not the same players as we find in the market for data reduction on primary storage.

A lot of articles have been written about primary storage optimization (as the Taneja Group consulting firm refers to it), but most of them focus on the advantages while ignoring the ‘gotchas’ associated with the technology. InfoStor (me, in particular) has been guilty of this (see “Consider data reduction for primary storage”).

In that article, I focused on the advantages of data reduction for primary storage, and introduced the key players (NetApp, EMC, Ocarina, Storwize, Hifn/Exar, and greenBytes) and their different approaches to capacity optimization. But I didn’t get into the drawbacks.

In a recent blog post, Wikibon president and founder Dave Vellante drills into the drawbacks associated with data reduction on primary storage (which Wikibon refers to broadly as “online or primary data compression”).

Vellante divides the market into three approaches:

--“Data deduplication light” approaches such as those used by NetApp and EMC
--Host-managed data reduction (e.g., Ocarina Networks)
--In-line data compression (e.g., Storwize)

All of these approaches have the same benefits (reduced capacity and costs), but each has a few drawbacks. Recommended reading: “Pitfalls of compressing online storage.”

Thursday, July 2, 2009

Nirvanix brings storage from the moon to the cloud

July 3, 2009 – I’ve read a few interesting case studies about cloud storage (and a lot more uninteresting ones) but, for your July 4th reading pleasure, this one from Nirvanix gets my vote as the most interesting application of cloud storage. And you gotta love Nirvanix president and CEO Jim Zierick’s quote.

Here’s the press release:

Comprehensive imagery data from onboard cameras providing deeper understanding of the moon and its environment to be copied to CloudNAS-based solution

SAN DIEGO – June 29, 2009 – An Atlas V 401 rocket carrying two lunar satellites launched from Cape Canaveral Air Force Station in Florida at 5:32 p.m. EDT on June 18th in what is being described as America’s first step to the lasting return to the moon. One of the satellites, the Lunar Reconnaissance Orbiter (LRO), will begin to provide high-definition imagery of the moon once in orbit with a copy of all data stored on the Nirvanix Storage Delivery Network™ via CloudNAS®, a software-based gateway to secure enterprise cloud storage.

After a four-day trip, the LRO will begin orbiting the moon, spending at least a year in a low polar orbit collecting detailed information about the lunar environment that will help in future robotic and human missions to the moon. Images from the Lunar Reconnaissance Orbiter Camera will be transmitted from the satellite to a project team at Arizona State University for systematic processing, replicated to secondary high-performance storage in a separate building at ASU and then replicated to the Nirvanix Storage Delivery Network (SDN™). Nirvanix provides a method for storing a tertiary copy of the data offsite by installing CloudNAS and writing a copy directly from the data-receiving servers. ASU and NASA have already transferred multiple TBs of original Apollo mission imagery to the Nirvanix CloudNAS-based solution.

“While this project may be one small step for NASA’s program to extend human presence in the solar system, it definitely represents a giant leap in cloud storage’s ability to provide a reliable, scalable and accessible alternative to tape for long-term retention of enterprise-class data,” said Jim Zierick, President and CEO of Nirvanix. “The tertiary copy of images from the LRO Camera stored on the Nirvanix CloudNAS is online and accessible within seconds and the project managers at ASU do not need to worry about managing offsite storage, allowing them to focus on the more important mission at hand. We are pleased to be part of such a historic project and value our contribution to finding a deeper understanding of the moon and its environment.”

Nirvanix CloudNAS is a fast, secure and easy way to gain access to the benefits of Cloud Storage. As the world's first software-only NAS solution accessible via CIFS or NFS, CloudNAS offers enhanced secure data transfers to any of Nirvanix's globally distributed storage nodes using integrated AES 256-bit encryption and SSL options. Through the Nirvanix CloudNAS, organizations have access to unlimited storage via the Nirvanix Storage Delivery Network with the ability to turn any server on their network into a gateway to the cloud accessible by many existing applications and processes.

For more news and in-depth features on cloud-based storage, see InfoStor’s cloud storage Topic Center.