Friday, February 27, 2009

Fujitsu to bail from HDD biz

February 27, 2009 -- The hard disk drive (HDD) manufacturer ranks will be winnowed down by one with Fujitsu’s announcement that it will sell its HDD business to Toshiba. The deal is expected to close in June, and financial terms were not released. (At first, Toshiba will own 80% of the joint venture and Fujitsu will own 20%, but it will eventually be owned 100% by Toshiba.)

It’s only fitting that the contraction and consolidation in the storage industry has filtered down to the disk drive level. Who would want to be in this business, anyway? Razor-thin margins, constant price erosion, competition from solid-state disks (SSDs), and the need to double capacity and support new interfaces every 18 months or so.

Toshiba faces a tough integration challenge (remember Seagate’s acquisition of Maxtor?), but the deal should eventually solidify its leadership position in small form factor (2.5-inch and smaller) drives and give the vendor entry into the high-margin enterprise-class drive market. It could also boost Toshiba’s SSD prowess by combining Toshiba’s NAND flash memory technology with Fujitsu’s enterprise HDD technology.

But the real winners here, at least in the short term, will be Seagate, Western Digital and, perhaps to a lesser degree, Hitachi GST. Those players will almost certainly gain immediate market share due to attrition during the integration process. (Seagate experienced roughly 50% revenue attrition during its acquisition of Maxtor, according to Robert W. Baird and Co., although that was admittedly a buyout on a much larger scale.)

For those of you who are insane enough to trade stocks in the volatile disk drive industry, take a close look at Seagate and WD.

Tuesday, February 24, 2009

What's a PeSAN?

In a previous post I mentioned that I’ve been talking to TV and film production houses for an upcoming supplement called Storage in the Studio. Often, these entertainment facilities solve their unique storage challenges (the need for blazing bandwidth and capacious capacity) with “me-too” “white-box” disk subsystems that use conventional interconnects such as Fibre Channel or Ethernet.

But Advanced Digital Services, a Hollywood-based post-production and media services studio, opted for a horse of a different color. The company is using JMR’s PeSAN architecture, which I think (correct me if I’m wrong) is unique in that it uses the ubiquitous, high-speed PCIe bus to connect hosts to JMR’s BlueStor disk arrays. Using the PCIe bus as a storage interconnect eliminates the overhead associated with Fibre Channel, InfiniBand or Ethernet protocol conversion.

Russell Ruggieri, ADS’ director of engineering, says that the BlueStor-PeSAN combo is less expensive than traditional alternatives (JMR cites pricing of less than $1 per GB) and provides plenty of bandwidth for the studio’s non-linear editing work.

Ruggieri says the new storage configuration solved the performance problems associated with his Apple Xsan configuration, providing more than 1,000MBps of throughput and the ability to handle 4K-resolution formats. (JMR claims performance of more than 2,000MBps.) “It also makes it easy to add capacity, and JMR’s PCI adapters give me additional slots on the Macs,” says Ruggieri. ADS may replace the remainder of its 2Gbps Fibre Channel configurations with additional BlueStor systems in a PeSAN configuration.

I had heard about the PeSAN technology about four years ago when it was in development, but I was skeptical. (If it was such a good idea, why hadn’t anybody else thought of it?) But JMR may be on to something with this architecture. The company seems to be gaining momentum in application areas such as content creation, SD/HD video editing, and 2K/4K digital intermediate applications.

Thursday, February 19, 2009

Seagate drops STEC lawsuit

You know you’ve hit the big time when you get sued by Seagate. That’s what happened to solid-state disk (SSD) specialist STEC last April when Seagate filed a patent infringement lawsuit against the company.

Today, that lawsuit was dropped. In legalese, it was a “mutual dismissal,” whatever that implies. No cash changed hands, and neither company licensed its technology to the other.

Not that many people thought Seagate would prevail in the lawsuit (which involved alleged misappropriation of intellectual property relating to hard disk technology), but the dismissal gives STEC a green light. This relatively small company from my neck of the woods (southern California) seems to have a lot of momentum -- at least to the degree that enterprise-class SSDs have a lot of momentum -- having inked OEM deals for its ZeusIOPS SSDs with the likes of EMC, Sun and, most recently, Hitachi Data Systems (with more to come in the very near future).

Related articles:

HDS puts SSDs in disk arrays
Sun ships arrays with SSDs
Who uses SSDs, and why?
SNIA launches SSD initiative
SSDs poised for the enterprise

Tuesday, February 17, 2009

IT energy efficiency -- the book

I just received the latest book (yes, an actual printed book) written by my buddy Greg Schulz. (And no, this isn’t just a plug for a buddy’s book -- Greg is a well-known storage industry analyst.)

Although Greg is known primarily for his storage knowledge, The Green and Virtual Data Center (from CRC Press) covers energy efficiency and virtualization across all areas of IT. This is essential reading for any IT manager who is serious about “going green.” (According to Greg’s research, only about 5% to 10% of IT organizations have a green initiative, need or mandate, although 75% to 85% cite problems such as reliable availability of electrical power and limits on cooling capacities and floor space.)

Some of the broader technologies covered in the 376-page book include energy and cost footprint reduction; cloud-based storage and computing; managed services; intelligent and adaptive power management; blade centers and servers; virtualization and tiering (of servers, storage and networks); data reduction, compression and de-duplication; energy avoidance; and storage-specific technologies such as MAID (massive array of idle disks) and thin provisioning.

Check it out here.

Saturday, February 7, 2009

200TB and 413,138 hairs

I’ve been talking to film and TV pre- and post-production outfits for an upcoming supplement we call Storage in the Studio, which includes case studies of studios that have solved somewhat unique storage problems. I try to focus on the storage angle, but a lot of interesting stuff invariably comes up.

For example, in creating graphics for the feature film The Tale of Despereaux, UK-based Framestore started with 50TB of capacity, and by the end of production the studio had a whopping 200TB -- 400TB if you include a mirrored cluster. (All of the content was stored on Infortrend’s EonStor disk arrays, configured -- somewhat surprisingly -- as RAID-6 for added reliability.) During peak production, Framestore was generating 5TB of animation content per day.

The computer generated imagery (CGI) studio created the 200TB storage cluster using the Lustre open-source Linux cluster file system. The cluster was tightly coupled to a 6,000-core render farm. Fun facts:

--There are 40,089 individual assets in the film
--1,726 final shots were delivered
--The film has 126,248 frames
--The animation crew size was 278 during peak production

And there are 413,138 hairs on Despereaux’s head.
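Those figures invite a little back-of-the-envelope math. A quick sketch (the 24fps frame rate is my assumption, not a number from the studio):

```python
# Rough arithmetic on Framestore's published numbers.
frames = 126_248   # total frames in the film
shots = 1_726      # final shots delivered
fps = 24           # assumed theatrical frame rate

frames_per_shot = frames / shots     # average shot length in frames
runtime_minutes = frames / fps / 60  # implied running time

print(f"~{frames_per_shot:.0f} frames (~{frames_per_shot / fps:.1f} s) per shot")
print(f"~{runtime_minutes:.0f} minutes of final footage")
```

That works out to an average shot of about three seconds, and a running time in the ballpark of an hour and a half -- consistent with a feature film.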

Tuesday, February 3, 2009

Consider data de-duplication plus compression

Data de-duplication gets a lot of ink and engenders a lot of debate, but the option of combining data de-dupe with real-time compression on primary storage is rarely discussed -- maybe because few vendors target compression for primary storage resources.

That’s what Storwize specializes in, and the company recently released test results suggesting that combining de-duplication with data compression, to further enhance overall data reduction, merits a serious look.

Storwize claims that adding real-time compression can improve data reduction by more than 200%, based on its internal testing as well as customer deployments. (In the internal tests, Storwize used data de-duplication solutions from both Data Domain and NetApp.)

Storwize positions data de-duplication as being particularly advantageous for highly redundant backup data sets, but points out that using real-time data compression on the front end (primary storage devices) can significantly reduce de-duplication processing time and enhance throughput, in addition to reducing overall capacity requirements.
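Storwize’s actual pipeline isn’t described in detail here, but the basic idea is easy to sketch: eliminate identical chunks first, then compress whatever unique data remains. A minimal Python illustration (the 4KB chunk size, SHA-256 hashing, and zlib are my choices for the sketch, not anything Storwize has disclosed):

```python
import hashlib
import zlib

def reduce_data(data: bytes, chunk_size: int = 4096):
    """Deduplicate fixed-size chunks, then compress the unique ones.

    Returns (raw size, size after dedupe, size after dedupe + compression).
    """
    # Split the input into fixed-size chunks.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    # De-duplicate: keep one copy of each distinct chunk, keyed by its hash.
    unique = {}
    for chunk in chunks:
        unique.setdefault(hashlib.sha256(chunk).digest(), chunk)

    deduped_size = sum(len(c) for c in unique.values())
    compressed_size = sum(len(zlib.compress(c)) for c in unique.values())
    return len(data), deduped_size, compressed_size

# Highly redundant sample: 100 identical 4KB blocks plus some repetitive text.
sample = b"A" * 4096 * 100 + b"unique trailing data" * 50
raw, deduped, both = reduce_data(sample)
print(f"raw={raw}  after dedupe={deduped}  after dedupe+compression={both}")
```

On redundant data like this, de-duplication does most of the work, and compression then shrinks the surviving unique chunks further -- the same layering Storwize is arguing for.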

To assess the validity of the test results, check them out here.

And if you’re among the early adopters of data compression on primary storage, respond below or email me, and let me know what types of real-life data compression ratios you’re getting.