May 26, 2009 – Even after a long holiday weekend, the industry continues to debate the wisdom and ramifications of NetApp’s proposed acquisition of Data Domain. For our news coverage, see “NetApp shells out $1.5 billion for Data Domain.” And for reaction from analysts and competitors, see Kevin Komiega’s blog post, “NetApp’s competitors take aim at Data Domain deal.”
I agree with the general consensus that, in today’s economy, NetApp may have overpaid. (The buyout price was $25 per share, about a 40% premium over Data Domain’s closing price prior to the announcement.) But I disagree with some of the other commentary.
For example, some analysts, such as Wikibon.org’s David Vellante, predict that the “vision will take forever to execute.” If this were the same kind of technological hurdle that NetApp faced with its acquisition of, say, Spinnaker, I’d agree. But it isn’t. In fact, NetApp doesn’t even have to integrate the NetApp and Data Domain deduplication technologies.
Which brings me to another piece of commentary on the deal: namely, that there’s overlap between the two vendors’ data-reduction product lines. Yes, there’s overlap, but that doesn’t mean that NetApp will have to jettison any products.
In the classic situation of product overlap after an acquisition, the conventional wisdom is to eliminate the overlap. But in this case it’s just another option for customers. For data reduction on secondary or nearline storage, customers can opt for NetApp’s free deduplication technology, or, if they want higher performance or better reduction ratios, they can pay for the Data Domain technology. NetApp wins either way.
I don’t see any reason to kill product lines, so the product overlap objection seems to be a moot point. In fact, customers won’t even have to choose between the two product lines because they can be used in conjunction with each other.
Obviously, NetApp has to continue to branch out if it wants to be competitive with EMC, IBM, etc. What’s debatable is whether data deduplication (or data reduction in general) is a significant technology in the larger IT battle. I don’t think it is (because, as many have observed, it’s a feature and not a market), but to the degree that NetApp is right – that data deduplication is a linchpin technology – the Data Domain deal is a good strategic move.
Wednesday, May 20, 2009
StarWind offers free iSCSI software
May 21, 2009 – iSCSI’s value proposition has always been lower cost than that of Fibre Channel SANs. StarWind Software is taking that value proposition to new levels with a free version of its iSCSI target software.
Free iSCSI isn’t new. Most iSCSI initiators are free (including Microsoft’s), and a few vendors offer free iSCSI target software. However, most of those free targets are limited in the number of iSCSI connections you can have and/or the amount of capacity they support.
In contrast, StarWind’s free iSCSI target software supports an unlimited number of iSCSI connections and an impressive 2TB of capacity.
Of course, you don’t get StarWind’s thin provisioning, continuous data protection (CDP), snapshots, mirroring or replication functionality for free, but unlimited connections and support for 2TB is a great deal. The software installs on any 32-bit or 64-bit Windows server and turns the server into a shared storage SAN device.
But wait, there’s more. The software supports virtual server environments such as VMware ESX and ESXi (and can run inside a VM) and Microsoft’s Hyper-V, as well as server clustering.
To grab a copy of the software, visit the StarWind iSCSI Target Free Version Download Site.
For those not familiar with StarWind: The company has been selling its iSCSI software since 2003, and claims more than 1,000 customers who have paid for its software and more than 15,000 who have downloaded a previous free version (which supported only 2GB of capacity).
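If you do download the free version, one quick post-install sanity check is to confirm that the new target is reachable from your initiator hosts on the standard iSCSI port (TCP 3260). Here’s a minimal, vendor-neutral sketch in Python; the host name is a placeholder, and this only tests TCP reachability, not an actual iSCSI login:

```python
import socket

ISCSI_PORT = 3260          # standard iSCSI target port
TARGET_HOST = "storage01"  # placeholder: the Windows server running the iSCSI target software

def target_is_reachable(host: str, port: int = ISCSI_PORT, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the iSCSI port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    state = "reachable" if target_is_reachable(TARGET_HOST) else "not reachable"
    print(f"iSCSI target {TARGET_HOST}:{ISCSI_PORT} is {state}")
```

It’s no substitute for logging in with a real initiator, but it catches firewall and networking problems before you start configuring servers.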
Monday, May 18, 2009
Disk drive/array market getting SASsier
May 18, 2009 – I just got the most recent disk drive shipment projections from IDC, and I was surprised – not only at the rapid ascendancy of SAS but also at the rapid demise of Fibre Channel drives.
For example, if IDC’s projections hold up, SAS will account for a full 50% of all enterprise-level drive shipments this year. The rapid growth of SAS comes at the expense of Fibre Channel, which is expected to account for only 19% of drive shipments this year. Enterprise-class SATA drives are expected to make up the remaining 31%, according to John Rydning, IDC’s research director for hard disk drives (HDDs).
And by 2012, IDC estimates that Fibre Channel will have a mere 1% market share, compared to 25% for SATA and 74% for SAS.
“Fibre Channel will continue in small volume after this year, mainly for existing systems that will continue to sell in 2010 and for aftermarket support of current installations,” says Rydning.
He expects next-generation (6Gbps) SAS, as well as “capacity-optimized” drives with native SAS interfaces, to further spur adoption of SAS drives. (Arrays with 6Gbps SAS are expected near the end of this year.)
Personally, I don’t think Fibre Channel drives will bite the dust quite so rapidly, but I don’t have access to the raw shipment figures that Rydning does, and he’s been pretty accurate at predicting these things in the past.
I was just surprised at the SAS projections because I still think of SAS as the new kid on the block.
Thursday, May 7, 2009
Data reduction redux: What's in a name?
May 7, 2009 – As mentioned in a previous post (see “Welcome to the data dedupalooza” ), I’ve been researching the topic of data reduction for primary storage for an upcoming article. One thing I’ve noticed is that the industry is playing fast and loose with the term ‘data de-duplication,’ sometimes using it in place of the more general ‘data reduction’ or ‘capacity optimization’ terms.
Maybe that was OK in the realm of secondary storage, but as you begin considering data reduction for primary storage, you’ll want to be more specific about which technology a vendor uses, even though all of the technologies produce the same result (to varying degrees): reduced storage capacity and costs.
In the secondary storage space, in addition to data de-duplication, some vendors use single instancing or compression or a combination of techniques. In the primary storage space, some vendors use data de-duplication, some use compression only, and some combine the two.
Again, all of these technologies have the same goal, but in the case of primary storage (and depending on the type of data sets you have), it may make a difference how a vendor is implementing data reduction. And it’s misleading to refer to any type of data reduction as ‘data de-duplication,’ despite the popularity of the term.
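To make the distinction concrete, here’s a minimal, illustrative sketch (in Python, and not any vendor’s actual implementation) of fixed-block de-duplication alongside plain compression. Real products typically use variable-length chunking, smarter indexing, and often compress the unique chunks as well; the sample data below is deliberately redundant:

```python
import hashlib
import zlib

CHUNK_SIZE = 4096  # fixed-size chunks; real products often use variable-length chunking

def dedupe_ratio(data: bytes, chunk_size: int = CHUNK_SIZE) -> float:
    """Store each unique chunk once and return the resulting reduction ratio."""
    unique = {}  # chunk hash -> chunk
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        unique[hashlib.sha256(chunk).hexdigest()] = chunk
    stored = sum(len(c) for c in unique.values())
    return len(data) / stored if stored else 1.0

# Deliberately redundant sample data; results will differ greatly on real data sets.
sample = b"A" * CHUNK_SIZE * 100 + b"B" * CHUNK_SIZE * 100

print(f"de-duplication ratio: {dedupe_ratio(sample):.0f}:1")
print(f"compression ratio:    {len(sample) / len(zlib.compress(sample)):.0f}:1")
```

On highly redundant data both approaches do well; on unique or already-compressed data, neither will, which is exactly why the implementation details matter more for primary storage than they did for backup.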
If you’re interested in data reduction (aka capacity optimization) to reduce your capacity requirements, delay new purchases, and slash power, cooling and space requirements, plan to attend our upcoming Webcast, titled “Leveraging Capacity Optimization to Reduce Datacenter Footprint and Storage Costs,” on Tuesday, May 12, at 10:00 a.m. Pacific / 1:00 p.m. Eastern. Presenters will include Noemi Greyzdorf, research manager, storage software, at IDC, and Peter Smails, senior vice president of worldwide marketing at Storwize.
The Webcast will cover data reduction for all types of storage – including primary, nearline and secondary data – and provide tips on how to maximize the benefits of capacity optimization. To register, click here.
Tuesday, May 5, 2009
Emulex to Broadcom: "Take a hike"
May 5, 2009 – As expected, Emulex’s board of directors this week unanimously rejected Broadcom’s hostile, $764 million takeover bid. As I posted previously, the offer was $9.25 per share of Emulex stock.
So far, this only qualifies as a mini-saga, but it could escalate into a mega-saga if (a) it sets off a bidding war for Emulex that involves players even bigger than Broadcom and/or (b) the acquisition bidding spills over into Emulex's competitors, such as QLogic.
At stake is nothing less than the nascent, but potentially game-changing, market for converged networks (Ethernet and Fibre Channel), spurred by the overall trend toward data center consolidation.
The buzz on The Street is that Broadcom (or another suitor) will sweeten the pot. Financial analysts have pegged the possible enhanced offer at anywhere from $11 to $15 a share. When Broadcom made the “offer” last week, Emulex stock was trading at about $6 a share. As I write this, it’s trading at $10.76 a share.
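For the arithmetically inclined, here’s a quick sketch (in Python, using the figures above; the roughly $6 pre-bid price is approximate) that shows both how rich Broadcom’s bid looked at the time and that the market is already pricing in something higher:

```python
# Rough arithmetic on the figures above; the ~$6 pre-bid price is approximate.
offer = 9.25        # Broadcom's bid, per share
pre_bid = 6.00      # roughly where Emulex traded before the bid
current = 10.76     # trading price as this was written

print(f"Bid premium over the pre-bid price: {(offer / pre_bid - 1) * 100:.0f}%")  # roughly 54%
print(f"Current price relative to the bid:  {(current / offer - 1) * 100:.0f}%")  # about 16% above it
```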
In rejecting Broadcom’s bid, Emulex returned the hostility. In a letter to Broadcom CEO Scott McGregor, Emulex’s chairman of the board, Paul Folino, made a number of interesting comments.
For example: “As Broadcom is uniquely aware, Emulex has recently won tier-one [OEM] contracts at the expense of Broadcom and other competitors . . .” Ouch.
(The tier-one OEM design wins refer to Emulex’s 10Gbps Ethernet NICs, 10Gbps iSCSI converged network adapters (CNAs), and 10Gbps Fibre Channel over Ethernet (FCoE) CNAs, although the OEMs were not disclosed.)
Another snippet from Folino’s letter to McGregor: “ . . . given that some of these design wins have come at your expense, including your core Ethernet networking business, you are uniquely aware of the future value we have secured and how well positioned we are to unseat you on many other platforms in the near future. We believe your proposal is an opportunistic attempt to capture that value, which rightly belongs to our stockholders.” Ouch again.
Folino ended the letter with a statement that Broadcom’s $9.25-per-share offer “significantly undervalues our company,” thus apparently leaving the door open for a higher bid. But I don’t know: It sounds like Emulex is dead set against being acquired, at least by Broadcom.
In response to the rejection from the Emulex board, Broadcom today took its offer directly to Emulex shareholders. See the “Broadcom Commences All Cash Tender Offer to Purchase Emulex Shares for $9.25 Per Share” press release.
If the Broadcom-Emulex deal fails to go through, financial analysts speculate that one or more of the following vendors could emerge as white knights: Cisco, Intel, Juniper Networks, Marvell, or PMC-Sierra. Brocade was also cited.
Financial analysts have speculated that the Broadcom-Emulex battle could start a bidding war that could also put Emulex archrival QLogic into play. (In an irrelevant aside: QLogic was spun off from Emulex in 1994.)
I think it’s safe to say that FCoE has a future.
In tangentially related news, QLogic last week acquired NetXen, which makes Gigabit Ethernet and 10GbE adapters, for about $21 million. The plot thickens.