Friday, May 28, 2010

VDI: The next big storage challenge

May 28, 2010 – Virtual server administrators and their storage brethren know full well that to reap all the benefits of a virtualized environment you need to clear a lot of storage hurdles. And InfoStor readers know that there are plenty of storage products and strategies to get the job done, whether the challenge is performance or data protection.

But fine-tuning your storage environment to maximize your virtual server environment may seem like a walk in the park compared to performing the same task in a virtual desktop infrastructure (VDI).

As openBench Labs CTO Jack Fegreus put it in a VDI-related lab review we recently posted (see below):

“IT may well find that the distributed computing sea change in server virtualization will be dwarfed by a rapidly growing tsunami in virtualized desktops. On the client side, the corporate pool of desktop and laptop PCs has long been the uncharted sea of IT. And the sheer number of client devices makes the over-provisioning of CPU and storage resources for these systems a huge capital expense.”

Gartner estimates that, so far, only about 500,000 desktops are running on VMs – a fraction of the $150 billion PC market. However, Gartner predicts that IT will migrate 30% of its installed base of PCs to VMs by 2014. If that happens, the ranks of VMs running client systems will swell to more than 18 million.

According to Fegreus, best practices call for deploying desktop VMs four to eight times more densely than server VMs (which is possible because of the sporadic nature of desktop usage). This is great in the context of cost savings, but dense deployment also leads to resource utilization storms involving I/O, storage, CPU and memory resources.

“What makes storage so important are the inextricable links to the capital and operational expenses that IT must restructure to maximize the ROI of a VDI initiative,” according to Fegreus.

To explore storage issues in a VDI context, openBench Labs set up a test scenario with off-the-shelf servers, a VMware ESX 4 hypervisor with lots of other VMware software, and backup software from Veeam, all anchored by a Fibre Channel disk array from Xiotech.

Check out the full lab review: “VDI: For virtual desktops, latency matters.”
http://www.infostor.com/index/articles/display/5907788482/articles/infostor/openbench-lab-review/2010/may-2010/vdi_-for_virtual_desktops.html

Hardly light reading for the holiday weekend, but if virtual desktops are in your future it’s worth the click.

Thursday, May 27, 2010

More musings on NetApp's earnings

May 27, 2010 – The investment community was still atwitter today after NetApp’s earnings report yesterday, which, among other eye-openers, included fourth quarter revenue of $1.17 billion, fiscal year revenue of $3.93 billion, and profits of 50 cents a share, excluding items (or “adjusted income”), which was well ahead of analysts’ predictions of 44 cents per share (see “NetApp wows Wall Street, doubles quarterly profits”).

NetApp (NSDQ: NTAP) shares were trading at $38.06 at one point today, up 17.36% and representing a three-year high, despite many financial analysts putting a “hold” recommendation on the stock yesterday.

Disclosure: I do not, and never did (unfortunately), own NetApp stock. That’s partially due to journalistic ethics, and partially because I’m stupid.

Another interesting tidbit that I failed to mention in our news coverage was that 71% of NetApp’s revenue came from indirect (channel) sales, and that Avnet/Arrow accounted for 27% of total revenue.

In a report on NetApp’s earnings, Canaccord/Genuity analyst Paul Mansky noted that “NetApp is firing on all cylinders and has been for three to four quarters.” However, in the “investment risk” portion of his report, Mansky reiterated his opinion that:

“Longer term, it is unclear how NetApp plans to maintain its current level of relevance as the market consolidates to vendor-centric vertical stacks. The company’s sole OEM partnership, IBM, has not evolved as planned. Absent a greater breadth of solutions, partners and services, NetApp could be relegated to a niche vendor in the data center of tomorrow.”

Which turns the old “Who will acquire NetApp?” question into perhaps a more relevant query: “Who will NetApp acquire?”

NetApp is about 18 years old, and to paraphrase one of the financial analysts: NetApp is 18, but still growing like it’s an adolescent.

Realizing NetApp’s age led me to a strange, but true, observation. I only have a couple of professional memories from a period almost 20 years ago, but one is my first meeting with NetApp (Network Appliance at the time) execs when they introduced their first product. We had written some articles about Auspex, and the Network Appliance propeller heads wanted to show us their better mousetrap.

The fact that this meeting was one of only a couple that I can remember from my professional life 20 years ago can be attributed to either (a) my professional life was rather uneventful at the time, (b) my brain cells have eroded in accordance with my age and beverage preferences, or (c) I was extremely astute and prescient and recognized immediately that this “appliance” thing they had would truly shake up the storage industry. Place all bets on (a) or (b).

My thought at the time: What respectable IT person would buy something that the vendor called an “appliance?” Moreover, who would buy from a vendor that actually incorporated that word in its company name? At the time, an appliance conjured up an image of a toaster, or a Cuisinart. (Remember: This was back when a SAN was called a VAXcluster.)

The rest, as they say, is storage history.

Friday, May 21, 2010

Cloud storage is due for a shakeout

May 21, 2010 – I realize that it may seem ludicrous to predict a shakeout in a market that hasn’t even taken off, but I’m afraid that’s what’s in storage for the cloud storage market.

From where I sit (an editor’s desk, or the receiving end of the PR Gatling guns) there seems to be a startup getting into cloud storage every week. And since this has been going on for well over a year, those dozens of cloud storage providers from last year have probably blossomed into hundreds by now.

IDC predicts that cloud storage will grow from 9% of the overall IT cloud services market in 2009 to 14% in 2013. In dollar terms, that means cloud storage revenues growing from $1.57 billion in 2009 to $6.2 billion in 2013.
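A quick back-of-the-envelope check on those IDC numbers (my arithmetic, not IDC's): growing from $1.57 billion to $6.2 billion over four years works out to roughly a 41% compound annual growth rate.

```python
def cagr(start, end, years):
    """Compound annual growth rate between two revenue points."""
    return (end / start) ** (1 / years) - 1

# IDC's figures: $1.57B (2009) -> $6.2B (2013), a four-year span
growth = cagr(1.57, 6.2, 4)
print(f"Implied CAGR: {growth:.0%}")  # Implied CAGR: 41%
```

Fast growth, to be sure – but as the rest of this post argues, a fast-growing pie doesn't guarantee everyone gets a slice.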

There would appear to be more than enough room for dozens (hundreds?) of players in a market of that size, right? Wrong. There’s only room for a handful of profitable RAID vendors, just to take one example, and the RAID array market is way bigger than the cloud storage market.

What are these legions of cloud storage startups smoking? Or shooting? How can they possibly expect to turn a profit? There are a number of problems:

If I were going to start a storage company (pity the fool), I’d rather be a RAID vendor than a cloud storage provider. In RAID, at least you can undercut EMC and the boys on price. Maybe even technology. You’re not going to be able to do that with cloud storage.

Cloud storage vendors already provide storage at pennies (ok, nickels in some cases) per GB per month. What are you going to do – provide it for drachmas per GB? Not much profit in that business model. A better bet would be to start a disk drive manufacturing business!

There are a number of other obvious impediments for small cloud storage providers (small providers, not clouds).

Look who’s already in the cloud storage business.

Amazon didn’t invent the term ‘cloud storage,’ but they’re almost synonymous with it.

And earlier this week, Google appeared to put its toes into cloud storage (see “Google Launches Online Data Storage Service for Developers” on Enterprise Storage Forum). After some free initial capacity, pricing appears to be 17 cents/GB/month, plus 10 cents/GB for uploading and 15 cents/GB for downloading.
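To put those rates in perspective, here's a trivial cost calculator using the per-GB prices quoted above (the rates are as reported; the usage numbers in the example are made up):

```python
def monthly_cost_usd(stored_gb, uploaded_gb, downloaded_gb,
                     storage_rate=0.17, upload_rate=0.10, download_rate=0.15):
    """Estimate one month's bill from the quoted per-GB rates."""
    return (stored_gb * storage_rate
            + uploaded_gb * upload_rate
            + downloaded_gb * download_rate)

# e.g., 100 GB stored, 20 GB uploaded, 10 GB downloaded in a month
print(f"${monthly_cost_usd(100, 20, 10):.2f}")  # $20.50
```

At roughly $20 a month for 100 GB, you can see why margins in this business are razor thin.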

Sure, Google’s cloud storage service is just for developers now, but if cloud storage truly is a billion dollar market you know that Google will chase Amazon.

And some people think that Microsoft will get into this space (google “windows azure”). Again, mostly for developers, but why stop there?

And then there’s EMC (Mozy, Atmos), the storage vendor that every storage vendor just loves competing with.

And cloud storage consumer kings such as Carbonite, who are moving up the food chain into SMB land.

The list of impediments to succeeding in the cloud storage market goes on and on, just like the list of vendors entering the space.

Don’t get me wrong: Cloud storage is a great thing (no matter how you define it, which can be like trying to nail a blob of mercury to a ceiling of Jell-o), but there just won’t be room for all the vendors clamoring for a piece of the pie.

So, if you’re a cloud storage startup and you still have some capital left . . . Wanna start a RAID company?

Cloud storage news:
ParaScale builds clouds for machine-generated data
InMage targets cloud providers
OpSource launches Cloud Files backup and archive service
Brocade, EMC lay groundwork for private clouds
CTERA offers cloud storage platform for MSPs

Thursday, May 20, 2010

Brocade’s earnings report is boring

May 20, 2010 – Brocade’s report today on its fiscal second quarter earnings was boring, at least compared to, say, Dell’s dazzling report today, HP’s home run earlier this week, or even to Brocade’s bubbly Q1 results.

The best thing you could say about Brocade’s earnings was that there were earnings. The company turned a net profit of $22.4 million in the second quarter. That compares to a net loss of $66.1 million for the same period last year.

Brocade (NSDQ: BRCD) reported Q2 revenues of $501 million, which was 1.1% off its Q2 2009 revenue. That’s what was so surprising: There wasn’t much change in this period vs. the same period a year ago.

For example, if you break Brocade’s business into three major segments, the SAN (Fibre Channel) business accounted for 56% of the company’s total revenues in Q2 2010 vs. 58% in Q2 2009. Global services represented 18% of total revenues, vs. 17% last year, and Ethernet accounted for 26%, vs. 25% in Q2 2009.

The highlight of the Fibre Channel king’s revenue report was . . . Ethernet. Brocade’s Ethernet business posted a 34% gain over the first quarter of this year (although only 2% year-over-year). In fact, Q2 was Brocade’s highest revenue quarter for Ethernet ($156.7 million in products and services) since the company acquired Foundry.

The other bright spot was a surge in Brocade’s federal business, which soared 161% over Q1 revenues.

Brocade officials stuck to their previous prediction of full fiscal 2010 revenues in the $2.1 billion to $2.2 billion range.

Related articles:
EMC expands converged networking deals with Brocade, Cisco
Brocade, EMC lay groundwork for private clouds
Brocade CNAs qualified by EMC, HDS, IBM, NetApp
Solid storage growth in HP’s Q2 report

Wednesday, May 19, 2010

Solid storage growth in HP’s Q2 report

May 19, 2010 – Storage wasn’t the brightest spot in HP’s fiscal Q2 earnings report yesterday, but it was far from the darkest spot. In fact, there weren’t any dark spots.

The only laggards were software (down 1% from last year) and services (which grew only 2% from the previous year, to $8.7 billion).

Overall, HP (NYSE: HPQ) posted revenue for the second quarter of $30.8 billion, up 13% over the same quarter last year. Net earnings were $2.2 billion for the quarter, an increase of 28% from fiscal Q2 2009.

It’s difficult to put context on earnings these days because of the relative macroeconomic conditions today vs. a year ago, but by anyone’s estimates this was a stellar performance by HP in a still-tough climate.

In fact, it prompted The Motley Fool to pen a column comparing HP to IBM (see “HP is the New Big Blue”).

Unfortunately, HP doesn’t get very granular in breaking down its product lines, but the Enterprise Storage and Servers segment racked up $4.5 billion in total revenue, up 31% over the previous year. That segment was led by Industry Standard Server revenue, which increased 54%, while storage revenue increased a solid 16%, with the EVA product line up 3%.

Operating profit for the Enterprise Storage and Servers segment was $571 million (12.6% of revenue), which was up from $250 million (7.2% of revenue) in Q2 2009.

HP is clearly a bellwether for the overall IT industry, although not so much for the storage sector. However, NetApp and Brocade are bellwethers for the storage industry. Brocade (NSDQ: BRCD) will report results for its fiscal second quarter after the market closes tomorrow (Thurs., May 20), and NetApp (NSDQ: NTAP) will announce results for its fiscal fourth quarter next week (Wed., May 26).

Stay tuned to see if you’re in the right business or not as the market turns.

For comments on the results from Brocade’s and NetApp’s previous quarter earnings report, see my blog posts:
Brocade’s earnings a mixed bag
NetApp hit$ a home run

Tuesday, May 18, 2010

Consider open source deduplication

May 18, 2010 – Some vendors (e.g., NetApp, EMC) give away data reduction (aka capacity optimization) technology, while other solutions for compression and data deduplication can get pretty expensive pretty quickly. There’s another alternative: open source deduplication.

Vendors/organizations offering open source deduplication range from Oracle/Sun to Bacula, Nexenta and Zmanda to newcomer Opendedup.

I’m not very familiar with these products, but I recently read an article on open source deduplication on InfoStor sister site Enterprise Storage Forum.

Here are some quotes from some developers/execs involved with open source deduplication:

Opendedup developer Sam Silverberg: “SDFS has performance, scalability and cost advantages over many proprietary solutions, but I think proprietary solutions have some real technical benefits. Replication, source-based deduplication, and 24/7 phone support are not available today in open source solutions.”

Kern Sibbald, founder of Bacula.org and CTO of Bacula Systems: “Proprietary solutions are expensive and the source code is not available, so it is not easy to check or compare their performance. From the deduplication statistics that I have seen from proprietary vendors and those given by open source projects . . . I would say that the open source solutions stack up very well against the proprietary solutions.”

Chander Kant, CEO of Zmanda: “Over time, deduplication will become standard. Just like we have standard algorithms for compression today, there will be standard algorithms and formats for deduplication. And open source shines with standardization. So the future of deduplication is squarely with open source.”

Since I’m not familiar with these open source solutions, I don’t have an opinion (although I sincerely doubt that “the future of deduplication is squarely with open source”). Then again, although the article doesn’t provide stats, it appears as though a lot of users are downloading this free code.
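For the uninitiated, the core idea behind block-level dedup is simple: hash each chunk of data and store each unique chunk only once. Here's a toy sketch (fixed-size chunking for simplicity; real products such as SDFS use far more sophisticated, often variable-size, chunking):

```python
import hashlib

def dedup_ratio(data: bytes, chunk_size: int = 4096) -> float:
    """Split data into fixed-size chunks; count logical vs. unique chunks."""
    store = {}  # hash -> chunk (the deduplicated store)
    total = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        store[hashlib.sha256(chunk).hexdigest()] = chunk
        total += 1
    return total / len(store)  # logical chunks per unique chunk stored

# Highly redundant data (200 chunks, only 2 distinct) dedupes well
data = b"A" * 4096 * 100 + b"B" * 4096 * 100
print(f"{dedup_ratio(data):.0f}:1")  # 100:1
```

The hard parts the vendors argue about – chunk boundaries, index scaling, replication – are exactly what this sketch leaves out.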

Read the full story on Enterprise Storage Forum: “Open Source Deduplication: Ready for Enterprises?”

Data deduplication news from InfoStor:
Quantum adds SMB mode to data deduplication lineup
Infineta unveils deduplication for inter-data-center traffic
EMC debuts Data Domain Global Deduplication Array

Thursday, May 13, 2010

iSCSI vs. FCoE, part deux

May 14, 2010 – It’s an age-old debate – iSCSI vs. Fibre Channel – but in light of all the hubbub around Fibre Channel over Ethernet (FCoE), the debate is heating up again.

If you follow the FCoE news, you’d think that converged networks based on FCoE are an IT inevitability. Eventually (it’s going to be a very slow adoption curve) that may be true, but I think it will be more in the Fortune 1000 space – where the benefits of FCoE will be most apparent – rather than in the SMB space, where low cost is still king.

FCoE provides the ability to run storage traffic over Ethernet (although you have to upgrade to 10GbE and Converged Enhanced Ethernet, or Data Center Bridging), but iSCSI also enables storage traffic over Ethernet – at a much lower cost. And, as with FCoE, you can preserve existing investments.

Running storage traffic over Ethernet is a no-brainer, but it doesn’t require FCoE. Cost-conscious SMBs may find iSCSI more palatable. And if part of your convergence and cost cutting revolves around virtualization, iSCSI (or NAS) may make even more sense. Plus, it’s a simpler and faster route to a converged network.

Of course, the old argument against iSCSI was, in part, related to performance. But that was when the argument centered on 1Gbps Ethernet vs. 4Gbps Fibre Channel. Now that iSCSI can run on 10Gbps Ethernet (as can Fibre Channel via FCoE), those performance-oriented arguments are crumbling.
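For the record, here's the raw line-rate arithmetic behind that old argument (nominal one-direction rates only; this little table is mine, not from any review, and it ignores protocol overhead, which differs between iSCSI/TCP and Fibre Channel):

```python
# Nominal one-direction link bandwidth in MB/s
links_mb_s = {
    "1GbE (iSCSI, the old argument)": 1_000 / 8,
    "4Gb FC": 400,    # FC's quoted rate already nets out 8b/10b encoding
    "10GbE (iSCSI or FCoE)": 10_000 / 8,
}
for name, mb_s in links_mb_s.items():
    print(f"{name}: ~{mb_s:.0f} MB/s")
```

At 10Gbps, Ethernet's raw pipe is more than three times wider than 4Gbps Fibre Channel's – which is why the old performance argument no longer holds.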

But don’t take my word for it.

openBench Labs (which contributes lab reviews to InfoStor), recently set up an interesting test case by building a modest, inexpensive 10GbE iSCSI storage network that turned in some impressive performance results.

You’ll have to read the full review to put some perspective on the performance numbers, but openBench Labs CTO Jack Fegreus clocked average throughput of 5,000 I/Os per second (IOPS) with one virtual machine (VM), which was in line with what openBench Labs achieved with direct Fibre Channel access to the same disk array in previous tests. With two VMs, throughput scaled to about 8,200 IOPS.

Jack concludes his review with this observation: “Given these results, our 10GbE iSCSI configuration with QLogic Intelligent Ethernet Adapters should be able to support the installation of Microsoft Exchange Server on a VM with upwards of 5,000 mail boxes.” Quite sufficient for most SMBs. “The simplest and most immediate strategy for IT to begin leveraging 10GbE in a data center, especially when dealing with a virtual environment, is to begin implementing iSCSI.”
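That 5,000-mailbox figure lines up with the rough one-IOPS-per-average-mailbox sizing heuristic that was common for Exchange at the time. Here's a back-of-the-envelope sketch (the per-mailbox and headroom defaults are my placeholders, not figures from the openBench Labs review; tune them for your user profile):

```python
def supported_mailboxes(measured_iops, iops_per_mailbox=1.0, headroom=1.0):
    """Rough Exchange sizing: divide sustained IOPS by per-mailbox demand.

    iops_per_mailbox and headroom are assumptions for illustration only.
    """
    return int(measured_iops / (iops_per_mailbox * headroom))

print(supported_mailboxes(5000))  # 5000 mailboxes at 1 IOPS each
```

Heavier mail users (or a safety margin) cut that number quickly – at 2 IOPS per mailbox, the same array supports only 2,500.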

Jack’s 10GbE iSCSI network consisted of three Dell PowerEdge servers running Windows Server 2008 R2 and VMware ESX Server 4; iSCSI software from StarWind; dual-port 10GbE Ethernet adapters and 4Gbps Fibre Channel HBAs from QLogic; and a Xiotech Emprise 5000 disk array with two 4Gbps Fibre Channel ports. Performance was measured with Intel’s Iometer benchmark.

Check out the full review: “How to jumpstart SAN + LAN convergence.”

Related blog posts:
Virtual server SANs: FC vs. iSCSI vs. NAS
Intel, Microsoft top 1,000,000 IOPS in iSCSI tests
Video surveillance is a real sweet spot for iSCSI
NAS gains in virtual server environments

And you might want to consider attending this upcoming (May 26) SNIA Webcast: “iSCSI and New Approaches to Backup and Recovery.”

Tuesday, May 11, 2010

Buyer’s guide compares and rates 130 midrange arrays

May 11, 2010 – Analyst firm DCIG recently released its 2010 Midrange Array Buyer’s Guide, which provides details and comparisons on more than 130 disk arrays from 30 vendors.

Each entry includes data on about 30 disk array/controller/software features, and includes weighted, relative scores and rankings for each array.

The buyer’s guide was compiled by DCIG founding analyst Jerome Wendt, with help from some IT end users.

I like Jerome’s content because, before becoming a storage analyst and writer, he was an IT professional – most recently spending five years at First Data Corp. as a senior storage engineer. As such, he brings an end user’s perspective to all of his content.

As he says in his intro: “I wanted to leverage my experiences at First Data so the Buyer’s Guide could alleviate one of the most time-consuming components of the midrange array buying decision – the information gathering and evaluation stage.”

The buyer’s guide is designed primarily for IT professionals that want to cut down on the time spent evaluating midrange arrays before they arrive at a short list that they can test in-house with specific applications. But array vendors might also be interested in the guide in order to make competitive comparisons. Likewise VARs and other channel professionals.

You can also use the report to come up with stats such as:

- 56% of midrange arrays support thin provisioning
- Less than 10% of midrange arrays support deduplication of primary storage
- Less than 30% of the arrays support MAID technology
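Stats like those are just tallies over the guide's feature matrix. Here's a sketch of the arithmetic using invented sample data (the real guide covers more than 130 arrays and about 30 features per entry):

```python
# Each entry: array name -> set of supported features (sample data, not DCIG's)
arrays = {
    "Array-A": {"thin_provisioning", "maid"},
    "Array-B": {"thin_provisioning", "primary_dedup"},
    "Array-C": {"thin_provisioning"},
    "Array-D": set(),
}

def pct_supporting(feature):
    """Percentage of arrays in the matrix supporting a given feature."""
    n = sum(1 for feats in arrays.values() if feature in feats)
    return 100 * n / len(arrays)

print(f"{pct_supporting('thin_provisioning'):.0f}% support thin provisioning")  # 75%
```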

You can get more info on the Midrange Array Buyer’s Guide on the DCIG Web site.
(Pricing starts at $5,000.)

Wednesday, May 5, 2010

Better backups for VMware: VADP + CBT

May 6, 2010 – The fantastic efficiencies of virtual servers can be compromised by poorly designed backup and recovery schemes. And storage specialists tasked with optimizing virtual environments (or the poor server specialists that inherited the job) know full well that backup and recovery can be the weak links in a virtual infrastructure.

The original solution to this problem – VMware Consolidated Backup (VCB) – is, in short, inadequate. (VCB is deployed in less than 10% of VMware shops, according to Wikibon estimates.) Virtually all backup/recovery software vendors have been addressing the problem, with or without VCB, but VMware has stepped up to the plate with some APIs and related technology that promise to significantly improve backup and recovery in VMware environments: vStorage APIs for Data Protection (VADP) and Change Block Tracking (CBT).

VADP, which is the replacement for VCB, integrates backup functionality via vStorage APIs, and CBT (available in vSphere 4.0) enables high-speed block-level incremental and differential backups.
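Conceptually, CBT has the hypervisor keep a map of blocks written since the last backup, so an incremental pass copies only those blocks instead of scanning the whole virtual disk. Here's a toy model of the idea (illustration only – the real mechanism is exposed to backup apps through the VADP/vSphere APIs, not anything like this class):

```python
class ChangeBlockTracker:
    """Toy model of changed-block tracking for incremental backups."""

    def __init__(self, disk_blocks):
        self.blocks = dict(disk_blocks)   # block_id -> data
        self.changed = set(self.blocks)   # everything is "new" at first

    def write(self, block_id, data):
        self.blocks[block_id] = data
        self.changed.add(block_id)        # hypervisor flags the dirty block

    def incremental_backup(self):
        """Copy only blocks changed since the last backup pass."""
        delta = {b: self.blocks[b] for b in self.changed}
        self.changed.clear()              # reset the tracking map
        return delta

disk = ChangeBlockTracker({0: b"boot", 1: b"data"})
disk.incremental_backup()                 # first pass copies both blocks
disk.write(1, b"data-v2")
print(len(disk.incremental_backup()))     # 1 -- only the changed block
```

The payoff is obvious: backup time scales with how much the VM changed, not with how big its disks are.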

At the moment, six vendors’ backup/recovery products have been certified with VADP:

CA (ARCserve)
EMC (Avamar)
IBM (Tivoli Storage Manager)
Symantec (NetBackup and Backup Exec)
Veeam (Veeam Backup)
Vizioncore (vRanger)

VADP and CBT are not standalone products to be used for backup; rather, they’re designed to be integrated into third-party backup applications.

For more information on VADP, start with this series of articles from Wikibon members: “What’s next for VMware backup” on the wikibon.org site.

And if you’re really interested in improving your virtual server backups, join us for an upcoming Webcast, “Backups That Work,” sponsored by Veeam. The link provides more information and a registration page.

Monday, May 3, 2010

What is cloudbursting?

May 3, 2010 – The nice thing about ill- or loosely-defined terms such as cloud computing or cloud storage is that they lead to more confusing, yet catchy, terms. Examples: cloudstorming, cloudware, cloudsourcing, cloud spanning and cloudbursting. You can get definitions for all of those, and more cloud terms, on the Cloud Computing Wiki.

That wiki defines cloudbursting as “The dynamic deployment of a software application that runs on internal organizational compute resources to a public cloud to address a spike in demand.” Cloudbursting is typically used to address seasonal or event-based traffic peaks, and is closely related to cloud balancing.

For context, the wiki offers two references to cloudbursting, both from The 451 Group:

“ISV virtual appliances should underpin a new surge in cloud use followed by self-service mechanisms and enterprise connectors enabling organizations to ‘cloudburst’ to using cloud services.”
and
“In addition to direct sales to enterprises, going forward it hopes that extending out from private clouds to public ones – what we like to call ‘cloudbursting’ – will become a prevailing IT weather pattern and provide it with additional opportunities.”

Cloudbursting could be particularly advantageous in the context of storage and virtual servers.

If you’re confused, or need further clarification, join us for a Webcast this week, Wednesday, May 5, at 1:00 EDT/10:00 PDT: “Server Technology to Cloudburst for Increased Flexibility and Lower Costs.” Presenters will include 451 Group analyst William Fellows and Eric Burgener, senior vice president of product management at InMage Systems.

Here’s the description:
Cloudbursting, or provisioning virtual servers on demand within a cloud infrastructure, provides end users with the ability to then failover applications to these newly provisioned VMs. This opens up a lot of new use cases for cloud infrastructure beyond just the recovery-oriented services available now. This Webcast will focus on the use cases across three different areas -- recovery, administration and production -- and look at what technologies will enable these new use cases for cloud infrastructure. The presentation will also explain what end users should be looking for from their cloud providers, and what benefits and cost savings they can achieve.

Register here.

And then there’s George Clooney’s take on a completely different type of cloudbursting in this YouTube clip from The Men Who Stare at Goats.