I’ve been doing some background research on continuous data protection (CDP) for an upcoming Special Report, and I’m starting to wonder whether the term – not the technology – will survive.
For one, we still have the “true CDP” (recovery to any point in time) vs. “near CDP” (snapshots on steroids) argument confusing things. I spoke to a number of vendors, and they all went to great lengths to explain why the distinction is irrelevant, and then they went on to explain why their approach was (or in some cases, wasn’t) true CDP.
Maybe it is meaningless. The fact is that very few companies really need true CDP. It just depends on how much granularity you need in your recovery point objective (RPO), and for many companies snapshots are more than adequate.
In addition, many vendors now offer a spectrum of recovery points, ranging from snapshots to any-point-in-time (APIT) recovery: Users can choose based on their specific RPO requirements, capacity issues, cost, service level agreements (SLAs), “application-consistent” vs. “crash-consistent” requirements, the importance of the data being protected, etc.
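To make the granularity trade-off concrete, here’s a minimal sketch of worst-case RPO exposure under different protection schemes. The schedule values are hypothetical, chosen purely for illustration, not taken from any vendor:

```python
# Worst-case data loss (RPO exposure) for different protection schemes.
# Interval values are hypothetical, for illustration only.

schemes = {
    "nightly backup": 24 * 60,          # minutes between recovery points
    "hourly snapshots": 60,
    "5-minute snapshots": 5,
    "true CDP (any point in time)": 0,  # every write is a recovery point
}

for name, interval_min in schemes.items():
    # With periodic snapshots, the worst case is losing everything
    # written since the last recovery point; with true CDP it is ~0.
    print(f"{name}: up to {interval_min} minutes of data at risk")
```

The point of the exercise: if an hour of lost data is acceptable under your SLA, snapshots meet the RPO and the “true CDP” label buys you nothing.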
Further, many traditional backup software vendors have integrated some level of CDP functionality into their backup suites, or at least offer it as an option. This blurs the lines between CDP and backup and, in fact, standalone CDP applications/platforms are becoming rare.
As such, it’s possible that the CDP term will eventually fade away, despite the fact that the Storage Networking Industry Association’s Data Management Forum has a special interest group dedicated to CDP.
As Fadi Albatal, FalconStor Software’s director of marketing, says, “The argument [over true CDP vs. near CDP] is irrelevant. It all depends on your SLA, RPO and RTO requirements. It’s a matter of what is acceptable risk and/or data loss.”
Or as John Ferraro, InMage Systems’ president and CEO says, “We don’t sell CDP; we focus on disaster recovery.”
Monday, December 22, 2008
Wednesday, December 17, 2008
Storage spending according to TIP
Amid the general doom and gloom in the IT industry, the storage segment is showing some signs of surprising resiliency. Then again . . .
On the down side, a recent storage spending survey of Fortune 1000 managers conducted by TheInfoPro research firm shows that storage budgets will decrease 14% on average in 2009.
In addition, 68% of the companies expect to spend less in 2009 vs. 2008, with only 21% of all respondents planning to spend more than $10 million (remember, all respondents were from Fortune 1000 firms) – which is about a 10% drop from 2008.
Other key findings from TheInfoPro’s survey:
--32% of the respondents spent less in 2008 compared to what was originally budgeted
--32% expect to spend the same or more in 2009 vs. 2008 – down from 81% in the 2008 vs. 2007 period.
The main culprit, of course, is the macro economic malaise, but TheInfoPro CEO (and fellow Boston College alumnus) Ken Male also notes that “Excess capacity and the procurement of low-cost tiers of storage have also contributed to the considerably lower Q4 ‘budget flush’ that we’re witnessing compared to previous years.”
On the positive side, Gartner’s Q3 2008 revenue stats for the external controller-based disk storage market (aka disk arrays) are somewhat upbeat. For example, revenue totaled $4.3 billion in the third quarter, a 10% increase over Q3 2007. However, when comparing Q2 and Q3 of 2008, worldwide revenues declined 4.5%.
For those keeping score, the leader board in the disk array market (with market shares in the third quarter) goes like this: EMC (26.1%), IBM (13%), HP (11.3%), Hitachi Data Systems (9.8%), Dell (8.9%), NetApp (8%), Sun (4.3%), Fujitsu (2.5%), and “others” (16.1%).
In another gloomy end-of-year report, investment analysts at Needham & Company lead off with this: “After a year of acting as grief counselors as much as research analysts, we welcome the arrival of a new year. The bad news is that we expect the first half of 2009 (and in particular, Q1) to see a dramatic decline in IT spending.” The Needham analysts are predicting a 2% to 3% overall increase in IT spending in 2009, noting that it will be heavily weighted toward the second half of the year.
Monday, December 1, 2008
2008: The Year of . . .
It’s customary to take a year-end look back at the key trends and technologies that shaped, or re-shaped, our industry. In storage, it’s usually difficult to zero in on one or two topics, but in the case of 2008 it was relatively easy.
In my view, two technologies dominated this year: virtualization and data de-duplication.
In the case of virtualization, I’m referring to server virtualization and its impact on storage, rather than storage virtualization per se (although the two are careening toward convergence). Server virtualization is arguably the hottest trend in the overall IT market, and virtually all storage vendors are hustling to take advantage of this trend by providing integration hooks and even brand new products that help end users maximize the benefits of server virtualization.
With Microsoft’s introduction of Hyper-V this year, which will ignite competition with VMware and other platforms, adoption of virtualization will continue to surge, and the storage implications – from HBAs to backup software – will take center stage for storage administrators in 2009.
But I’d have to give top honors for the dominant storage technology of 2008 to data de-duplication.
The rapid rise of data de-duplication is no surprise: From the end-user perspective, it’s a no-brainer. There aren’t any cogent arguments against reducing your capacity requirements and costs, particularly in these belt-tightening times.
But the thing I really like about data de-duplication is that it’s one of those technologies that show how end users win in the end. Data de-dupe works against the profit motive of vendors (at least the hardware vendors) because, like thin provisioning, it enables users to significantly reduce the amount of storage space they need to buy – or at least defer purchases. And let’s give credit to the early pioneers of data de-duplication, such as Data Domain, that gave users a technology that was so useful that they in turn forced the big vendors to offer a technology that in a way was not in their near-term best interests.
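As a rough illustration of why de-dupe cuts capacity requirements so dramatically, here’s a toy fixed-block de-duplication sketch. It’s deliberately simplified: real products use variable-length chunking, compression, and far more robust indexing than this:

```python
import hashlib

def dedupe(data: bytes, block_size: int = 8):
    """Toy fixed-block de-duplication: store each unique block once,
    and keep an ordered list of hashes to reconstruct the stream."""
    store = {}    # hash -> block (unique blocks only)
    recipe = []   # ordered hashes to rebuild the original data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)
        recipe.append(h)
    return store, recipe

def restore(store, recipe) -> bytes:
    return b"".join(store[h] for h in recipe)

data = b"ABCDEFGH" * 100          # highly redundant data, e.g. backups
store, recipe = dedupe(data)
assert restore(store, recipe) == data
ratio = len(data) / sum(len(b) for b in store.values())
print(f"unique blocks: {len(store)}, dedupe ratio: {ratio:.0f}:1")
```

Backup streams are full of exactly this kind of redundancy (the same files backed up night after night), which is why secondary storage was the first place de-dupe took off.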
Here are some recent InfoStor articles on data de-duplication:
Users weigh de-dupe options
Users react to de-duplication deals
EMC refreshes data protection, de-dupe software
Wednesday, November 12, 2008
Who's #1 in disk arrays? (yawn)
Most end users don’t care much about vendors’ market shares (although they do care who’s rising or falling rapidly) but, maybe because I’m a bit of a sports fan, I like stats and standings, including those in the largest revenue segment of the storage industry – disk arrays.
Gartner Inc.’s numbers for the “external controller-based disk storage market” in the second quarter of this year (the most recent period for which they have final tallies) show that the 800-pound gorilla held on to its first place position, comfortably. On Q2 revenues of just over $1 billion, EMC garnered a 24.3% market share, followed by IBM with 14.1% ($631.7 million) and HP at 11.8% ($528.6 million).
Rounding out the top eight vendors were Dell, Hitachi Data Systems, NetApp, Sun and Fujitsu (including Fujitsu Siemens).
Somewhat surprisingly, Sun had the highest growth year-over-year (a 34.7% increase in revenues), attributable largely to its success with the StorageTek 2000/6000/9000 series of arrays. Also surprisingly, all but two of the top eight vendors had double-digit growth, with NetApp coming in second behind Sun with a 22.9% increase. Only HP (3.4% growth) and Fujitsu (a 2.8% decrease) did not rack up double-digit growth.
In one more surprising stat, the “other vendors” category showed strong growth in Q2, posting a 38.5% increase in revenue year-over-year while increasing market share from 13.5% in Q2 2007 to 15.8% in Q2 2008.
Prediction: NetApp will beat out Hitachi Data Systems within the next quarter or two to gain the #5 position in the disk array market (although it’s important to note that HDS’ revenues exclude OEM revenue from HP and Sun).
Tuesday, November 4, 2008
Virtual servers and storage
One of the, if not the, most interesting topics in the storage space these days -- and probably for the next couple years -- is the challenge of optimizing your storage environment, devices, and software in order to maximize the benefits of virtual servers. For me, it's interesting in part because I don't know squat about virtual server technology. Sure, I understand the basics and the benefits, but I've never been a server administrator, let alone a virtual server administrator.
But I have friends who do understand the synergies between storage and virtual servers in detail. For example, InfoStor recently hosted a Webcast titled Storage Challenges Created by a Virtualized Server Infrastructure, which was presented by Taneja Group analysts Jeff Byrne and Jeff Boles and sponsored by FalconStor Software. (NOTE: Registration is required to view the Webcast.)
The bulk of the presentation addressed the "Five Storage Challenges Exacerbated by Server Virtualization":
1. Storage Utilization Decreased by Server Virtualization
2. Application Performance Dependent on Storage Performance
3. End-to-End Visibility from Virtual Machine through Physical Device
4. Diagnostics, Tuning and Change Management Are More Difficult
5. Backup and Recovery are More Complex
The two Jeffs also provided advice on how to address all of these daunting challenges. If you have anything to do with the storage side of the virtual server equation I urge you to check out this Webcast.
This topic is so hot that VMworld may be one of the, if not the, pre-eminent storage-related trade shows in 2009.
Tuesday, October 28, 2008
We need standards for SSDs
The Storage Networking Industry Association (SNIA) recently announced that it has formed a Solid State Storage Initiative (see SNIA launches SSD initiative). In addition to the SNIA's normal activities such as evangelizing, proselytizing and cheerleading, the SSSI will contribute to standards relating to solid-state disk (SSD) drives.
What we really need here from the SNIA are standards that users, integrators and OEMs can use to compare SSDs, as well as SSDs vs. traditional hard disk drives (HDDs). At a minimum, this standard, or standards, would address performance, providing apples-to-apples metrics to enable comparisons of vendors' performance claims.
But in the case of SSDs, the metrics would have to go way beyond that. For one, they would have to include capacity and price. This would approximate what we get from the Storage Performance Council's SPC benchmarks.
However, the SSD metrics should also encompass durability/reliability and even power consumption. I doubt that it would be possible to come up with a single metric that measured IOPS/$/GB/watts, but the industry will desperately need at least a series of metrics to enable users/integrators/OEMs to make sense out of the nonsense that currently dominates in marketing materials.
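For what it’s worth, a composite figure of merit along those lines might be sketched like this. Every device number below is invented for illustration; defining real, comparable figures is exactly the job a SNIA standard would need to do:

```python
# Hypothetical SSD-vs.-HDD comparison. All figures are made up for
# illustration -- not real benchmark results from any vendor.

devices = {
    "example SSD": {"iops": 30000, "price_usd": 800, "capacity_gb": 64,  "watts": 2.5},
    "example HDD": {"iops": 300,   "price_usd": 200, "capacity_gb": 750, "watts": 12.0},
}

for name, d in devices.items():
    # A single IOPS/$/GB/watts number is probably unworkable, but a
    # small set of ratios makes the trade-offs visible at a glance.
    iops_per_dollar = d["iops"] / d["price_usd"]
    gb_per_dollar = d["capacity_gb"] / d["price_usd"]
    iops_per_watt = d["iops"] / d["watts"]
    print(f"{name}: {iops_per_dollar:.1f} IOPS/$, "
          f"{gb_per_dollar:.2f} GB/$, {iops_per_watt:.0f} IOPS/W")
```

Even with made-up numbers, the pattern is the familiar one: SSDs win on IOPS per dollar and per watt, HDDs win on capacity per dollar, and which matters depends entirely on the workload.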
Friday, October 24, 2008
When will we see FCoE?
Continuing on my Fibre Channel over Ethernet musings . . .
I don't think we'll see any appreciable adoption of FCoE in the end-user community until at least 2010. Products, or at least prototypes, exist today from vendors such as Emulex, QLogic, and Cisco, and each of those vendors has promised to provide me with FCoE customer contacts within the next month or two. If that happens, InfoStor will certainly report on it, but don't hold your breath.
TheInfoPro research firm recently concluded a survey of Fortune 1000 storage managers that included some questions about FCoE adoption. Almost three-fourths (73%) of the respondents did not have FCoE in their plans (vs. 84% six months ago), 21% had it in their long-term plans, and 2% in their near-term plans. About 1% said they had FCoE products in pilot and/or evaluation stages, and another 4% said they were currently using FCoE (although I find that hard to believe).
One thing that might stall adoption is somewhat political, or territorial, in nature: Who would be in charge of the FCoE infrastructure -- the storage or network professionals? FCoE is Fibre Channel, but it's also Ethernet.
On a somewhat related note: Which vendors will succeed in the FCoE market? Those of us in the storage biz might assume it will be the vendors with a lot of Fibre Channel expertise, such as Brocade, Emulex, QLogic, etc. in the area of FCoE-based converged network adapters (CNAs). However, I'm told that there are dozens of other vendors -- including existing NIC vendors and start-ups -- gearing up to enter the CNA space. The same question would apply to FCoE-enabled switches.
This could be an interesting battle brewing.
For more information on FCoE, see:
Q&A: Fibre Channel over Ethernet (FCoE)
And for the Fibre Channel Industry Association's view, see:
FCIA makes the case for FCoE
Wednesday, October 22, 2008
FCoE vs. iSCSI, take 1
The emerging Fibre Channel over Ethernet (FCoE) standard promises to bring back the good old days of the raging Fibre Channel vs. iSCSI debates. Or does it?
First, a little history. When the FCoE concept was first hatched about two years ago, naysayers -- and sometimes the press -- jumped all over it: FCoE was a "last ditch ploy" by Fibre Channel "bigots" to stem the "inevitable tide of Ethernet" taking over all types of data center traffic, including storage via the iSCSI protocol. The rhetoric ran rampant, but then it died down into more politically correct statements such as "Fibre Channel and iSCSI are complementary, not competitive."
I actually bought that for a while, in part because I (and many others) didn't think that iSCSI would ever be considered for enterprise data centers (which is where FCoE would play). As such, iSCSI would eventually dominate at the departmental level and at SMBs while Fibre Channel, or FCoE, would dominate in data centers as companies moved to converged networks, or whatever you want to call them.
Then along came 10GbE (10Gbps Ethernet) which, once it gets cheaper, all of a sudden makes iSCSI seem feasible as the kingpin storage protocol for data centers. Then again, I don't see data-center managers (or at least the storage managers) giving up on all the Fibre Channel equipment, software, and expertise they've accumulated.
It does seem clear that data-center managers will eventually have to make a choice between FCoE and iSCSI, assuming they're moving to converged networks. Hence the controversy and all the early promotional activity from the FCoE camp.
Another possibility is that FCoE will be used as an interim "stepping stone" technology on the path to a true converged network with one physical layer transport for all traffic.
Yet another possibility: The real competitor for FCoE is not iSCSI but, rather, the status quo. In this scenario, data centers keep their Fibre Channel SANs for storage and their Ethernet LANs for everything else -- and never the twain shall meet.
Maybe I just have FCoE on the brain because of some of the company I kept at last week's Storage Networking World show -- which included Brocade, Emulex, and QLogic -- but in my next blog I'll look at some of the lingering questions/issues surrounding this nascent technology.
Tuesday, October 21, 2008
The hottest technologies at SNW
In my last blog I promised to reveal The Hottest Technology at last week's Storage Networking World (SNW) show but first, the runners up.
5. Solid-state disk (SSD) drives. There was a lot of talk about SSDs at SNW (mostly in the context of the raging SSD-vs.-HDD debate), but there weren't many product introductions at the show, although Intel did announce production shipments of its new line of enterprise-class flash drives, which will eventually spur further price erosion in this yet-to-get-hot market.
4. Cloud-based storage. Again, a lot of talk, but few vendors. The only cloud storage vendor I met with was Nirvanix, although this services category is expected to grow rapidly over the next couple quarters. For more on this subject, see "What is cloud-based storage?"
3. Storage efficiency technologies, most notably data de-duplication and thin provisioning. Data de-duplication for secondary storage is becoming widespread, but there was a lot of talk at the show about data de-dupe for primary storage from vendors such as NetApp, Storwize, and Ocarina Networks, and there will be much more to come.
2. Server virtualization. This is clearly the dominant IT trend, but there are so many storage technologies focused on optimizing virtual servers that it didn't quite make the top of my list.
And the winner is . . .
Fibre Channel over Ethernet (FCoE). This is admittedly an odd choice because, in terms of end-user adoption, FCoE may be years away, but . . .
For one, FCoE was the topic of the only major press conference at the show (hosted by QLogic, Cisco, NetApp and VMware).
Two, a number of vendors made FCoE product -- or at least product certification -- announcements at the show (albeit prototypes in most cases), including EMC, NetApp, Brocade, Emulex and QLogic.
And three, FCoE will provide controversy (as in 10Gbps iSCSI over Ethernet vs. Fibre Channel over Ethernet) for years to come.
I'll address the controversial side of FCoE in my next blog.
Wednesday, October 15, 2008
Why trade shows? Because we need the eggs
Greetings from Storage Networking World in Dallas! My colleague Kevin Komiega and I are knee-deep in dozens (seems like hundreds) of meetings with vendors (hey, end users are hard to find at this show). We're covering product highlights on the InfoStor home page, which we'll (hopefully) wrap up over the next day or two, but for now I have a brief respite for reflection on trade shows.
Although the conversations with vendors center on their products and PowerPoints, the meetings invariably start or end with general chit-chat about the show itself. The vendors consistently complain about lack of sales leads, booth traffic, hotel logistics and high prices, yet they're strangely and consistently upbeat (perhaps because I'm usually talking to marketing managers or CEOs and they're always happy). When I ask them how they can justify the costs associated with these shows in light of the above complaints, I usually get a blank stare.
In trying to figure out why vendors (and editors, for that matter) continue going to trade shows, I'm reminded of the Woody Allen lines at the end of Annie Hall.
This guy goes to a psychiatrist and says, "Doc, my brother's crazy. He thinks he's a chicken." And the doctor says, "Well, why don't you turn him in?" And the guy says, "I would, but I need the eggs."
Well, I guess that's pretty much how I feel about trade shows (and relationships). They're totally irrational and crazy and absurd, but I guess we keep going through it because most of us need the eggs.
In my next blog I'll explore The Hottest Technology at Storage Networking World.