June 8, 2014

Market research firm IDC has just published figures for one of the worst data storage quarters in recent history, as enterprise buyers all but went on a high-end storage strike. The storage business suffered a major drop in the first quarter of this year.

In a nutshell, here's what IDC's statement had to say: "The total (internal plus external) disk storage systems market generated $7.3 billion in revenue, representing a drop of 6.9 percent from the prior year's first quarter and a sequential decline of 17 percent compared to the seasonally stronger 4th quarter of 2013."

IDC storage research director Eric Sheppard said: "The poor results of the first quarter were driven by several factors, the most important of which was a 25 percent decline in high-end storage spending."

He also singled out the mainstream adoption of storage optimization technologies, a general trend towards keeping systems longer, economic uncertainty, and the ability of customers to address capacity needs on a micro and short-term basis through public cloud offerings.

Storage provider EMC led with a 29.1 percent revenue market share, but its revenue fell 8.8 percent year-on-year, worse than the market as a whole (down 5.2 percent), because its high-end array sales were so weak.

IBM did a lot worse, with a 22.5 percent drop in quarterly revenues year over year. As NetApp and HP declined less than the market, they arguably gained share.

For its part, Dell had an 8.8 percent revenue drop year over year. Although NetApp, in the number two position for revenue market share, saw its market share rise year-on-year, IDC shows its actual revenue declining 2.8 percent over the same period.

Year over year, the storage market as a whole declined by 6.9 percent in revenue terms, with NetApp only declining 2.8 percent, meaning that it gained a bit of market share.
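To see why declining less than the market translates into a share gain, here is a quick back-of-the-envelope sketch in Python. The 6.9 and 2.8 percent declines come from IDC's figures above; the starting revenue and the 12 percent starting share are made-up illustration values:

```python
# Illustrative only: a vendor that shrinks less than the market gains share.
market_before = 7.84e9                # hypothetical prior-year market (USD)
vendor_before = 0.12 * market_before  # hypothetical 12% starting share

market_after = market_before * (1 - 0.069)  # market down 6.9% (IDC)
vendor_after = vendor_before * (1 - 0.028)  # vendor down only 2.8% (IDC)

print(f"share before: {vendor_before / market_before:.1%}")  # 12.0%
print(f"share after:  {vendor_after / market_after:.1%}")    # 12.5%
```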

We also charted IDC's recent quarterly numbers for total storage revenue market shares - not absolute revenues - to see how the trends are developing.

So of course, the question everybody is now asking: is this just a one-quarter drop, or will the move to the cloud gather more momentum while the "Others" category, including hotshot upstarts like Fusion-io, Nimble, Pure, Tegile, Tintri and Violin, captures more disk array business with its new flash products?

So are we seeing the cloud and flash market put a temporary or a permanent kink in the disk array business? We will know more in about three months.

In other IT news

Google has developed a method to save as much as 20 percent of the electricity used to power its many data centers by reaching deep into the core of its infrastructure and fine-tuning how its processors draw power.

In a paper entitled "Towards Energy Proportionality for Large-Scale Latency-Critical Workloads", to be presented next week at the ISCA 2014 computer architecture conference, researchers and engineers from Google and Stanford University discuss an experimental system named "Pegasus" that may save Google large amounts of cash by helping it cut its electricity consumption.

To be sure, Pegasus addresses one of the worst-kept secrets about cloud computing, which is that the computer chips in the gigantic data centers of Google, Amazon and Microsoft are standing idle for significant amounts of time.

Though all these companies have developed sophisticated technologies to try to increase the utilization of their chips, they all fall short in one way or another.

This simply means that a substantial amount of the electricity going into their data centers is completely wasted as it powers server chips that are either completely idle or in a state of very low utilization.

From an operator's perspective, it's a bit of a losing proposition, and from an environmentalist's perspective it's a real blunder.

Now Google and Stanford researchers have designed a new system that makes the power consumption of its data centers more efficient without compromising performance in any way.

Pegasus does this by dialing the power consumption of the processors within Google's servers up and down according to the desired request-latency requirements -- dubbed iso-latency -- of any given workload.

The power management technology that Pegasus uses is 'Running Average Power Limit', or RAPL, which allows CPU power consumption to be tweaked in increments of just 0.125W.
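On stock Linux hardware you can poke at the same mechanism yourself: RAPL limits are exposed through the powercap sysfs interface. Here is a minimal sketch, assuming an Intel CPU with the intel_rapl driver loaded; the sysfs path varies by machine and writing the limit requires root:

```python
# Sketch: read and lower an Intel RAPL package power limit through Linux's
# powercap sysfs interface. Requires root; path varies by machine.
RAPL = "/sys/class/powercap/intel-rapl:0"  # package 0 power domain

def read_int(name):
    with open(f"{RAPL}/{name}") as f:
        return int(f.read())

limit_uw = read_int("constraint_0_power_limit_uw")  # long-term limit, microwatts
print(f"current package limit: {limit_uw / 1e6:.3f} W")

# RAPL limits move in 0.125 W (125,000 microwatt) steps.
with open(f"{RAPL}/constraint_0_power_limit_uw", "w") as f:
    f.write(str(limit_uw - 125_000))
```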

The system "sweeps the RAPL power limit at a given load to find the point of minimum cluster power that satisfies the service-level objective target".

Put another way, Pegasus makes sure that a processor is working just hard enough to meet the demands of the application running on it, and no harder.
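In pseudocode, that sweep might look like the sketch below. The helpers set_power_limit_watts() and measure_p99_latency() are hypothetical stand-ins for Google's internal instrumentation; the paper does not publish this code:

```python
# Sketch of the characterization sweep: walk the RAPL cap downward in
# 0.125 W steps and keep the lowest setting that still meets the latency SLO.
STEP_W = 0.125

def sweep_min_power(max_w, min_w, slo_ms, set_power_limit_watts,
                    measure_p99_latency):
    best = max_w
    limit = max_w
    while limit >= min_w:
        set_power_limit_watts(limit)         # apply the cap via RAPL
        if measure_p99_latency() <= slo_ms:  # does this cap still meet the SLO?
            best = limit                     # lowest safe cap seen so far
            limit -= STEP_W
        else:
            break                            # SLO violated; stop sweeping
    return best
```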

"The baseline can be compared to driving a car with sudden stops and starts. Having said that, iso-latency would then be driving the car at a slower speed to avoid accelerating hard and braking harder," the researchers write.

"The second method of operating a car is much more fuel ef?cient than the ?rst, which is akin to the results we have observed," the paper suggests.

To be sure, existing power management techniques for large data centers advocate turning off individual servers or even individual cores, but the researchers said this was inefficient, and it is.

"Even if spare memory storage is available, moving tens of gigabytes of state in and out of servers is expensive and very time consuming, making it dificult (if not impossible) to react to fast or small changes in load," they explain.

Shutting down individual computer cores, meanwhile, doesn't work due to the specific needs of Google's search technology. "A single user query to the front-end will lead to forwarding of the query to all leaf nodes. As a result, even a small request rate can create a non-trivial amount of work on each server," say the researchers.

"For instance, consider a cluster sized to handle a peak load of 10,000 queries per second (QPS) using 1000 servers," they explain. "Even at a 10 percent load, each of the 1000 nodes are seeing on average one query per millisecond. There is simply not enough idleness to invoke some of the more effective low power modes," they explain.

Hence PEGASUS, which stands for Power and Energy Gains Automatically Saved from Underutilized Systems. The technology is a dynamic, feedback-based controller that enforces the iso-latency policy.

It tweaks the power to the chip according to the task it's running, making sure to not violate any service-level agreements on latency.
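In steady-state operation the controller is driven by measured latency rather than a one-off sweep. The following is a minimal sketch of such a feedback loop; the step sizes, thresholds, and control interval are illustrative assumptions, and the two helpers are the same hypothetical stand-ins as before:

```python
import time

# Sketch of an iso-latency feedback loop: raise the power cap quickly when
# latency threatens the SLO, and shave it off slowly when there is headroom.
def iso_latency_loop(slo_ms, set_power_limit_watts, measure_p99_latency,
                     limit_w=95.0, min_w=40.0, max_w=120.0):
    while True:
        latency_ms = measure_p99_latency()
        if latency_ms > slo_ms:
            limit_w = min(max_w, limit_w + 2.0)    # back off hard: protect SLO
        elif latency_ms < 0.8 * slo_ms:
            limit_w = max(min_w, limit_w - 0.125)  # creep down one RAPL step
        set_power_limit_watts(limit_w)
        time.sleep(1.0)                            # control interval (assumed)
```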

During various tests on Google's production workloads, the researchers found that PEGASUS saved as much as 30 percent of power compared to a non-PEGASUS system during times of low demand, and delivered an 11 percent total energy saving over a 24-hour period.

The team also evaluated it on a full-scale production cluster for Google's search engine. There, PEGASUS did marginally less well, saving between 10 and 20 percent on average during low-utilization periods.

This is due to the way it applied policy across the thousands of servers without taking into account variations between chips.

A potential solution to this is to distribute the PEGASUS controller so that it lives on each node and applies latency policy from there.

"The solution to the so-called 'hot leaf issue' is fairly straightforward-- implement a distributed controller on each server that keeps the leaf latency at a certain latency goal," the researchers write."

In the real world, as any grey-haired veteran of distributed systems can tell you, implementing any kind of distributed controller scheme is akin to inviting a world of confusion and pain into your data center.

However, Google is a gold-plated organization that can fund the necessary engineers to keep a distributed scheme like this working. And it does work.

If PEGASUS were to be implemented in a distributed way, the researchers say it could save up to 35 percent of power over the baseline -- a huge savings for a company the size of Google.

As is typical with Google, the paper gives no details of whether PEGASUS has been deployed across Google's infrastructure in production, but given these huge power savings and the substantial amount of work Google has invested in the technology, we think it's very likely.

"Overall, iso-latency provides a signi?cant step forward towards the goal of energy proportionality for one of the challenging classes of large-scale, low-latency workloads," the researchers write.

The deployment of complex systems like PEGASUS alongside other advanced Google technologies such as OMEGA (cluster management), SPANNER (distributed DBMS), or CPI2 (thread-level performance monitoring) enables Google to make its data centers dramatically more efficient than those operated by smaller, less sophisticated companies.

These technologies will, over time, help Google compete in public cloud with rivals such as Amazon and Microsoft, while serving more ads at a lower cost than before.

Source: IDC Market Research.
