

Oracle closes a gap with its competitor SAP


June 11, 2014

As it's often said, a company's immediate future is its competitor's past.

Having said that, Oracle isn't too worried right now: the changes it has made to let its database store data in fast memory afford far more backward compatibility than SAP's HANA product does.

With the launch of the new 'Oracle Database In-Memory' technology yesterday, Oracle has closed a gap with its competitor SAP and thrown further doubt on some of the value claimed by newer, younger startups.

The new option lets administrators of Oracle's 12c database easily shift data into server DRAM, a much more responsive storage medium than traditional spinning disk drives.

With this upgrade, Oracle has tightened the competition with IBM, Microsoft and, most significantly, its biggest competitor, SAP.

The German software company, which introduced an in-memory processing system named HANA back in late 2010, has been fairly active of late, and we can only imagine that it will try to fend off Oracle as best it can.

The difference between HANA and Oracle's new software is that Oracle's in-memory option is compatible with all existing Oracle apps built on 12c, letting admins get the advantages of memory without having to drastically rewrite the applications they've worked so hard on.

"We've implemented this so that you have to do nothing to your applications to test it out," said Oracle vice president of product management Tim Shetler.

"Overall, that was purely so that people could adopt it easily, and not have to open up the app and be forced to make changes. Everything they could have done previously is still accessible when running with the in-memory option," added Shetler.

Oracle has also let the system tier data across different memory and storage mediums within the same cluster. "Even though this is memory-optimized database technology, there's no requirement that the entire database or dataset be in memory to use it," said Shetler.

"We also have the ability to spread a large dataset across different tiers of storage. If you run a query against that data it will all transparently come back," he added.

So, what's different? "Well a lot of the transparency comes through enhancements we've had to make to the optimizer in the database itself," Shetler said. "The optimizer is always the one that sees a SQL statement and figures out what is the fastest way to analyze this request. The optimizer will then decide based on the request which direction to route the request."
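
As a purely conceptual illustration of that kind of routing decision, and not Oracle's actual optimizer logic, a planner could weigh how much data a statement touches and how selective its predicates are before choosing the columnar in-memory copy or the row store. The toy heuristic and names below are invented for this example.

    # Conceptual illustration only: a toy planner that routes a statement to a
    # columnar in-memory scan or a row-store lookup. The heuristic and names
    # are invented for this example and are not Oracle's optimizer.
    def choose_access_path(query):
        analytic = query["has_aggregates"] or query["rows_estimated"] > 100_000
        selective = query["predicate_selectivity"] < 0.01
        if analytic and not selective:
            return "in-memory columnar scan"
        return "row-store index lookup"

    q = {"has_aggregates": True, "rows_estimated": 5_000_000,
         "predicate_selectivity": 0.4}
    print(choose_access_path(q))  # -> in-memory columnar scan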

As the optimizer is the core part of the database, Oracle has spent several years working on adding this capability without breaking anything, and while making certain it's backwards compatible.

Subtle things like making sure "that if one app has made a modification to the row store and that modified data is also in the in-memory columnar store, we have to synchronize the two" are why it's taken Oracle several years to catch up to SAP HANA.

Overall, introducing the technology meant that "the disruption was fairly minimal," Shetler added. "It was like grafting an in-memory store onto an existing set of foundation and infrastructure." Pricing has not yet been disclosed.

In other IT news

Market research firm IDC has just revealed the numbers for one of the worst data storage quarters in recent history, as enterprise buyers all but went on a high-end storage strike. The storage business saw a major drop in the first quarter of this year.

In a nutshell, here's what IDC's statement had to say-- "The total (internal plus external) disk storage systems market generated $7.3 billion in revenue, representing a drop of 6.9 percent from the prior year's first quarter and a sequential decline of 17 percent compared to the seasonally stronger 4th quarter of 2013."
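
Working backwards from those two percentages and the $7.3 billion figure, the earlier quarters implied by IDC's statement come out to roughly $7.8 billion for Q1 2013 and $8.8 billion for Q4 2013:

    # Back out the earlier quarters implied by IDC's percentages.
    q1_2014 = 7.3                    # $bn, total disk storage systems revenue
    q1_2013 = q1_2014 / (1 - 0.069)  # 6.9% year-over-year decline
    q4_2013 = q1_2014 / (1 - 0.17)   # 17% sequential decline
    print(f"Q1 2013 ~ ${q1_2013:.1f}bn, Q4 2013 ~ ${q4_2013:.1f}bn")
    # Q1 2013 ~ $7.8bn, Q4 2013 ~ $8.8bn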

IDC storage research director Eric Sheppard said-- "The poor results of the first quarter were driven by several factors, the most important of which was a 25 percent decline in high-end storage spending."

He also singled out the mainstream adoption of storage optimization technologies, a general trend towards keeping systems longer, economic uncertainty, and the ability of customers to address capacity needs on a micro and short-term basis through public cloud offerings.

Storage provider EMC led with a 29.1 percent revenue market share, but its revenue fell 8.8 percent year-on-year, worse than the market as a whole (down 5.2 percent), because high-end array sales were so weak.

IBM did a lot worse, with a 22.5 percent drop in quarterly revenues year over year. As NetApp and HP declined less than the market, they arguably gained share.

For its part, Dell had an 8.8 percent revenue drop year over year. Although NetApp, in the number two position for revenue market share, saw its market share rise year on year, IDC shows it as having experienced a 2.8 percent revenue decline over the same period because its actual revenues fell.

Year over year, the storage market as a whole declined by 6.9 percent in revenue terms, with NetApp only declining 2.8 percent, meaning that it gained a bit of market share.
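
That share gain follows directly from the two decline rates: a vendor shrinking more slowly than the market ends up with a larger slice of it. A quick calculation makes the point (the 10 percent starting share is purely illustrative, not an IDC figure):

    # A vendor that declines less than the market gains share.
    market_decline = 0.069   # total market, year over year
    vendor_decline = 0.028   # NetApp's reported decline
    share_before = 0.10      # illustrative starting share, not an IDC figure

    share_after = share_before * (1 - vendor_decline) / (1 - market_decline)
    print(f"{share_before:.1%} -> {share_after:.2%}")  # 10.0% -> 10.44%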

We also charted IDC's recent quarterly numbers for total storage revenue market shares - not absolute revenues - to see the trends again.

So of course, the question everybody is now asking-- is this just a quarterly drop, or will the move to the cloud gather more momentum and the "Others" category (including hotshot upstarts like Fusion-io, Nimble, Pure, Tegile, Tintri and Violin) capture more disk array business with their new flash products?

So are we seeing the cloud and flash market put a temporary or permanent kink in the disk array business? We will know more in about another three months.

In other IT news

Google has developed a method to save as much as 20 percent of the electricity used to power its data centers by reaching deep into the core of its infrastructure and fine-tuning how much power its processors draw.

In a paper to be presented next week at the ISCA 2014 computer architecture conference entitled "Towards Energy Proportionality for Large-Scale Latency-Critical Workloads", researchers and engineers from Google and Stanford University discuss an experimental system named "Pegasus" that may save Google large amounts of cash by helping it cut its electricity consumption.

To be sure, Pegasus addresses one of the worst-kept secrets about cloud computing, which is that the computer chips in the gigantic data centers of Google, Amazon and Microsoft are standing idle for significant amounts of time.

Though all these companies have developed sophisticated technologies to try to increase the utilization of their chips, they all fall short in one way or another.

This simply means that a substantial amount of the electricity going into their data centers is completely wasted as it powers server chips that are either completely idle or in a state of very low utilization.

From an operator's perspective, it's a bit of a losing proposition, and from an environmentalist's perspective it's a real blunder.

Now Google and Stanford researchers have designed a new system that makes data center power consumption more efficient without compromising performance in any way.


Pegasus does this by dialing up and down the power consumption of the processors within Google's servers according to the desired request-latency requirements – dubbed iso-latency – of any given workload.

The power management technology that Pegasus uses is 'Running Average Power Limit' or RAPL, which allows you to incrementally tweak CPU power consumption in amounts of just 0.125W.
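
On Linux, those package power limits are exposed through the powercap sysfs tree in microwatts, so one 0.125 W step is a 125,000 µW adjustment. The sketch below assumes root access and the common intel-rapl:0 package domain; the exact path varies by kernel and platform, and the paper does not describe Google's own tooling.

    # Sketch: lower the CPU package power cap by one 0.125 W (125,000 uW) step
    # through the Linux powercap interface. Needs root; the path assumes the
    # common intel-rapl:0 package domain and may differ on other systems.
    STEP_UW = 125_000
    LIMIT = "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw"

    with open(LIMIT) as f:
        current_uw = int(f.read().strip())

    new_uw = current_uw - STEP_UW
    with open(LIMIT, "w") as f:
        f.write(str(new_uw))

    print(f"package power cap: {current_uw / 1e6:.3f} W -> {new_uw / 1e6:.3f} W")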

The system "sweeps the RAPL power limit at a given load to find the point of minimum cluster power that satisfies the service-level objective target".

Put another way, Pegasus makes sure that a processor is working just hard enough to meet the demands of the application running on it, but nothing else.
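
Expressed as a sketch, that sweep amounts to lowering the cap one step at a time until the latency target is about to be violated. The set_power_limit_w() and measure_p99_latency_ms() functions below are placeholders standing in for the powercap write above and for whatever latency telemetry the cluster exposes; they are not Google's actual interfaces.

    # Sketch of the iso-latency sweep: lower the power cap in 0.125 W steps
    # until the tail-latency objective is about to be violated, then back off.
    def set_power_limit_w(watts):
        """Placeholder for the powercap write shown earlier."""
        pass

    def measure_p99_latency_ms():
        """Placeholder for real cluster latency telemetry; returns a dummy value."""
        return 4.0

    def sweep_to_min_power(max_w, min_w, slo_ms, step_w=0.125):
        limit = max_w
        while limit - step_w >= min_w:
            set_power_limit_w(limit - step_w)
            if measure_p99_latency_ms() > slo_ms:
                set_power_limit_w(limit)  # one step back: keep the SLO intact
                break
            limit -= step_w
        return limit

    print(sweep_to_min_power(max_w=95.0, min_w=40.0, slo_ms=5.0))  # -> 40.0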

"The baseline can be compared to driving a car with sudden stops and starts. Having said that, iso-latency would then be driving the car at a slower speed to avoid accelerating hard and braking harder," the researchers write.

"The second method of operating a car is much more fuel ef?cient than the ?rst, which is akin to the results we have observed," the paper suggests.

To be sure, existing power management techniques for large data centers advocate turning off individual servers or even individual cores, but the researchers said this was inefficient, and it is.

"Even if spare memory storage is available, moving tens of gigabytes of state in and out of servers is expensive and very time consuming, making it dificult (if not impossible) to react to fast or small changes in load," they explain.

Shutting down individual computer cores, meanwhile, doesn't work due to the specific needs of Google's search technology. "A single user query to the front-end will lead to forwarding of the query to all leaf nodes. As a result, even a small request rate can create a non-trivial amount of work on each server," say the researchers.

"For instance, consider a cluster sized to handle a peak load of 10,000 queries per second (QPS) using 1000 servers," they explain. "Even at a 10 percent load, each of the 1000 nodes are seeing on average one query per millisecond. There is simply not enough idleness to invoke some of the more effective low power modes," they explain.

Source: Oracle.

