More on the OpenPOWER Consortium formed by IBM in 2013
April 23, 2014
The OpenPOWER Consortium was formed by IBM last year, at a time when Big Blue and other IT companies were seeing their hardware
divisions hit by a serious drop in spending from enterprise clients.
The drop in sales was also attributed to low-cost, customizable servers from manufacturers in Asia.
Also, Intel's x86 architecture continued to dominate the market in both typical servers and high-performance computing, putting
alternative architecture providers like Oracle, IBM and, to a lesser extent, HP in a very tough position.
So the question is, how should IBM keep its POWER chips alive and guarantee them a larger market in a changing world? Big Blue's answer
to this situation was OpenPOWER, which seeks to do for its chip architecture what British company ARM's licensing model did for its
eponymous chips, turning them into the fundamental technology behind the vast majority of the world's phones and tablets.
IBM is seeking with OpenPOWER to do to 'hyperscale' servers what ARM did to phones, and in doing so create itself a huge stream of
low-margin revenue that it can rely upon in years to come.
And although no one has said it so far, a helpful side effect is that this may cut into Intel's large business in huge data centers.
IBM's hope is that by licensing the innards of its POWER chips to companies like Google, Canonical, Nvidia, Tyan and Suzhou PowerCore Technology,
it may be able to create new markets for the chip beyond Big Blue's traditional mainframes and high-end enterprise systems.
The OpenPOWER Consortium is, in many ways, where the guerrilla development approach of open source meets the expensive, complex
world of chip hardware.
IBM and its partners are betting that the architecture is good enough to meet their expectations. Given the enthusiastic mood
that existed throughout the press conference, it was only natural that Intel would point out some of the possible drawbacks of the scheme.
"The OpenPOWER Foundation may hope to someday create an open solution, but it also faces a complex multi-year effort to establish
an ecosystem around the design, manufacturing and software," an Intel spokesperson said.
"Most data centers today run on Intel and we are not slowing down. Businesses recognize the value and there is a
large and growing x86 ecosystem (established over many years) that isn't going away. Creating an ecosystem is not an easy feat and
could take several years and a significant investment of time and money in porting architectures," he added.
By comparison, Intel competitor ARM was much more upbeat about the whole matter. "Across the server market, even within non-volume
servers, users are ready to move beyond the 'one size fits all' approach for servers and OpenPOWER is further validation of this as
well as the ARM business model," an ARM spokesperson said.
"Server customers are demanding choice and differentiation which is why ARM and its partners are already well underway with our work
to move the volume server market beyond the limitations associated with a proprietary architecture."
With OpenPOWER's 26 partners ranging from equipment makers to rich potential customers like Google, the scheme has a chance of
working. Maybe ARM has a new potential partner in its plan to pull Intel's chips out of the biggest data centers? We shall see.
In other IT news
Scientists and university researchers have sequenced DNA genomes for many years, but applying what we already know about genetics
to everyday medicine is a difficult and rather daunting task.
Crafting treatments from genes is so complex that IBM recently entered a partnership to get its Watson supercomputer
learning to help the medical profession tailor personalised treatments for cancer.
Part of the issue that researchers want to solve is gene expression: among all the complexities of how genes interact, which
interactions are expressed as a physical trait, whether that trait is blue eyes or why one individual dies of a cancer that is
arrested in someone else?
What's needed is a method to accurately predict gene expression, and one angle of the research is based on RNA sequencing.
The problem is that analysing RNA sequencing data is a very slow process, and that's where the research out of Carnegie Mellon
University and the University of Maryland comes in.
Their so-called Sailfish algorithm dramatically accelerates estimates of the likely outputs of RNA sequencing. To explain why
this is so important, the researchers' release says: “Though an organism's genetic makeup is static, the activity of individual
genes varies greatly over time, making gene expression an important factor in understanding how various organisms work and what
occurs during disease processes. Gene activity can't be measured directly, but can be inferred by monitoring RNA, the molecules
that carry information from the genes for producing proteins and other cellular activities.”
But analysing the RNA-seq reads (short sequences of RNA) traditionally results in huge datasets that have to be mapped back to
their original genetic processes.
The Sailfish algorithm completely skips this painstaking mapping step, thereby increasing the speed of the process by a wide margin.
Instead, the university researchers found they could allocate parts of the reads to different types of RNA molecules, much as
if each read acted as several votes for one molecule or another.
Think of it as upvoting posts in a forum: individual votes bestow a kind of consensus on which reads or posts carry the greatest weight.
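To make the voting analogy concrete, here is a deliberately simplified sketch of the idea in Python. It is not the actual Sailfish algorithm; the function names, the tiny k-mer size and the toy sequences are all invented for illustration. Each k-mer in a read casts a vote for every transcript that contains it, with ambiguous votes split evenly, and the tallies are normalised into relative abundance estimates.

```python
# Hypothetical, highly simplified sketch of k-mer "voting" for transcript
# abundance, in the spirit of (but not identical to) Sailfish.
from collections import Counter

K = 4  # toy k-mer length; real tools use much larger k


def kmers(seq, k=K):
    """Return all overlapping k-mers of a sequence."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]


def vote_abundances(transcripts, reads, k=K):
    """Each read k-mer votes for every transcript containing it.

    Ambiguous k-mers (found in several transcripts) split their vote
    evenly. Returns relative abundances normalised to sum to 1.
    """
    # Index: k-mer -> set of transcript ids that contain it
    index = {}
    for tid, seq in transcripts.items():
        for km in kmers(seq, k):
            index.setdefault(km, set()).add(tid)

    votes = Counter()
    for read in reads:
        for km in kmers(read, k):
            hits = index.get(km, ())
            for tid in hits:
                votes[tid] += 1.0 / len(hits)  # split ambiguous votes

    total = sum(votes.values()) or 1.0
    return {tid: v / total for tid, v in votes.items()}


# Toy example: two transcripts, three short reads
transcripts = {"t1": "ACGTACGTAC", "t2": "TTTTACGTTT"}
reads = ["ACGTA", "CGTAC", "TTTTA"]
print(vote_abundances(transcripts, reads))
```

The key point the sketch illustrates is that no read is ever aligned back to a genome; counting shared k-mers replaces the slow mapping step, which is where the claimed speed-up comes from.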
Getting what might be a 15-hour analysis down to just a few minutes is important, the researchers believe. There are already huge
repositories of RNA-seq data, but turning data into insight is held back by computational effort.
Fifteen hours for each analysis really starts to add up, particularly if you want to look at 100 experiments, explains Carnegie Mellon
associate professor Carl Kingsford.
“With Sailfish, we can give researchers everything they got from previous methods, but faster,” he said.
In other IT news
VMware reported earnings and revenue yesterday that beat analysts' expectations. The Palo Alto-based company made $1.368 billion
in sales in its first quarter, up 14 percent on the previous year's quarter.
VMware reported an operating income of $241 million, up 51 percent on the $160 million it reported in Q1 of 2013. Non-GAAP earnings
per share was $0.80, versus expectations of $0.79.
VMware made $561 million from licenses in the quarter versus $799 million in maintenance, a solid start to a year that VMware
executives believe will bring significant growth for the company.
"Our strong financial results reflect VMware's unique position in helping customers transform their IT infrastructure," said VMware's
CEO Pat Gelsinger.
"As the industry shifts from client server computing to the mobile-cloud era, customers are choosing our solutions because we
have the most complete vision and offering for navigating this evolving world," added Gelsinger.
During the first quarter, VMware spent $77 million on additions to property and equipment, compared with $78 million in the same
quarter last year.
Those purchases support the infrastructure that goes into VMware's strategically important vCloud Hybrid Service, which launched
in August 2013 and is meant to compete with rivals Amazon, Microsoft and Google.
But it's important to note that VMware's competitors have been spending a lot more on capital expenditures in the last year, and
that's something that VMware needs to look into if it wants to continue growing at the same pace.
Source: IBM Corp.