SanDisk acquires Fusion-io for $1.1 billion in cash
June 17, 2014
Flash storage solutions provider SanDisk said this morning that it is acquiring Fusion-io
for $1.1 billion in an all-cash transaction.
Fusion-io’s PCIe flash card and ioControl shared flash array businesses will now be integrated
with SanDisk’s other enterprise flash acquisitions and business.
The acquisition gives SanDisk a high-profile PCIe flash card hardware and software
product line almost overnight.
Market analyst firm Gartner recently said that Fusion-io was fifth overall in the enterprise SSD market
(SSDs + PCIe flash), behind fourth-placed SanDisk, which has virtually no PCIe presence, so the acquisition
can be viewed as strategic.
SanDisk has a $6 billion annual revenue run rate and made a billion dollar profit in 2013. It certainly
is in a healthy state.
Over the past couple of years SanDisk has also acquired:
SMART Storage for its SSDs and flash DIMMs for $307 million in July 2013
FlashSoft for PCIe flash caching software in February 2012
Pliant for SSD controllers in May 2011
Overall, Fusion-io is SanDisk's largest flash acquisition so far, and the company will benefit
from the addition of Fusion-io’s leading PCIe solutions to its vertically integrated business.
SanDisk CEO Sanjay Mehrotra said: “Fusion-io will accelerate our efforts to enable
the flash-transformed data center, helping companies better manage increasingly heavy data
workloads at a lower total cost of ownership.”
SanDisk is hell-bent on penetrating the enterprise flash market, pushing the concept of an all-flash data center.
Fusion-io has just announced its third-generation “Atomic” ioMemory PCIe card products. SanDisk
will inherit all of Fusion-io's server OEM and channel partner relationships and will be able to
guarantee a steady supply of flash chips from its own flash foundry partnership with Toshiba.
The acquisition is expected to close in the third quarter of SanDisk’s fiscal 2014, unless
somebody else jumps in with a bid and prolongs the affair.
The flash industry is consolidating at a dizzying rate, with Seagate acquiring LSI’s PCIe card flash
business from Avago for $450 million last month.
That deal followed multiple acquisitions by Western Digital, and Toshiba's purchase of Violin
Memory's Velocity PCIe card business.
SanDisk's $1.1 billion price is more than double that figure, so there is a clear premium
for Fusion-io’s market success over LSI's business; even so, at roughly two times Fusion-io's
annual revenue, it sounds like a reasonable bargain.
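For a rough sense of the multiples involved, here is a quick back-of-the-envelope calculation in Python; the revenue figure is only what the two-times multiple implies, not a reported number:

    # Back-of-the-envelope deal math. The revenue figure below is only what
    # the "two times annual revenue" multiple implies; it is not a reported number.
    fusion_io_price = 1.1e9                 # SanDisk's all-cash offer
    lsi_flash_price = 450e6                 # Seagate's price for LSI's flash business
    implied_revenue = fusion_io_price / 2   # "two times its annual revenue"

    print(f"Premium over the LSI deal: {fusion_io_price / lsi_flash_price:.2f}x")
    print(f"Implied Fusion-io annual revenue: ${implied_revenue / 1e6:.0f}M")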
Fusion-io CEO Shane Robison was pleased, saying: “This transaction represents a
compelling opportunity for Fusion-io’s employees, customers and shareholders. Fusion-io’s innovative
hardware and software solutions will be improved by SanDisk’s worldwide scale and vertical integration,
enabling a combined company that can offer an even more compelling value proposition for customers and
partners.”
In other IT news
Microsoft said earlier this morning that it has found a new method to greatly increase the
computing capabilities of its data centers, even as Moore's Law wheezes towards its inevitable
demise sooner rather than later.
In a paper to be presented this week at the International Symposium on Computer Architecture (ISCA),
titled 'A Reconfigurable Fabric for Accelerating Large-Scale Datacenter Services', a group of top
Microsoft Research engineers explains how the company has dealt with the slowdown in single-core
clock-rate improvements over the past eleven years.
To get around that issue, Microsoft has designed a system it calls Catapult, which automatically
offloads some of the processing that powers its Bing search engine onto clusters of highly
efficient, low-power FPGA chips attached to typical Intel Xeon server processors.
Think of FPGAs (field-programmable gate arrays) as chips whose circuits can be customized and
tweaked as required, allowing specific tasks to be offloaded from the Xeon processors and accelerated
in FPGA hardware.
This approach may save Microsoft from a rarely acknowledged reality that lurks in the technology
industry: processors are not getting much faster these days.
For the past fifty years or so, almost every aspect of our global economy has been affected
by Moore's Law, which states that the number of transistors on a chip of the same size will
double roughly every 18 months, resulting in faster performance and much improved power efficiency.
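As a minimal illustration of that compounding, here is a quick Python sketch; the Intel 4004 baseline of roughly 2,300 transistors in 1971 is a well-known figure rather than something from this article:

    # Strict 18-month doubling from the Intel 4004 (~2,300 transistors, 1971).
    # The result far exceeds what 2014-era chips actually shipped with (a few
    # billion transistors), which is one way to see that the cadence has slipped.
    transistors, year = 2_300, 1971.0
    while year + 1.5 <= 2014:
        transistors *= 2
        year += 1.5
    print(f"{year:.0f}: ~{transistors:.2e} transistors under strict 18-month doubling")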
But there's one small problem with that: Moore's Law is not a law. It was an assertion made
by Intel co-founder Gordon Moore in a 1965 article, one the semiconductor industry got rather
carried away with.
In the past ten years, the salubrious effects of Moore's Law have started to wane, because
although companies are packing more and more transistors on their chips, the performance gains that
those transistors bring with them are not as great as they were during the law's early days.
And as you probably know by now, Intel has staked its entire business model on the successful
fulfillment of Moore's Law, and proudly announces each new boost in transistor counts.
Those new transistors can help to increase a compute core's all-important instructions-per-cycle
(IPC) metric: improved branch prediction, larger caches, more efficient scheduling, beefier
buffers, and so on.
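In rough terms, a core's throughput is the product of IPC and clock rate, which is why stalled clocks leave IPC as the main remaining lever. A trivial Python sketch with illustrative, made-up numbers:

    # Single-thread performance ~ IPC x clock rate. With clocks stalled,
    # architectural tweaks to IPC are the remaining lever, and they tend
    # to move the needle incrementally. These numbers are illustrative only.
    def perf_gips(ipc: float, clock_ghz: float) -> float:
        """Billions of instructions per second for one core."""
        return ipc * clock_ghz

    print(f"baseline core: {perf_gips(ipc=2.0, clock_ghz=3.5):.1f} GIPS")
    print(f"10% IPC bump:  {perf_gips(ipc=2.2, clock_ghz=3.5):.1f} GIPS")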
However, the simple fact is that although chips have gone multi-core and are getting better
at multi-tasking, those individual cores are not getting much faster absent some significant new
discovery. And that's where the core of the problem resides.
As AMD CTO Joe Macri recently told us, "There's not a whole lot of revolution left in CPUs per se, you know."
He did, however, note that "there's a lot of evolution left." Microsoft's new Catapult system is a
bit of both.
Under new chief executive Satya Nadella, Microsoft is investing billions of dollars in massive
data centers in its attempt to become a cloud-first company.
Understandably, part of that effort is to find a way to jump-start consistent data-center compute-performance gains.
The solution that Microsoft Research has come up with is to pair field-programmable gate
arrays with typical x86 processors, then let some data-center services such as the Bing search
engine offload certain well-understood operations to the arrays.
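To make that division of labor concrete, here is a minimal Python sketch of the offload pattern; the function names and the toy scoring kernel are assumptions for illustration, not Catapult's actual interfaces:

    # A minimal sketch of the offload pattern: the CPU handles general control
    # flow while a well-understood, compute-heavy kernel (document scoring,
    # here) is dispatched to an accelerator when one is present. fpga_score()
    # and its interface are hypothetical stand-ins for whatever driver/RPC
    # layer the real system uses.
    def cpu_score(query: str, doc: str) -> float:
        # Baseline software path on the Xeon: naive term counting.
        return float(sum(doc.count(term) for term in query.split()))

    def fpga_score(query: str, doc: str) -> float:
        # Placeholder for a call into the FPGA fabric; it simply delegates
        # to the software path so the sketch stays runnable.
        return cpu_score(query, doc)

    def rank(query: str, docs: list[str], fpga_available: bool) -> list[str]:
        score = fpga_score if fpga_available else cpu_score
        return sorted(docs, key=lambda d: score(query, d), reverse=True)

    print(rank("flash storage", ["flash flash storage", "spinning disk"], fpga_available=True))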
To say that the performance improvements from this approach have been noticeable would be an understatement.
Microsoft tells us that a test deployment on 1,632 servers was able to increase query
throughput by 95 percent, while only increasing power consumption by about 10 percent.
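Taken at face value, those two numbers translate into a large gain in performance per watt:

    # Performance per watt implied by the reported figures:
    # 95% more query throughput for about 10% more power.
    throughput_gain = 1.95   # 95% higher throughput
    power_gain = 1.10        # ~10% higher power draw
    print(f"Throughput per watt: {throughput_gain / power_gain:.2f}x")  # ~1.77x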
Though FPGA technology is well understood and used widely in the embedded technology industry,
it's not often that we hear it being paired with standard off-the-shelf CPUs for accelerating
web-facing software – at least until now, that is.
"We're moving into an era of programmable hardware supporting programmable software," Microsoft
Research's Doug Burger said. "We're just starting down that road now."
If Microsoft has indeed determined how to almost double the performance of its computers
while only paying a tenth more in electricity for large-scale data center tasks – and we see
no reason to doubt them – that's not only a huge saving, but also a hedge against the slowdown
in run-of-the-mill CPUs.
"Based on the actual results, Bing will roll out FPGA-enhanced servers in one data center
to process customer searches starting in early 2015," said Derek Chiouh, the principal architect
"We were looking to make a big jump forward in data center capabilities. It's an important
area," Microsoft Research's Doug Burger said.
Of course, Microsoft isn't doing this on a hunch, either. Burger co-wrote a 2011 paper
called 'Dark Silicon and the End of Multicore Scaling', which predicted that "left to the multicore
path, we may hit a 'transistor utility economics' wall in as few as three to five years, at which
point Moore's Law may end, creating massive disruptions in our industry."
In other IT news
Microsoft says it wants to begin developing quantum computers in a partnership with the
Massachusetts Institute of Technology.
Microsoft's head of research, Peter Lee, told the MIT Technology Review digital summit
that the company will be supporting research as well as doing its own work at its Station Q
research lab on the University of California's Santa Barbara campus.
In fact, what the software behemoth is after is something that can be exploited as a “fab-friendly” qubit.
To be sure, creating qubits has become almost routine in quantum research labs. In 2012, the University
of New South Wales created a single-atom qubit on silicon.
But creating qubits and getting them to behave properly, as a superposition of states rather than
a definite 1 or 0, is difficult. Doing so reliably is even more difficult, and creating qubits in a
manner that could be handed to some kind of microelectronics foundry is a long way off, no matter
how hard you try.
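For what "a superposition of states rather than a definite 1 or 0" means mathematically, here is a toy numpy sketch; it models only the arithmetic of a qubit state, not the physics of keeping one coherent:

    import numpy as np

    # A qubit state is a normalized vector a|0> + b|1>; measuring it yields
    # 0 or 1 with probabilities |a|^2 and |b|^2. This toy models the math
    # only and says nothing about building a physical, coherent qubit.
    a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)           # an equal superposition
    assert np.isclose(abs(a) ** 2 + abs(b) ** 2, 1)  # states must be normalized

    rng = np.random.default_rng(0)
    samples = rng.choice([0, 1], size=10_000, p=[abs(a) ** 2, abs(b) ** 2])
    print(f"Measured |1> in {samples.mean():.1%} of trials")  # close to 50%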
Lee knows this, and he told the MIT Technology Review that as far as Microsoft is aware, the
current approaches to creating qubits don't scale very well.
Microsoft's direction is to work on a “topological qubit”. The description “topological” refers to
how entanglement is created and maintained and the point of the work is that Redmond told MIT
that it sees the approach as more robust than other research into qubits.
Only a cynic would note that whoever successfully builds a reliable, mass-producible
qubit will hold intellectual property of almost incalculable value, making it unlikely that
Microsoft would follow paths where others have taken the lead.
Source: SanDisk Inc.