NetApp completes the overhaul of its unified storage arrays
June 18, 2014
NetApp said earlier this morning that it has completed the overhaul of its unified
storage FAS arrays, with the FAS-2500s at the low end and a bigger FAS-8080 EX at the high end
of the range.
The basic details were as expected, except for the FAS-8080, which has 36 TB of virtual storage tier flash
and not the 18 TB we initially expected.
The new solution can scale up to 69 PB through clustering and handle 600+ IO connections.
Just a quick point about the scale-out-- the FAS-8080 EX can scale to 24 nodes (12 high-availability
pairs) with file access but only 8 nodes (4 HA pairs) with SAN access.
The separate V-Series goes away as a hardware implementation, with its software now incorporated
into Data ONTAP.
At the low end, the FAS-2220 gets replaced by the FAS-2520, the FAS-2240 by the FAS-2552 and the FAS-3220
by the FAS-2554, completing the removal of the mid-range FAS-3200s, the FAS-3250 having been replaced
by the FAS-8020 a few weeks ago.
NetApp says the FAS-2500s, with 4 TB of flash, can accelerate workloads by up to 46 percent
and increase usable capacity by 48 percent.
The entire FAS hardware range has been refreshed in just five months and is now characterized
as hybrid flash-plus-disk or, with the FAS-8000s, all-flash, if users configure them that way.
The FAS-8080 EX can deliver around 4 million IOPS and offer more than 4.6 PB of capacity if configured
with only flash drives.
As a hybrid array, its flash cache amounts to almost 500 TB. There is a NetApp All-Flash FAS Reference
Architecture for VDI.
The company says all-flash FAS arrays are suited to workloads needing both high-performance
and low data access latency.
Its OnCommand Insight management software enables storage cost management, rationalization, and
optimization of storage services, and there is a tailored offering, NetApp Services for OnCommand
Insight, for enterprise customers deploying high-end FAS-8000 systems across heterogeneous data centers.
To encourage reluctant customers, NetApp is offering a 90-day payment holiday to those
transitioning to the new FAS arrays.
The company is now embracing the cloud storage world with Cloud ONTAP and also beginning
to strengthen its object offering.
As soon as an attractive new data storage silo comes along -- the cloud, third-party array virtualization,
object storage -- NetApp either adds functionality to FAS or makes a targeted acquisition.
To be sure, FlashRay is a departure from this approach, as it is a complete in-house development
of an all-flash array and not, as with the Dell, HDS and HP all-flash arrays, a facility inside a
current array that inherits all of its data management features.
In that respect, FlashRay will be unique, as NetApp's other all-flash array competitors either
bought their technology in, as Cisco and EMC did, or are startups (Kaminario, Nimbus Data, Pure Storage,
Skyera, SolidFire and Violin Memory).
In other IT news
Flash storage solutions provider SanDisk said this morning that it is acquiring Fusion-io
for $1.1 billion in an all-cash transaction.
Fusion-io’s PCIe flash card and ioControl shared flash array businesses will now be integrated
with SanDisk’s other enterprise flash acquisitions and business.
This acquisition gives SanDisk a high-profile PCIe flash card hardware and software
product line almost overnight.
Market analyst firm Gartner recently said that Fusion-io was fifth overall in the enterprise SSD market
(SSDs + PCIe flash), behind fourth-placed SanDisk, which has virtually no PCIe presence, so the acquisition
can be viewed as strategic.
SanDisk has a $6 billion annual revenue run rate and made a billion dollar profit in 2013. It certainly
is in a healthy state.
Over the past couple of years SanDisk has also acquired:
SMART Storage for its SSDs and flash DIMMs for $307 million in July 2013
FlashSoft for PCIe flash caching software in February 2012
Pliant for SSD controllers in May 2011
Overall, Fusion-io is its largest flash business acquisition so far. The company will benefit from
the addition of Fusion-io’s leading PCIe solutions to SanDisk’s vertically integrated business.
SanDisk CEO Sanjay Mehrotra was quoted as saying-- “Fusion-io will accelerate our efforts to enable
the flash-transformed data center, helping companies better manage increasingly heavy data
workloads at a lower total cost of ownership.”
SanDisk is hell-bent on penetrating the enterprise flash market, pushing the concept of an all-flash data center.
Fusion-io has just announced its third generation “Atomic” ioMemory PCIe card products. SanDisk
will inherit all of its server OEM and channel partner relationships and be able to guarantee a steady
supply of flash chips from its own flash foundry partnership with Toshiba.
The acquisition is expected to close in the third quarter of SanDisk’s fiscal 2014, unless
somebody else jumps in with a bid and prolongs the affair.
The flash industry is consolidating at a dizzying rate, with Seagate acquiring LSI’s PCIe card flash
business from Avago for $450 million last month.
It followed on from multiple acquisitions by Western Digital, and Toshiba buying Violin's Velocity
PCIe card business.
This $1.1 billion SanDisk acquisition price is slightly more than double that, so
there is a premium for Fusion-io’s market success over LSI’s. But still, at roughly two times
Fusion-io’s annual revenue, it sounds like a bargain.
Fusion-io CEO Shane Robison was pleased, saying-- “This transaction represents a
compelling opportunity for Fusion-io’s employees, customers and shareholders. Fusion-io’s innovative
hardware and software solutions will be improved by SanDisk’s worldwide scale and vertical integration,
enabling a combined company that can offer an even more compelling value proposition for customers and partners.”
In other IT news
Microsoft said earlier this morning that it has found a new method to greatly increase the
computing capabilities of its data centers, despite the fact that Moore's Law is wheezing towards
its inevitable demise sooner rather than later.
In a paper to be presented this week at the International Symposium on Computer Architecture (ISCA) titled
'A Reconfigurable Fabric for Accelerating Large-Scale Datacenter Services', a group of top Microsoft
Research engineers explains how the company has dealt with the slowdown in single-core clock-rate
improvements that has occurred over the past eleven years.
To get around that issue, Microsoft has designed a system it calls Catapult, which automatically
offloads some of the computation that powers its Bing search engine onto clusters of highly
efficient, low-power FPGA chips attached to typical Intel Xeon server processors.
Think of FPGAs (field-programmable gate arrays) as chips whose circuits can be customized and
tweaked as required, allowing specific tasks to be offloaded from the Xeon processors and accelerated
in FPGA hardware.
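How much such offloading can buy depends on the fraction of the work that actually moves to the FPGA, a relationship captured by Amdahl's law. Here is a minimal sketch of that arithmetic, with purely illustrative numbers that are not taken from Microsoft's paper:

```python
# Amdahl's-law sketch: overall speedup when a fraction of the work is
# offloaded to an accelerator such as an FPGA (illustrative numbers only).

def offload_speedup(offloaded_fraction, accel_speedup):
    """Overall speedup when `offloaded_fraction` of the work runs
    `accel_speedup` times faster on the accelerator."""
    remaining = 1.0 - offloaded_fraction
    return 1.0 / (remaining + offloaded_fraction / accel_speedup)

# If 60% of a request's work is offloaded and runs 10x faster there,
# the whole request only speeds up by about 2.17x -- the CPU-bound
# 40% dominates, which is why the offloaded operations must be both
# well-understood and a large share of the workload.
print(round(offload_speedup(0.6, 10.0), 2))   # → 2.17
```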
This approach may save Microsoft from a rarely acknowledged problem that lurks in the technology
industry-- that processors are not getting much faster these days. And that is now a reality.
For the past fifty years or so, almost every aspect of our global economy has been affected
by Moore's Law, which states that the number of transistors on a chip of the same size will
double roughly every 18 months, resulting in faster performance and much improved power efficiency.
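The compounding implied by that doubling cadence is easy to sketch (the figures below are illustrative, not tied to any particular chip):

```python
# Back-of-the-envelope compounding implied by Moore's Law:
# transistor counts doubling roughly every 18 months.

def moores_law_factor(years, doubling_period_years=1.5):
    """Growth factor in transistor count after `years` years."""
    return 2 ** (years / doubling_period_years)

# Over 15 years, ten doublings compound to roughly a 1024x increase.
print(moores_law_factor(15))   # → 1024.0
```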
But there's one small problem with that-- Moore's Law is not a law. Instead, it was an assertion
by Intel co-founder Gordon Moore in a 1965 article that the semiconductor industry got rather
carried away with.
In the past ten years, the salubrious effects of Moore's Law have started to wane, because
although companies are packing more and more transistors on their chips, the performance gains that
those transistors bring with them are not as great as they were during the law's early days.
And as you probably know by now, Intel has based its entire business model on the successful
fulfillment of Moore's Law, and proudly announces each new boost in transistor counts.
Those new transistors can help to increase a compute core's all-important instructions per
cycle (IPC) metric-- i.e. improved branch prediction, larger caches, more-efficient scheduling,
beefier buffers, etc.
However, the simple fact is that although chips have gone multi-core and are getting better
at multi-tasking, those individual cores are not getting much faster due to any significant new
discovery. And that is where the core of the problem resides.
As AMD CTO Joe Macri recently told us, "There's not a whole lot of revolution left in CPUs per se, you know."
He did, however, note that "there's a lot of evolution left." Microsoft's new Catapult system is a
bit of both.
Under new chief executive Satya Nadella, Microsoft is investing billions of dollars in massive
data centers in its attempt to become a cloud-first company.
Understandably, part of that effort is to find a way to jump-start consistent data-center compute-performance gains.
The solution that Microsoft Research has come up with is to pair field-programmable gate
arrays with typical x86 processors, then let some data-center services such as the Bing search
engine offload certain well-understood operations to the arrays.