Avoid RAID rebuild issue by fixing damaged files instead of rebuilding disks
June 12, 2014
Earlier this morning, storage solution provider Panasas unveiled improved hardware and software that addresses
the RAID rebuild problem by fixing damaged files instead of rebuilding complete disks.
The company is an HPC storage system supplier that is moving into technical computing and big
data use cases in enterprises.
It has more than 450 customers in over 50 countries around the world. The hardware is the
Panasas ActiveStor 16, positioned as a hybrid, scale-out NAS appliance.
It runs v6.0 of PanFS which has a new RAID 6+ protection scheme. This is the sixth generation
of PAS hardware. The PAS 11 and 12 constituted the fourth generation, both disk-based with the
PAS 14 being the fifth generation.
The PAS 14 introduced the use of SSDs to accelerate small random-access I/Os and small random accesses into
large files, and to hold file system metadata.
The PAS 11 and 12 were announced in November 2010 and the PAS 14 in September 2012. A little
under two years later, we have the PAS 16, which has faster controllers (director blades), using
2.53 GHz quad-core Xeon CPUs.
These are 19 percent faster than the PAS 14 director blade processors and have four times more cache,
at 48 GB of memory.
Overall, the PAS 16 storage blade moves up from 4 TB disk drives to 2 x 6 TB HGST He
drives, and doubles SSD capacity to 240 GB. There is now up to 122.4 TB of storage capacity in
each 4U enclosure, and ten of them in a rack provide 1.2 PB.
As with the previous PAS products, data throughput reaches 150 GB/sec. The big news though
is the RAID 6+ triple parity protection scheme. President and CEO Faye Pairman said PAS 16 had
undergone a hardening process to make it optimal for enterprise big data applications and PanFS
6.0 is the most important software release for Panasas in the past five to six years.
As big data gets larger, and disk drives do too, the time to rebuild a failed drive is extended,
so much so that, in large arrays, a second disk drive can fail before the first is rebuilt, causing
data loss. To be sure, RAID 6 protects against such a situation. But disks have reached 6 TB and will
soon be at 8 and 10 TB, and rebuild times will continue to lengthen, which is one issue in itself,
while exposing us to the risk of a third disk failure while two disks are being rebuilt.
What Panasas’ RAID 6+ does is end the default rebuilding of an entire failed
drive, rebuilding only the damaged data components instead.
It can do this because at its core, it's an object-based parallel filesystem. Also, the flash
in the PAS 16 has been tuned for RAID 6+ operations. Individual files can be restored in a
disaster recovery exercise instead of the whole system.
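The per-file approach can be sketched as follows. This is an illustrative toy with invented structures and single-parity XOR in place of Panasas' triple-parity erasure code, not Panasas' actual implementation:

```python
def reconstruct_chunk(f, failed_drive):
    # Toy stand-in for the erasure-code math: with single XOR parity, the
    # lost chunk is the XOR of all surviving chunks (data plus parity).
    out = 0
    for drive, chunk in f["components"].items():
        if drive != failed_drive and chunk is not None:
            out ^= chunk
    return out

def per_file_rebuild(files, failed_drive):
    """Rebuild only the file components that lived on the failed drive."""
    rebuilt = []
    for f in files:
        if failed_drive in f["components"]:   # this file lost a chunk
            f["components"][failed_drive] = reconstruct_chunk(f, failed_drive)
            rebuilt.append(f["name"])
    return rebuilt                            # untouched files cost nothing
```

A full-disk rebuild would reconstruct every sector of the failed drive; here, files with no component on that drive are skipped entirely, which is why the work shrinks with the amount of actual damage.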
Product marketer Geoffrey Noer said that overall reliability increases as you scale the
system, instead of decreasing it and that “Erasure codes protect every file individually.”
With RAID 6+, a triple simultaneous disk failure means that the percentage of files to restore
approaches zero at scale.
He added that RAID 6+ provides a 150-fold increase in overall reliability compared to RAID 6
and, with the PAS 16 and PanFS v6, “RAID rebuild performance scales linearly.” As the scale of
the system grows, the chance of any 3-drive failure affecting any given file grows smaller.
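The scaling intuition can be checked with a little combinatorics. Assuming, hypothetically, that each file is striped across a fixed number of drives while the system grows, the fraction of files touched by any particular triple failure shrinks:

```python
from math import comb

def fraction_of_files_hit(n_drives, stripe_width):
    """Chance that a given file, striped across stripe_width drives,
    loses three chunks when three particular drives fail at once."""
    return comb(stripe_width, 3) / comb(n_drives, 3)

# Small system: 20 drives, 10-wide stripes -> about 10.5% of files hit.
small = fraction_of_files_hit(20, 10)
# Ten times the drives, same stripe width -> under 0.01% of files hit.
large = fraction_of_files_hit(200, 10)
```

The drive counts and stripe width here are made-up numbers chosen only to show the trend, not Panasas configurations.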
And RAID 6+ decreases the chances of having to do a RAID rebuild at all. Small files have three
copies stored, all in flash. Filesystem metadata is quadruple-mirrored. Larger files are striped
across storage blades.
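That layered protection policy might be sketched like this; the 64 KiB small-file threshold and the field names are invented for illustration and are not Panasas figures:

```python
# Assumed threshold for "small" files -- illustrative only.
SMALL_FILE_LIMIT = 64 * 1024

def protection_for(kind, size_bytes):
    """Pick a protection scheme the way the article describes PanFS 6.0:
    metadata quadruple-mirrored, small files triple-mirrored in flash,
    large files striped across blades with triple parity."""
    if kind == "metadata":
        return {"scheme": "mirror", "copies": 4, "tier": "flash"}
    if size_bytes <= SMALL_FILE_LIMIT:
        return {"scheme": "mirror", "copies": 3, "tier": "flash"}
    return {"scheme": "stripe-triple-parity", "copies": 1, "tier": "disk"}
```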
However, the system isn't given a five or six nines availability rating. Instead, Panasas
talks about an always-on model.
While a per-file rebuild is taking place, the affected portion of a file may not be available, but the remainder
of the file is.
Making the point that all-flash storage using TLC SSDs would greatly increase data access
speeds, and that certain use cases needing fast access to large amounts of data, such as financial
trading, could bear the cost, Pairman said that now we can understand why Samsung Ventures invested
in the company.
The presence of Samsung Ventures as an investor was news to us. Dong-Su Kim, vice president
of Samsung Venture Investment Company, sits on Panasas’ board along with Faye Pairman, co-founder
Garth Gibson, and two partners from Mohr Davidow Ventures.
A March 2014 podcast slide lists Intel Capital as well as Samsung Ventures as investors. Pairman
said it's becoming an all-flash world and that Panasas sees tiers of flash coming to the PAS hardware.
That needs an intelligent file system to operate it, and PanFS is exactly that: a reliable, hardened,
performant file OS.
So we might begin to imagine all-flash storage blades in future PAS systems: a PAS 18, for example,
which, we speculate, could arrive in 2016, assuming a 20 to 24 month interval between Panasas hardware
generations.
Panasas ActiveStor 16 and PanFS 6.0 are available for order now and expected to ship in September.
PanFS 6.0 will be available for PAS 11, 12, 14 and 16 systems.
The list price for a 122 TB PAS 16 is $160,000, with an 82 TB model going for $130,000. We'll keep
you posted on this and other stories.
In other IT news
As is often said, a company's immediate future is its competitor's past.
That said, Oracle isn't too worried right now, because the changes it has made to let its database store data in speedy
memory afford far more backwards compatibility than SAP's HANA product does.
With the launch of the new 'Oracle Database In-Memory' technology yesterday, Oracle has
closed a gap with its competitor SAP and thrown further doubt on some of the value claimed by
newer, younger startups.
The new option lets system admins of Oracle's 12c database easily shift data into the
DRAM of servers, a much more responsive storage medium than traditional spinning disks.
With this upgrade, Oracle has intensified competition with IBM, Microsoft and, most
significantly, its biggest competitor: SAP.
The German software company, which introduced an in-memory processing system named HANA back in
late 2010, has been fairly active as of late, and we can only imagine that it will try to fend off
Oracle as best it can.
The difference between HANA and Oracle's new software is that Oracle's in-memory option is compatible
with all existing Oracle apps built on 12c, letting admins get the advantages of memory without having
to drastically rewrite the apps they've worked so hard on.
"We've implemented this so that you have to do nothing to your applications to test it out," said
Oracle vice president of product management Tim Shetler.
"Overall, that was purely so that people could adopt it easily, and not have to open up the
app and be forced to make changes. Everything they could have done previously is still accessible
when running with the in-memory option," added Shetler.
Additionally, Oracle has let the system tier data across different storage media within
the same cluster. "Even though this is memory-optimized database technology, there's no requirement
that the entire database or dataset be in memory to use it," said Shetler.
"We also have the ability to spread a large dataset across different tiers of storage. If you
run a query against that data it will all transparently come back," he added.
So, what's different? "Well a lot of the transparency comes through enhancements we've had to make
to the optimizer in the database itself," Shetler said. "The optimizer is always the one that sees a
SQL statement and figures out what is the fastest way to analyze this request. The optimizer
will then decide based on the request which direction to route the request."
As the optimizer is a core part of the database, Oracle has spent several years working
on adding this capability without breaking anything, while making certain it's backwards
compatible.
Subtle things like making sure "that if one app has made a modification to the row store,
and that modified data is also in the in-memory columnar store, we have to synchronize
those two," are why it's taken Oracle several years to catch up to SAP HANA.
Overall, introducing the technology meant that "the disruption was fairly minimal," Shetler
added. "It was like grafting an in-memory store onto an existing set of foundation and infrastructure."
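The dual-format idea Shetler describes, one dataset kept in both row and columnar form with writes synchronized between the two copies, can be sketched in miniature. This is a toy illustration, not Oracle's implementation:

```python
class DualFormatTable:
    """Same data in two layouts: a row store for point lookups and updates,
    and a columnar copy for scans and aggregates."""

    def __init__(self, rows):
        self.rows = [dict(r) for r in rows]                       # row store
        keys = self.rows[0].keys() if self.rows else []
        self.cols = {k: [r[k] for r in self.rows] for k in keys}  # columnar copy

    def point_lookup(self, key, value):
        # An optimizer would route selective lookups to the row store...
        return next(r for r in self.rows if r[key] == value)

    def column_sum(self, col):
        # ...and full-column aggregates to the columnar store.
        return sum(self.cols[col])

    def update(self, key, value, col, new):
        # A write must change both representations to keep them consistent --
        # the synchronization problem Shetler mentions.
        for i, r in enumerate(self.rows):
            if r[key] == value:
                r[col] = new
                self.cols[col][i] = new
```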
Pricing has not yet been disclosed.
In other IT news
Market research firm IDC has just revealed the numbers for one of the worst data storage
quarters in recent history, as enterprise buyers effectively went on a high-end storage strike.
The storage business saw a major drop in the first quarter of this year.
In a nutshell, here's what IDC's statement had to say: "The total (internal plus external)
disk storage systems market generated $7.3 billion in revenue, representing a drop of 6.9 percent
from the prior year's first quarter and a sequential decline of 17 percent compared to the seasonally
stronger 4th quarter of 2013."
IDC storage research director Eric Sheppard said: "The poor results of the first quarter were
driven by several factors, the most important of which was a 25 percent decline in high-end storage sales."
He also singled out the mainstream adoption of storage optimization technologies, a general trend
towards keeping systems longer, economic uncertainty, and the ability of customers to address capacity
needs on a micro and short-term basis through public cloud offerings.
Storage provider EMC led with a 29.1 percent revenue market share, but its revenue declined
8.8 percent year-on-year, worse than the market as a whole (5.2 percent), because high-end
array sales were so weak.
IBM did a lot worse, with a 22.5 percent drop in quarterly revenue year over year. As NetApp and
HP declined less than the market, they arguably gained share.
For its part, Dell had an 8.8 percent revenue drop year over year. NetApp, in the
number two position for revenue market share, saw its share rise year-on-year even though IDC
shows it with a 2.8 percent revenue decline over the same period.
Year over year, the storage market as a whole declined by 6.9 percent in revenue terms,
while NetApp declined only 2.8 percent, meaning it gained a bit of market share.
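The share arithmetic is easy to verify: a vendor that declines more slowly than the market necessarily gains revenue share. Using a hypothetical $100 market and a 15 percent baseline share (made-up figures, not IDC's):

```python
# Hypothetical baseline: a $100 market and a vendor holding 15% of it.
market_prev = 100.0
vendor_prev = 15.0

market_now = market_prev * (1 - 0.069)  # whole market down 6.9%
vendor_now = vendor_prev * (1 - 0.028)  # vendor down only 2.8%

share_now = vendor_now / market_now     # about 0.157, up from 0.150
```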