Microsoft to fund additional quantum computing research with MIT
June 13, 2014
Microsoft says it wants to advance the development of quantum computers in partnership with the
Massachusetts Institute of Technology.
Microsoft's head of research, Peter Lee, told the MIT Technology Review digital summit
that the company will be supporting outside research as well as doing its own work at its Station Q
research lab on the University of California's Santa Barbara campus.
In fact, what the software behemoth is after is something that can be exploited as a “fab-friendly” qubit.
To be sure, creating qubits has become almost routine in quantum research labs. In 2012, the University
of New South Wales created a single-atom qubit on silicon.
But creating qubits and getting them to behave properly, maintaining a superposition of states rather than
collapsing to a plain 1 or 0, is difficult. Doing so reliably is harder still, and creating qubits in a
manner that could be handed to some kind of microelectronics foundry is a long way off.
Lee knows this, and he told the MIT Technology Review that as far as Microsoft is aware, the
current approaches to creating qubits don't scale very well.
Microsoft's direction is to work on a “topological qubit”. The description “topological” refers to
how entanglement is created and maintained, and Redmond told MIT that it sees the approach as more
robust than other research into qubits.
Only a cynic would also note that whoever successfully builds a reliable, mass-producible
qubit will hold intellectual property of almost incalculable value, making it unlikely that
Microsoft would follow paths where others have taken the lead.
In other IT and research news
Cloud economics sometimes make a lot of sense, and market forces are now pushing prices
for IBM's SoftLayer cloud down towards those of its rivals Amazon and Microsoft.
IBM is announcing today that it has dropped SoftLayer object storage prices to $0.04 per
gigabyte per month, slightly cut compute prices, and added a low-cost networking service
that lets companies rent a secure, high-bandwidth connection into its cloud.
By reducing its storage price to $0.04 per gigabyte, down from $0.085, SoftLayer has brought its
costs close to parity with rivals Amazon ($0.03 per GB per month), Google ($0.026
per GB per month) and Microsoft ($0.05 per GB per month).
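For a sense of what those per-gigabyte figures mean on a monthly bill, here is a quick back-of-the-envelope sketch. The 50 TB workload is an arbitrary example of ours; the prices are the list figures quoted above, and a real bill would also depend on tiering, request and egress charges:

```python
# Per-GB-per-month object storage list prices, as quoted above.
PRICES_PER_GB = {
    "IBM SoftLayer": 0.04,
    "Amazon": 0.03,
    "Google": 0.026,
    "Microsoft": 0.05,
}

def monthly_cost(terabytes, price_per_gb):
    """Flat per-GB pricing; ignores request and egress charges."""
    return terabytes * 1000 * price_per_gb

# Cheapest first, for a hypothetical 50 TB workload.
for provider, price in sorted(PRICES_PER_GB.items(), key=lambda kv: kv[1]):
    print(f"{provider}: ${monthly_cost(50, price):,.0f} per month for 50 TB")
```

At that size the spread between the cheapest and dearest provider is already over a thousand dollars a month, which is why these cuts matter.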
It also added a connection option named Direct Link, which allows SoftLayer customers to create
private, dedicated connections into the cloud from 18 IBM network points of presence around the world.
This simply means that a customer with a rack in a SoftLayer-linked data center from Equinix,
CoreSite, Terremark, Pacnet, Interxion, TelecityGroup or Telx can now rent a dedicated connection
between its computer equipment and IBM's cloud.
The scheme is similar to Microsoft's ExpressRoute and Amazon's Direct Connect, with
the main variations being a greater diversity of suppliers for IBM's scheme and, of course, lower pricing.
Direct Link connections are available now and cost $147 per month for a 1 Gbps connection or $997
per month for a 10 Gbps one.
This also undercuts Amazon and Microsoft, which charge respectively $219 and $600 a month for equivalent
1 Gbps services.
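Those list prices make the comparison easy to run yourself. A minimal sketch, using only the monthly 1 Gbps figures quoted above:

```python
# Monthly list prices for a dedicated 1 Gbps link into each cloud,
# as quoted above.
MONTHLY_PRICE_1GBPS = {
    "IBM Direct Link": 147,
    "Amazon Direct Connect": 219,
    "Microsoft ExpressRoute": 600,
}

def annual_saving_vs(rival):
    """Yearly saving from choosing IBM's Direct Link over a rival's
    equivalent 1 Gbps offering."""
    delta = MONTHLY_PRICE_1GBPS[rival] - MONTHLY_PRICE_1GBPS["IBM Direct Link"]
    return delta * 12

for rival in ("Amazon Direct Connect", "Microsoft ExpressRoute"):
    print(f"vs {rival}: ${annual_saving_vs(rival)} saved per year")
```

Against Microsoft's price in particular, the gap compounds to several thousand dollars a year per link.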
"Direct Link itself is a similar product offering to others in the marketplace, but the use
case is different because it gives access to SoftLayer's cloud platform and network," said IBM's
SoftLayer manager of product innovation, Marc Jones.
"SoftLayer has had this service available to select customers for several months already, and
IBM's large enterprise install base was requesting this service, so this is the official 'productization'
of the service."
Although SoftLayer has some advantages over its larger rivals – such as bare-metal servers that
guarantee higher performance at the cost of heavier management requirements – IBM has yet to
build out its cloud's set of services to parity with rivals.
But as these announcements show, when it comes to low-end services and storage prices, market
forces are beginning to dictate the actions of cloud providers far more than any individual
provider's strategy. Perhaps Equinix's dream of a so-called intercloud, made up of multiple cloud
providers all linked together by cables in Equinix data centers, is not so far-fetched after all.
In other IT news
Earlier this morning, storage supplier Panasas unveiled improved hardware and software that tackles
RAID rebuild problems by repairing damaged files instead of rebuilding complete disks.
The company is an HPC storage system supplier which is moving into technical computing/big
data use cases in enterprises.
It has more than 450 customers in over 50 countries around the world. The hardware is the
Panasas ActiveStor 16, positioned as a hybrid, scale-out NAS appliance.
It runs v6.0 of PanFS, which has a new RAID 6+ protection scheme. This is the sixth generation
of PAS hardware: the PAS 11 and 12 constituted the fourth generation, both disk-based, and the
PAS 14 was the fifth.
It introduced the use of SSDs to accelerate small, random-access IOs and small random accesses
into large files, and to hold file system metadata.
The PAS 11 and 12 were announced in November 2010 and the PAS 14 in September 2012. A little
under two years later, we have the PAS 16, which has faster controllers (director blades) using
2.53 GHz quad-core Xeon CPUs.
These are 19 percent faster than PAS 14 director blade processors and have 4 times more cache –
48 GB of memory.
Overall, the PAS 16 storage blade moves up from 4 TB disk drives to 2 x 6 TB HGST helium
drives, and doubles SSD capacity to 240 GB. There is now up to 122.4 TB of storage capacity in
each 4U enclosure, with ten of them in a rack providing 1.2 PB.
As with the previous PAS products, data throughput reaches 150 GB/sec. The big news, though,
is the RAID 6+ triple-parity protection scheme. President and CEO Faye Pairman said the PAS 16 had
undergone a hardening process to make it optimal for enterprise big data applications, and that
PanFS 6.0 is the most important software release for Panasas in the past five to six years.
As big data sets get larger, and disk drives along with them, the time to rebuild a failed drive
stretches out, so much so that, in large arrays, a second disk drive can fail before the first is
rebuilt, causing data loss. To be sure, RAID 6 protects against such a situation. But disks have
reached 6 TB and will soon be at 8 and 10 TB, so rebuild times will keep growing, which is one
issue in itself, and which also exposes arrays to the risk of a third disk failure while two
disks are being rebuilt.
What Panasas’ RAID 6+ scheme does is end the default rebuilding of an entire failed
drive, rebuilding only the damaged data components instead.
It can do this because, at its core, PanFS is an object-based parallel file system. The flash
in the PAS 16 has also been tuned for RAID 6+ operations. In a disaster recovery exercise,
individual files can be restored instead of the whole system.
Product marketer Geoffrey Noer said that overall reliability increases as you scale the
system, instead of decreasing, and that “Erasure codes protect every file individually.”
With RAID 6+, after a triple simultaneous disk failure the percentage of files needing to be
restored approaches zero at scale.
He added that RAID 6+ provides a 150-times increase in overall reliability compared to RAID 6
and that, with the PAS 16 and PanFS v6, “RAID rebuild performance scales linearly.” As the scale
of the system grows, the chance of any three-drive failure affecting any given file grows smaller.
And RAID 6+ decreases the chances of having to do a RAID rebuild at all. Small files have three
copies stored, all in flash. Filesystem metadata is quadruple-mirrored. Larger files are striped
across storage blades.
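That scaling claim can be illustrated with a toy model. Suppose each large file is striped across a fixed number of drives chosen uniformly at random from the whole pool; the stripe width of 10 is our assumption, and the model ignores real placement policy and the mirrored small files. With double parity, a given triple failure then damages only those files whose stripe happens to contain all three failed drives:

```python
from math import comb

def fraction_of_files_hit(n_drives, stripe_width=10):
    """Fraction of files whose stripe contains all three failed drives,
    for stripes placed uniformly at random over n_drives.
    With double parity, only those files need any restoration."""
    return comb(stripe_width, 3) / comb(n_drives, 3)

for n in (20, 100, 1000):
    print(f"{n} drives: {fraction_of_files_hit(n):.4%} of files affected")
```

The fraction falls off rapidly as the pool grows, which matches the claim that the percentage of files to restore approaches zero at scale.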
However, the system isn't given a five or six nines availability rating. Instead, Panasas
talks about an always-on model.
While per-file rebuilds are taking place, the affected portions of a file may be unavailable,
but the remainder of the file is.
When we made the point that all-flash storage using TLC SSDs would greatly increase data access
speeds, and that certain use cases needing fast access to big data volumes, such as financial
trading, could bear the cost, Pairman said it is now easy to understand why Samsung Ventures
invested in the company.
The presence of Samsung Ventures as an investor was news to us. Dong-Su Kim, vice president
of Samsung Venture Investment Company, sits on Panasas’ board along with Faye Pairman, co-founder
Garth Gibson, and two partners from Mohr Davidow Ventures.
A March 2014 podcast slide lists Intel Capital as well as Samsung Ventures as investors. Pairman
said it's becoming an all-flash world and that Panasas sees itself offering tiers of flash in the
PAS hardware. That needs an intelligent file system to drive it, and PanFS is exactly that: a
reliable, hardened, performant file system.
So we might begin to imagine all-flash storage blades in future PAS systems: a PAS 18, for example,
which, we speculate, could arrive in 2016, assuming a 20-to-24-month interval between Panasas
hardware generations.
Panasas ActiveStor 16 and PanFS 6.0 are available for order now and expected to ship in September.
PanFS 6.0 will be available for PAS 11, 12, 14 and 16 systems.
The list price for a 122 TB PAS 16 is $160,000, with an 82 TB model going for $130,000. We'll keep
you posted on this and other stories.