IBM, AT&T to present new research at next week's OFC-NFOEC conference
March 14, 2013
Research scientists at IBM say they have achieved data throughput of 25 Gb per second over a link spanning just a few millimetres,
while consuming just 24 milliwatts of power.
Meanwhile, AT&T's research team says it can push 400 Gb per second down 12,000 kilometres of fibre cable using a new modulation
technique developed by the telecoms carrier.
Both companies will present their research work at next week's OFC-NFOEC conference in Anaheim, California, where the future
of optical communications will be discussed by about 11,950 delegates.
These new technologies lie at two extremes of the science of sending data through optical cable. AT&T is clearly focused on carrying
data over great distances, while IBM just wants to reach the other side of the board in the least time and with the least power.
Getting data around inside a printed circuit board may seem trivial, but IBM engineers are increasingly looking at radio and
optical links between components that can't be connected easily across a packed motherboard.
Funded by DARPA, the IBM team is looking to fit the technology into the "exascale" computers it expects to be developed by
2019-2020.
IBM has created a new laser that consumes only 24 milliwatts (0.024 W) of power while transmitting data at 25 Gb per
second, breaking previous records thanks to a collection of technologies too obscure to have decent acronyms: silicon-on-insulator
complementary metal-oxide-semiconductor (SOI CMOS) combined with advanced vertical-cavity surface-emitting lasers (VCSELs),
according to Science Daily.
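Dividing the quoted power by the quoted bit rate gives an energy-per-bit figure, a common way of comparing low-power links. This is a minimal sketch using only the two numbers from the article; the picojoule-per-bit framing is our own illustration, not a figure IBM quotes.

```python
# Rough energy-per-bit figure for the IBM link, derived from the numbers
# quoted above: 24 mW of power at 25 Gb/s.
power_w = 24e-3          # 24 milliwatts, expressed in watts
bit_rate_bps = 25e9      # 25 gigabits per second

energy_per_bit_j = power_w / bit_rate_bps
print(f"{energy_per_bit_j * 1e12:.2f} pJ per bit")  # prints "0.96 pJ per bit"
```

At under a picojoule per bit, the appeal for densely packed boards is that the optical link's power draw barely registers against the chips it connects.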
Meanwhile, telecoms behemoth AT&T has developed a new encoding technique that can apparently throw 400 Gb per second down
12,000 km of fibre cable by running eight separate signals, 100 GHz apart, multiplexed by wavelength to pack more data into the
fibre.
Disappointingly, though, the transmitter and receiver weren't actually 12,000 km apart: instead the signal was looped repeatedly
over the same 100 km length, degrading slowly enough to suggest it could have survived the long trip had it needed to.
Radio communications today use only about 100 GHz of spectrum in total, right at the bottom of the dial, but we've spent
decades squeezing more and more data into each megahertz of bandwidth.
Visible light, way up in the terahertz range, can't go through walls or bounce off the ionosphere, but there are plenty more
frequencies available below it, and we still have a lot to learn about how to make the most of them and pack in the maximum
amount of data today's technology allows.
In other technology news
NetApp's own marketing department has sweetened the company's SPC-2 benchmark results in a further attempt to help NetApp
stand out from a rapidly growing crowd, emphasizing the price-to-performance characteristics of its latest E-5500 storage
array.
The Storage Performance Council (SPC) is a storage industry benchmarking body that gives the IT industry common, accepted
standards for objectively measuring storage array transaction performance (SPC-1) and large-scale, sequential data movement
performance (SPC-2).
With SPC-2, enterprise data suppliers can say that their system moved so many MB/sec of data at a particular price/performance
ratio.
Those benchmark results enable valid and accurate comparisons to be made between different suppliers' disk storage arrays
when performing the various benchmark tests.
What NetApp's numerical alchemists have done is take the good SPC-2 result they obtained with their E-5500 storage
array and widen the numerical gap between it and other systems by using a new measure calculated from the benchmark
figures.
IT industry insiders call this "Doing a Dot-Hill" after that supplier employed similar creativity to make its array stand
out from competing products.
The E-5500 is NetApp's latest E-Series disk array, and it is aimed at the large data sets and high-performance computing (HPC) segment
of the IT industry.
SGI is OEM'ing the E-5500 disk array controller and populating it with drives, calling the result its IS-5600. The E-5500
disk controller has an ASIC instead of the expected x86 commodity processor. It can manage 384 drives compared
to the previous model's 192, and scale to 1.2 PB instead of just 576 TB, delivering on average about 2.5 times the I/O performance.
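A quick back-of-the-envelope check of those scaling figures shows the per-drive capacity is roughly constant between generations, so the capacity jump comes almost entirely from doubling the drive count. The drive counts and raw capacities are from the article; the per-drive numbers are our own derived arithmetic.

```python
# Sanity check of the E-5500 scaling figures quoted above.
old_drives, old_capacity_tb = 192, 576     # previous model
new_drives, new_capacity_tb = 384, 1200    # E-5500 (1.2 PB = 1,200 TB)

print(old_capacity_tb / old_drives, "TB per drive (previous model)")  # prints 3.0
print(new_capacity_tb / new_drives, "TB per drive (E-5500)")          # prints 3.125
```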
NetApp product and solutions marketing manager Brendon Howe said that the company had set out to "design a new product that
provides industry-leading bandwidth per dollar spent while improving density and reliability."
The Sunnyvale-headquartered company said that SGI had run an SPC-2 test of its IS-5600 series which "demonstrates the highest
throughput per spindle by more than 2.5 times over the nearest non-NetApp published result."
However, there is no such throughput-per-spindle SPC-2 measure. Instead, the council's test results provide throughput in
MB/sec (MBPS), and price/performance is the configuration's list price divided by the MBPS figure.
This wasn't a good enough way of showing the E-5500's data-moving prowess, so NetApp's number-crunchers devised a new way of
presenting the results, dividing the MBPS total by the number of disk drive spindles in the tested configuration, and so came
out on top of the list by a wide margin.
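The two figures in play can be sketched side by side: the official SPC-2 price/performance metric (list price divided by MBPS) and NetApp's derived throughput-per-spindle figure (MBPS divided by drive count). The formulas follow the article's descriptions; all the numbers below are hypothetical placeholders, not published SPC-2 results.

```python
# The official SPC-2 price/performance metric versus NetApp's derived
# per-spindle metric, as described above. Inputs are hypothetical.

def price_performance(list_price_usd: float, mbps: float) -> float:
    """Official SPC-2 metric: dollars per MB/sec of throughput."""
    return list_price_usd / mbps

def mbps_per_spindle(mbps: float, spindles: int) -> float:
    """NetApp's derived metric: throughput divided by drive count."""
    return mbps / spindles

# Hypothetical configuration: $250,000 list price, 8,000 MBPS, 120 drives.
print(f"${price_performance(250_000, 8_000):.2f} per MBPS")   # prints "$31.25 per MBPS"
print(f"{mbps_per_spindle(8_000, 120):.2f} MBPS per spindle") # prints "66.67 MBPS per spindle"
```

Note why the derived metric flatters a dense array: dividing by spindle count rewards configurations that hit their MBPS figure with fewer drives, which is precisely the comparison NetApp wanted to foreground.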
Now NetApp can say the test results "demonstrate the highest throughput per spindle by more than 2.5 times over the nearest
non-NetApp published result."
Well, not exactly. We added in the XIV result, using data from an SPC-2/E energy efficiency version of the benchmark. NetApp
is still ahead by a good margin though, at 73.80 MBPS per spindle.
Source: IBM and AT&T.