Nanotechnology could double the density of today's hard drives
March 3, 2013
HGST, the Western Digital subsidiary formerly known as Hitachi Global Storage Technologies, says it has developed a new method of
manufacturing hard-disk platters using nanotechnology that could double the density of today's hard drives.
The new technology uses a combination of self-assembling molecules and 'nano-imprinting', techniques previously associated
with semiconductor manufacturing.
The goal is to assemble patterns of tiny magnetic islands, each no more than ten nanometers wide, or about the width of
approximately fifty atoms.
The resulting patterns pack 1.2 trillion dots per square inch, with each dot able to store a single bit of data. That's
roughly twice the density of today's hard-disk media, and HGST researchers say they are only scratching the surface of
what the technique can do.
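Taking the one-bit-per-dot figure at face value, the quoted density works out to about 150 GB per square inch; a quick back-of-the-envelope check (a Ruby sketch using only the numbers quoted above):

```ruby
# Each dot stores one bit, per HGST's claim of 1.2 trillion dots per square inch.
dots_per_sq_inch = 1.2e12

# Convert bits to bytes, then to gigabytes (10^9 bytes).
bytes_per_sq_inch     = dots_per_sq_inch / 8      # 1.5e11 bytes
gigabytes_per_sq_inch = bytes_per_sq_inch / 1e9   # 150.0

puts "#{gigabytes_per_sq_inch.round} GB per square inch"  # => 150 GB per square inch
```

Halving that, per the "roughly twice today's density" claim, puts current media around 75 GB per square inch.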
"With the proper chemistry and surface preparations, we believe this work is extendible to ever-smaller dimensions," HGST
fellow Tom Albrecht said in a statement.
The self-assembling molecules are so called because they are built from segments of hybrid polymers that repel one another.
When coated onto a specially prepared surface as a thin film, the segments line up into nearly perfect rows.
Once so arranged, the tiny building blocks can be manipulated using other chip-industry processes to form the desired structures
before being nanoimprinted onto the disk substrate.
HGST's key breakthrough was in assembling these otherwise-rectangular features into the radial and circular patterns necessary
for spinning-disk storage, which the company says it achieved through careful preparation of the surface onto which the self-assembling
molecules were applied in the first place.
To be sure, the global chip industry has long eyed nano-lithography as a potential alternative to current photolithography
processes, which have grown increasingly complex and expensive as semiconductor features have shrunk.
While it may one day be possible to assemble such complex components as microprocessors using this type of nanolithography,
many researchers still believe that its more immediate use will be for applications such as disk drives or memory, which are
simpler and more tolerant of the defects that inevitably occur when employing such an immature technology.
In fact, given HGST's many innovations, the first commercial hard drives based on nanolithography may be just a few years
away. According to HGST vice president Currie Munce, the company expects the technology to become cost-effective by the end of
the decade.
In other IT and high tech news
Google says that it will now offer a new open-source compression algorithm called Zopfli, which it describes as a
slower-but-stronger data compression engine than the likes of zlib.
The algorithm comes from Google engineer Lode Vandevenne, who built it during his "20 percent time", the share of the
work week Google lets its staff spend on side projects.
Zopfli is said to produce output 3.7 to 8.3 percent smaller than its rivals'. The downside is that it takes about one hundred
times longer to do the job.
Google acknowledges that tradeoff, noting that Zopfli requires two to three orders of magnitude more CPU time than its rivals
and is therefore “best suited for applications where data is compressed once and sent over a network many times, for example,
static content for the web.”
Over the years, many people have claimed to have created compression systems that can pack data into wonderfully tiny bundles.
So while Google's compression gains are modest and come at a steep cost in compression time, 8 percent is still nothing to be
sneezed at in applications like content delivery to mobile devices, where any reduction in the amount of data transferred means
better battery life and lower bills for data consumption.
And while Zopfli takes far longer to compress data, its output is standard deflate, so existing tools unpack it at the same
speed as rival algorithms, meaning no decompression penalty on the receiving end.
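Zopfli itself is a C library, but the no-penalty claim rests on it emitting ordinary deflate streams, which any stock decompressor can unpack. A sketch of that idea using Ruby's standard Zlib bindings (zlib stands in for the encoder here, since Zopfli produces the same stream format):

```ruby
require 'zlib'

data = "static web content " * 200

# Any deflate encoder could have produced this stream; Zopfli simply
# spends far more CPU time searching for a smaller encoding of it.
compressed = Zlib::Deflate.deflate(data, Zlib::BEST_COMPRESSION)

# The decompressor neither knows nor cares which encoder ran:
# the same Zlib::Inflate call handles every conforming deflate stream.
restored = Zlib::Inflate.inflate(compressed)

raise "round-trip failed" unless restored == data
puts "#{data.bytesize} bytes -> #{compressed.bytesize} bytes compressed"
```

The same "compress once, decompress many times" economics apply: the encoder's extra effort is paid once, while every client inflates at full speed.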
Open sourcing the algorithm therefore makes a lot of sense, given Google is keen on getting more content into more mobile
devices more often and in more places. Almost everyone who owns a mobile device wishes it would do things faster.
Let's not forget the looming wireless spectrum crunch predicted to send mobile data costs rising to unpleasant levels, a
phenomenon a little more compression could ease.
In other IT and open source news
The Ruby community has announced the first stable release of Ruby 2.0, exactly twenty years to the day since Ruby
creator Yukihiro Matsumoto first conceived the language on February 24, 1993.
Ruby 2.0.0-p0, as the release is formally known, represents the first major revision of the language since Ruby 1.9 was
released in December 2007.
The most recent entry in the 1.9 line, Ruby 1.9.3, was released on October 31, 2011. Although the full list of changes since
Ruby 1.9.3 is fairly long, Ruby 2.0 is essentially an incremental release, one that maintains nearly full backward compatibility
with the previous version.
According to the language's maintainers, most developers will find it far easier to migrate from Ruby 1.9 to Ruby 2.0
than it was to move between earlier versions.
Among the language changes in the new release are keyword arguments, which let callers pass arguments to methods by name;
a new percent-literal syntax (%i) that makes it easier to create arrays of symbols; and Module#prepend, which lets methods
mixed in from a module override those defined in the class itself.
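The three features just mentioned can be seen in a short sketch; the method and class names here are invented for illustration, not taken from the release notes:

```ruby
# Keyword arguments: callers pass parameters by name.
# (In 2.0 every keyword needs a default; required keywords came later.)
def greet(name: "world", punct: "!")
  "Hello, #{name}#{punct}"
end

# The new %i literal builds an array of symbols without repeated colons.
states = %i[connecting open closed]   # => [:connecting, :open, :closed]

# Module#prepend places a module *ahead of* the class in method lookup,
# so the module's method overrides the class's own and can still reach
# the original implementation via super.
module Shouting
  def greeting
    super.upcase
  end
end

class Greeter
  prepend Shouting
  def greeting
    "hello"
  end
end

puts greet(name: "Ruby")    # => Hello, Ruby!
puts Greeter.new.greeting   # => HELLO
```

Before prepend, wrapping an existing method this way required alias-based tricks; prepend makes the override explicit in the lookup chain.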
Source: Western Digital