China wants to decrease its reliance on U.S. software companies
August 25, 2014
In yet another such episode in a short span of time, China is once again taking a nationalist approach
to technology, this time with its own PC and mobile operating systems. And it looks like the project
has been in the works for a while.
The state-controlled Xinhua News Agency reported the announcement late yesterday, citing U.S.
surveillance as one of the reasons Chinese engineers are developing their own operating system
for desktop computers and mobile devices.
The new software would compete directly with Microsoft's Windows and Google's Android, and it
would be available to Chinese consumers and government personnel alike.
The operating system is slated for release in October, Xinhua reported. The project is being
completed by an alliance of various developers operating under the guidance of Guangnan Ni,
who co-founded Chinese computer maker Lenovo in 1984.
He remains one of the top figures in the country's technology scene. Ni told Xinhua that the Chinese
government should lead the project going forward.
He noted that software developers in China have been building their operating systems on top
of Google's Android platform instead of starting from scratch, and that the government
could help move them to the new Chinese operating system.
To be sure, the Chinese government exercises tight control over the country's software market,
and this is the latest sign that authorities are serious about going domestic on technology.
When Microsoft ended support for Windows XP on April 8 of this year, it left nearly three-quarters
of Chinese computers exposed to unpatched bugs and malware. China's government, angered at being
left in the lurch, promised that it would look at all its options.
China then took the unusual step of blacklisting Microsoft's latest operating system, Windows 8,
banning it from all government computers.
The Chinese news agency named Microsoft's "monopoly" as one reason for bringing production domestic
rather than upgrading government computers to Windows 8.
But it also cited U.S. spying as a reason for backing its own operating system, suggesting that
the Chinese government is worried the U.S. National Security Agency might be inserting backdoors into
U.S.-made software like Windows and Android, the same concern U.S. officials have voiced about
Chinese networking firm Huawei.
In other IT news
Some industry observers are now saying that the ARM processor architecture will graduate to production servers soon.
However, before ARM servers can ship in any significant volume, a standardized hardware platform that specifically
targets the data center is an absolute prerequisite.
At least that's the view of Jon Masters, chief ARM architect at enterprise Linux vendor Red Hat, who addressed
the topic during a session at the LinuxCon 2014 conference in Chicago last Thursday.
Red Hat and several others (most notably the Linaro consortium, of which Red Hat is also a member) have
been working on getting Linux ready for ARM servers, and vice versa, for several years.
But according to Masters, one specific challenge has been convincing hardware vendors that what
has worked for ARM on mobile devices won't work for the data center, since those are two very
different environments.
"A lot of early servers, not just in the ARM case but with other architectures, were built
using what I call an embedded mindset," Masters said. "So they continue what I affectionately
call the embedded zoo, which is really applying the design philosophy that you take with a mobile
phone and applying that to a server."
It's not that Masters sees anything wrong with how phone vendors have been building their
devices; on the contrary, he acknowledges that the embedded design philosophy has served Apple and the various
Android mobile-makers extremely well.
But these efforts have been successful in large part because smartphone vendors build their hardware
so that the software is welded to the hardware as a fully integrated system.
Whether they use an off-the-shelf ARM system-on-chip (SoC) component or create their
own, as Apple and Samsung have both done, each device they produce typically contains numerous
software adaptations for its own, very specific hardware. And that's what's important here.
The concept of highly integrated, power-conserving SoCs could also be a huge boon to the data
center, Masters said. But having each chipmaker design its SoCs to entirely different specifications,
the way they do for the embedded market, simply doesn't work for servers.
"To be sure, general purpose computing platforms differ a lot from embedded systems," he
explained. "Software does not ship with the hardware. They're not welded together. People buy
hardware from their vendor of choice, and then they get their operating system from their vendor
of choice, and they need that to work as a complete system."
"If I've got 20 different possibilities for wiring up a serial port on a server, there's a
problem," says Masters.
"We're not just talking about choosing between Linux and some other operating system here,
either. When today's IT system admins buy a server, they also expect to be able to wipe whatever
Linux distribution it came with and install another one of their choice, something they are familiar with.
Yet with ARM SoCs designed for the embedded market, there's no such guarantee," he added.
"There's no standard that tells you, for example, 'Here's exactly how the system is going to boot,
here's how you're going to find the kernel', etc, etc," Masters said. "Not 'on this board go here and
on that board go there,' but 'here's one way to do it.' There isn't that in some of these embedded technologies."
Masters doesn't believe that the software solutions developed for the embedded market,
like Device Tree and the U-Boot universal bootloader, are the right way to go for servers, either.
They simply don't provide enough abstraction above the hardware to allow system admins and IT managers
to treat ARM servers interchangeably, the way they do their existing x86 systems.
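Masters' point about abstraction can be sketched with a toy model. In the sketch below, the board names, register addresses, and table format are all invented for illustration; the contrast it draws is between an OS that needs a hand-maintained table for every board and one that can discover hardware from a standardized, firmware-provided description.

```python
# Illustrative sketch only: the board names and addresses below are
# invented, not real hardware.

# Embedded-style: the OS carries a per-board table, because each SoC
# wires up even a simple serial port differently.
BOARD_UART_TABLE = {
    "vendor-a-soc": {"base_addr": 0x1000_0000, "clock_hz": 24_000_000},
    "vendor-b-soc": {"base_addr": 0xFE20_1000, "clock_hz": 48_000_000},
    # ... one new entry for every new board, forever.
}

def find_uart_embedded(board_name):
    """Fails for any board the OS has never heard of."""
    return BOARD_UART_TABLE[board_name]  # KeyError on unknown hardware

# Server-style: firmware describes the hardware in one standard format,
# so a generic OS image can boot on a board it has never seen.
def find_uart_standardized(firmware_table):
    """Works on any compliant machine; no per-board knowledge needed."""
    return next(dev for dev in firmware_table if dev["class"] == "uart")

# An OS booting on an unfamiliar board just reads the firmware table.
unknown_board_firmware = [
    {"class": "nic",  "base_addr": 0x9000_0000},
    {"class": "uart", "base_addr": 0x9001_0000, "clock_hz": 1_843_200},
]
uart = find_uart_standardized(unknown_board_firmware)
print(hex(uart["base_addr"]))  # found without a board-specific table
```

This is, in miniature, what x86 servers get from standards like ACPI and UEFI, and what Masters argues ARM servers need before admins can treat them interchangeably.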
"What we need instead are standardized hardware devices. In order to boot the system that
we're using, we have to have a certain level of standards, going in. If for example I've got
20 different possibilities for wiring up a serial port on a server, there's a problem," Masters
In other IT news
Market research firm IDC suggests that software-defined networking (SDN) could generate $8 billion
in revenue in 2018, an increase over the $960 million it will account for this year. IDC says the $8 billion
will come as businesses buy more converged infrastructure.
To be sure, selling $8 billion of anything is impressive, but as SDN is potentially disruptive,
it is worth exploring what that big pile of cash will mean in the context of the wider networking market.
IDC has put its name to a $50.15 billion number for networking equipment sales in 2018,
taking into account ethernet switches, routers, WLAN, WAN, enterprise video and telepresence systems,
plus fibre channel and InfiniBand technology.
Sales of those categories of equipment will increase from $42.5 billion in 2014, so will
actually grow faster than SDN in dollar terms but much slower in terms of annual growth rates.
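Quick arithmetic on the IDC figures quoted above bears this out:

```python
# Check of the IDC figures quoted above (all in billions of US$).
sdn_2014, sdn_2018 = 0.96, 8.0
net_2014, net_2018 = 42.5, 50.15

# Absolute dollar growth over the four years
sdn_delta = sdn_2018 - sdn_2014      # about $7.0B
net_delta = net_2018 - net_2014      # about $7.7B

# Compound annual growth rate over the same period
sdn_cagr = (sdn_2018 / sdn_2014) ** (1 / 4) - 1   # roughly 70% per year
net_cagr = (net_2018 / net_2014) ** (1 / 4) - 1   # roughly 4% per year

print(f"dollar growth: SDN +${sdn_delta:.2f}B vs networking +${net_delta:.2f}B")
print(f"CAGR: SDN {sdn_cagr:.0%} vs networking {net_cagr:.0%}")
```

So the established gear market adds slightly more absolute revenue, while SDN grows at a vastly higher annual rate from its small base.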
It's harder to say whether networking gear is in decline, because sales in the years leading
up to 2013 weren't exactly buoyant, owing to bleak global economic conditions.
In other IT news
SanDisk said this morning that it is launching a new Ultra II solid state drive (SSD), aimed at
retrofitting PCs, that uses lower-cost 3-bits-per-cell NAND technology.
TLC, or 3-bits-per-cell, flash stores 50 percent more data in each cell than MLC (2 bits per cell) and is
cheaper to make on a cost-per-bit basis.
However, TLC's program/erase (P/E) cycle count, i.e. the number of times the flash can be rewritten,
is lower than MLC's, typically measured in hundreds of cycles instead of thousands.
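The density and endurance trade-off can be put in rough numbers. In the sketch below, the P/E cycle counts and write-amplification factor are illustrative assumptions consistent with the "hundreds vs. thousands" figures above, not SanDisk's specifications:

```python
# Back-of-the-envelope sketch of the TLC trade-off described above.
# Cycle counts and write amplification are ASSUMED figures, not specs.

BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3}

# Density: TLC stores 3 bits where MLC stores 2, i.e. 50% more per cell.
density_gain = BITS_PER_CELL["TLC"] / BITS_PER_CELL["MLC"] - 1
print(f"TLC density gain over MLC: {density_gain:.0%}")

# Endurance: lifetime writes ~ capacity * P/E cycles / write amplification
def lifetime_writes_tb(capacity_gb, pe_cycles, write_amplification=2.0):
    return capacity_gb * pe_cycles / write_amplification / 1000

# Assumed: 500 cycles for TLC ("hundreds") vs 3000 for MLC ("thousands")
tlc = lifetime_writes_tb(960, 500)    # ~240 TB over the drive's life
mlc = lifetime_writes_tb(960, 3000)   # ~1440 TB over the drive's life
print(f"960 GB drive, assumed endurance: TLC ~{tlc:.0f} TB vs MLC ~{mlc:.0f} TB")
```

Even under these rough assumptions, a 240 TB lifetime is ample for a consumer PC writing a few tens of gigabytes a day, which is why TLC's lower endurance matters far more for business workloads.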
Of course, that has restricted its use in business flash applications, until today. The Ultra II
is an update of SanDisk's Ultra product, first announced in July 2011, and radically increases
its performance and capacity.
The original product had 60 GB, 120 GB and 240 GB capacity points, whereas the new one starts at 120 GB
and passes through 240 GB and 480 GB models, up to a 960 GB high point.
The original device delivered sequential reads of up to 280 MB/sec and sequential writes of up to 270 MB/sec.
The Ultra II blows those numbers away, with reads of up to 550 MB/sec and writes of up to 500 MB/sec.
Random performance is up to 99,000 read IOPS and 83,000 write IOPS. It's helped by the so-called nCache 2.0,
which sets aside a portion of the flash to run in faster SLC mode and so speed things up even more.
The interface has been sped up too, from 3 Gbit/s SATA to 6 Gbit/s. SanDisk also appears to have
lengthened this TLC product's endurance, since it is offering a three-year warranty.
The new products carry a 1.75-million-hour MTBF rating, but no figure is given for total terabytes
written or full drive writes over the life of the drive, which leads us to think that the endurance
may be inferior to that of MLC SSDs, although we could not confirm this.
The new drives also have shock-resistance features, SanDisk says, making them more physically robust.