AMD to build pin-compatible 64-bit x86 and ARM SoCs

May 6, 2014


AMD said earlier today that it will create pin-compatible 64-bit x86 and ARM SoCs in a new initiative that it's calling Project SkyBridge.

As part of the effort, AMD has licensed the ARMv8 architecture and will design its own ARM-based processors.

"AMD is the only company that can bridge ARM and the x86 ecosystems," said AMD's general manager of global business units Lisa Su at the announcement event in San Francisco yesterday.

"We said we were going to be ambidextrous," said AMD CEO Rory Read, also at the event. "We were going to do something that no one else on earth could do."

Pin compatibility, Su said, will bring "tremendous flexibility to the market," since an OEM can design and build a single motherboard that can be fitted with either an x86 or an ARM SoC.

"It really is a design framework," she said. "It's a family of products that we'll be putting out starting first in 20-nanometer technology." The first products in that family, planned for next year, will be what AMD has dubbed APUs – accelerated processing units – and will include AMD's Graphics Core Next (GCN) GPU technology.

Both the ARM and x86 parts will be built around a heterogeneous system architecture (HSA). AMD currently supports HSA in its "Kaveri" desktop and notebook chips and in its "Berlin" Opteron server chip, demoed at the Red Hat Summit in April.

On the x86 side of the Project SkyBridge ambidexterity, the APUs will be based on next-generation "Puma+" compute cores, an update to the "Puma" cores announced last week with the rollout of the low-power, low-cost "Beema" and "Mullins" SoCs.

On the ARM side, Su said, "We're going to optimize 64-bit ARM Cortex A57, again in the same footprint as we have our x86 capability." These low-power ARM-based APUs will also be AMD's first HSA-capable chips that support Android.

The first Project SkyBridge APUs will be targeted at the embedded and client markets. "It's an opportunity for us to help customers innovate, differentiate, and also reduce their time to market," Su said.

For some markets (she used networking as an example), Project SkyBridge will allow customers to simplify their code base from one that today also spans MIPS and PowerPC devices to just x86 and ARM running on a single motherboard design.

"It's just way too expensive to support all of those disparate architectures across a single ecosystem or several ecosystems," Su said.

After the first Project SkyBridge parts appear in 2015, the following year will see AMD move beyond ARM-designed and licensed compute cores such as the Cortex-A57 to create its own ARM core designs.

"The very key piece of differentiation for us," Su said, "is really around developing our own ARM cores. So today we are also announcing that we are an ARM architectural licensee, and that we are well on our way with developing our own ARM cores," the first one being code-named "K12".

AMD's ARM chips, Su said, will find their way into everything from embedded devices to servers, and will be "a huge addition to our semi-custom portfolio."

As AMD CTO Mark Papermaster, also at the event, explained, the effort involved in both Project SkyBridge and K12 is aided by AMD's ability to mix and match its intellectual property in CPU, GPU, APU, fabric, and other areas.

The company's product-development cycle, he said, has been "re-engineered from top to bottom," and will eventually result in a "from-scratch, ground-up, optimized design of the ARM-64 architecture marrying all of that deep expertise we have at AMD."

It's all about simplification, reusability, and flexibility, Papermaster said, giving on-chip fabric as an example. "It doesn't care if you're x86 or ARM," he said. "An ambidextrous network-on-chip will put all these pieces together – our IP, the IP of our partners, IP of a third party, etc."

Amid all of this talk about ARM, Papermaster took a moment to reassure Intel ISA devotees. "Of course we're not backing off x86," he said. "That's the beauty of this ambidextrous approach."

In other IT news

The battle between Amazon Web Services, Google and Microsoft for the public cloud gets more and more attention these days, but global managed services provider Dimension Data has thrown its own hat into the ring with a new hybrid Windows Server cloud.

The company's plan is to offer enterprise customers the chance to migrate a legacy Windows app into a cloudy environment where it can be given the managed services treatment, complete with service level agreement.

That SLA, says David Hanrahan, general manager of Dimension Data Cloud Services, is likely to be rather more granular than the guarantees offered by a pure-play public cloud.

“If everyone's Cloud apps had been designed to run natively, that would be great,” he said. “The cost of re-platforming them is significant. We are getting traction to manage them and patch them.”

“Clients want to be able to migrate legacy 32-bit software to the cloud,” he said, adding that he feels many enterprises will take their first steps into the cloud in that manner.

That's perhaps an odd observation given that the likes of AWS repeatedly point to very considerable adoption by big business, but not a surprising one given that he's selling just that kind of service.

In Hanrahan's defence, he also said that this kind of work is as balance-sheet friendly as its traditional engagements, thanks largely to a “build once and redeploy at will” approach to building out its cloudy offerings.

Perhaps a little surprisingly, Dimension Data isn't alone in this market. The likes of Fujitsu and CSC also operate clouds at very large scale, offering the elasticity of pure-play public clouds with the promise of rather more grooming, feeding, and SLA backing.

Dimension Data certainly thinks there's plenty of upside in this kind of market: last week it said it could quadruple its US$1 billion data centre business by 2018.

Some of that growth is expected to come from acquisitions like the one announced today: IT services company Nexus came under the 'DiData' umbrella for an undisclosed sum, bringing with it 19 offices across the United States.

In other IT news

The Hadoop community said late yesterday that it's working on new patches that will bring Docker into the data management system, while independent benchmarks are already showing the technology to be considerably faster than traditional server virtualization.

Docker is an open source Linux containerization technology that uses kernel features such as namespaces and cgroups (originally via the LXC tooling) to let a system admin run multiple apps, with all their dependencies, in secure sandboxes on the same underlying Linux operating system. That makes it an attractive alternative to server virtualization, which bundles a copy of the OS with each app.
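As a minimal illustration of that model, the Python sketch below shells out to the docker command-line tool to run a one-off command in a container; it assumes a local Docker daemon is running, and the image name is an arbitrary placeholder.

    import subprocess

    # Run a throwaway command inside an isolated container. --rm removes
    # the container when the command exits; no guest OS boots, because
    # the container shares the host's kernel.
    result = subprocess.run(
        ["docker", "run", "--rm", "ubuntu:14.04", "uname", "-r"],
        capture_output=True, text=True, check=True,
    )

    # Prints the *host* kernel version: unlike a VM, the container has
    # no kernel of its own.
    print(result.stdout.strip())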

In a set of benchmarks released on Thursday by an IBM engineer, Big Blue demonstrated that Docker containerization has some huge advantages over the KVM hypervisor from an overall performance perspective.

Alongside this, we also discovered some pretty impressive work by the Hadoop community to bring the technology into the eponymous data analysis and management engine.

This will add more punch to the idea that Docker could become an eventual replacement for traditional server virtualization approaches, granting businesses huge benefits from an open source technology.

To start with, benchmarks conducted by IBM show that Docker has a number of performance advantages over the KVM hypervisor when running on the open source cloud infrastructure tool OpenStack.

An informative post published by IBM's Boden Russell goes into further details about the results. "From an OpenStack Cloudy operational time perspective (boot, reboot, delete, snapshot, etc.) docker LXC outperformed KVM ranging from 1.09x (delete) to 49x (reboot)," Russell wrote.

"Based on the compute node resource usage metrics during the serial VM packing test, Docker LXC CPU growth is approximately 26 times lower than KVM. On this surface, this indicates a 26x density potential increase from a CPU point of view using docker LXC vs a traditional hypervisor. Docker LXC memory growth is approximately 3 times lower than KVM. On the surface, this indicates a 3x density potential increase from a memory point of view using docker LXC vs a traditional hypervisor," he added.

Not only does Docker have desirable resource-usage characteristics, but the way it allows developers to package applications has attracted attention from the open source Hadoop community.

Recently, we learned that developers are working to add Docker support to YARN, a crucial component of Apache Hadoop 2.0, with the goal of increasing the usefulness of both technologies.

YARN was introduced in version two of Apache Hadoop, and it lets the software run multiple kinds of applications within Hadoop rather than just MapReduce jobs.
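Conceptually, the work amounts to having YARN's NodeManager launch each task inside a Docker container rather than as a bare host process. The Python sketch below is a hypothetical rendering of that substitution (the function, image name, and mount path are invented for illustration), not the actual Hadoop patch:

    import subprocess

    def launch_task(command, image="hadoop-task-image", work_dir="/tmp/task"):
        # A bare-process launch would be: subprocess.Popen(command, cwd=work_dir)
        # Wrapping the same command in docker run gives the task its own
        # packaged dependencies and a sandboxed filesystem.
        return subprocess.Popen(
            ["docker", "run", "--rm",
             "-v", work_dir + ":/work",   # expose the task's working dir
             "-w", "/work",               # run the command from there
             image] + command
        )

    proc = launch_task(["sh", "-c", "echo map task running"])
    proc.wait()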

Source: AMD.
