Cisco cancels development of its ACE load balancer modules
September 24, 2012
Internet networking giant Cisco has confirmed that it has stopped development of its Application Control Engine (ACE) load
balancer modules for its high-end switches and routers, and its competitors are now trying to raid its installed customer base.
Upstart A10 Networks has struck first with a Cisco ACE trade-in deal. A10 is offering Cisco ACE customers a credit of
no less than $24,000, as well as installation and migration services, if they move over to an AX Series application delivery
controller (ADC) or to A10's SoftAX implementation, which runs the AX ADC software inside a virtual machine.
The initiative was announced after Cisco confirmed rumors last week that it had completely halted development of the ACE line.
The key element to remember here is what Cisco said at the end of its statement about ACE: "Cisco is not exiting the market,
but as the market is rapidly evolving, we are looking at new methods to deliver our load balancing technology. We will share
additional details with the IT community as they become available."
However, just because Cisco is stopping development of its ACE load balancing modules doesn't mean that the company
is getting out of the load balancing business entirely. It's easy to imagine Cisco taking its own ACE code, rolling it up inside
a VMware ESXi virtual machine, and integrating the whole thing into its Unified Computing System modular systems.
It's also possible that Cisco could partner with any of the other suppliers of load balancers and application delivery
controllers and do the same thing. It's difficult to imagine, though, that Cisco would partner with market leader F5 Networks,
which has the lion's share of the load balancing/ADC market with its BIG-IP appliances. But with almost $49 billion in cash,
Cisco has plenty of options on its side.
One of the more interesting courses of action might be to acquire one of the smaller players and then take the battle to F5.
For its part, A10 Networks, which is still privately held and has raised $38 million in venture capital funding since it
was founded in late 2004, is the obvious first choice as a Cisco acquisition target.
F5 Networks has a market capitalization of $8.5 billion, and would give Cisco its much-wanted 60 percent share in this
market segment, but probably at a hefty premium - perhaps as high as $10 to $11 billion.
If Cisco were going to do such a deal, it should never have let the cat out of the bag that it was pulling back on the ACE
products until after the deal was done.
Acquiring Citrix would put Cisco into an even tougher position with its partners EMC and VMware, while Riverbed is a doable
deal that wouldn't dent the Cisco cash pile too much.
Radware, whose Alteon VA virtual appliance runs on KVM and Xen hypervisors, could also be easily snapped up by Cisco, but it
partners with Juniper Networks to offer software-only ADCs for Juniper MX routers, so this deal might be harder to do.
Whatever Cisco is going to do, it had better do it fast if it wants to retain a presence in the load balancing segment of
the business. The sharks are already circling.
In other IT and networking news
Intel's new Xeon E5 processors offer considerable power and speed, but if you want to get the most performance out of
them, particularly in virtualized server environments doing real transaction processing, you can't afford to cut corners
on main memory.
Server memory optimization specialist Inphi, which has a vested interest in promoting load-reduced memory modules (LRDIMMs),
put an x86 server through its paces running a virtualized online transaction processing workload that simulates a busy
online store, to show just how much extra work you can get out of a server if you move to LRDIMM sticks instead of their
RDIMM counterparts.
LRDIMM memory replaces the register on a DDR3 memory module with a buffer chip that allows the memory chips on the module
to run at higher clock speeds, which greatly boosts performance, in certain cases by up to 45 percent. The buffer chip also
allows twice as many memory chips to be put on each channel, thereby increasing capacity even further.
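To put the capacity-doubling claim in concrete terms, here is a back-of-the-envelope sketch. The channel count, slots per channel, and stick sizes are illustrative assumptions, not figures from the Inphi test:

```python
# Per-channel capacity comparison, assuming a hypothetical DDR3 controller
# with 4 channels and 3 DIMM slots per channel (illustrative, not Inphi's rig).
CHANNELS = 4
DIMMS_PER_CHANNEL = 3

rdimm_gb = 16    # assumed size of a conventional RDIMM stick
lrdimm_gb = 32   # LRDIMM stick with twice the chips behind the buffer

rdimm_total = CHANNELS * DIMMS_PER_CHANNEL * rdimm_gb    # total with RDIMMs
lrdimm_total = CHANNELS * DIMMS_PER_CHANNEL * lrdimm_gb  # total with LRDIMMs

print(f"RDIMM total:  {rdimm_total} GB")    # 192 GB
print(f"LRDIMM total: {lrdimm_total} GB")   # 384 GB, i.e. double
```

With twice the chips per stick and the same slot count, the box simply tops out at double the memory, before you even count the clock-speed advantage.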
The old and trusted adage that more RAM is your best friend in any computer or server system is still alive and well, and
Inphi is happy to repeat that advice when it comes to the performance of any system.
But to fully support LRDIMM memory, the on-chip memory controllers on a server CPU have to be tweaked, which is why you
can't just add it to any old server. For its part, Inphi makes the buffer chips used by Samsung, Hynix and Micron to craft
LRDIMM sticks, so it wants to demonstrate how good its solution is.
Intel's Xeon E5 processors support LRDIMM, and so do AMD's Opteron 6200 processors, so this is not an Intel-only topic.
It's highly likely that future Power7+, Sparc T5, Itanium 9500 and even Sparc64-X processors will support LRDIMM sticks
as well.
In January of this year, when AMD was trumpeting its LRDIMM support with the Opteron 6200s, Inphi vice president of marketing
Paul Washkewicz said that a 1.35 volt LRDIMM with 32 GB of RAM consumes up to 20 percent less power than a 1.5 volt RDIMM with
16 GB of memory, something system admins need to take into account.
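The interesting part of Washkewicz's claim is the power-per-gigabyte math: 20 percent less power for twice the capacity. A quick sketch makes the point; the absolute RDIMM wattage here is an assumed figure for illustration, since only the relative saving was quoted:

```python
# Power-per-gigabyte comparison based on the figures quoted above.
# The 10 W RDIMM draw is an assumption; only the 20% delta comes from Inphi.
rdimm_gb, rdimm_watts = 16, 10.0
lrdimm_gb = 32
lrdimm_watts = rdimm_watts * 0.80   # "up to 20 percent less power"

rdimm_w_per_gb = rdimm_watts / rdimm_gb      # 0.625 W/GB
lrdimm_w_per_gb = lrdimm_watts / lrdimm_gb   # 0.25 W/GB

print(f"RDIMM:  {rdimm_w_per_gb:.3f} W/GB")
print(f"LRDIMM: {lrdimm_w_per_gb:.3f} W/GB")
```

Under those assumptions, the LRDIMM delivers each gigabyte for 60 percent less power than the RDIMM, which matters at rack scale.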
To realize just how much of a performance boost and performance/watt advantage fat LRDIMMs offer over regular RDIMMs,
Inphi commissioned the server performance techs at Principled Technologies to run the DVD Store version 2.1 (DS2) benchmark
on a four-socket server to see what effect memory had on performance atop virtualized instances.
The DS2 2.1 test suite was announced in December 2011, and simulates an online music store with a Web front-end and a
database backend running on CentOS 6.3 Linux. You can use Microsoft SQL Server, Oracle, MySQL, or PostgreSQL databases,
and the front end has PHP web pages and C# drivers.
DS2 is already part of the VMmark 2.0 workload stack that VMware uses to test its hypervisor. In this particular instance,
Inphi and Principled Technologies loaded up VMware's ESXi 5.0 hypervisor on its servers and then ran multiple instances of the
DS2 test atop Windows Server 2008 R2 SP1 Enterprise Edition and Microsoft SQL Server 2012.
Each instance of the DS2 test had a 50 GB database. The DS2 virtualization benchmark was run on an IBM System x3750 M4
server sporting four Xeon E5-4650 processors running at 2.7 GHz, each with 20 MB of on-die L3 cache.