Information Technology News.


Scientists say OpenFlow's architecture is inefficient, consumes too much power




August 22, 2016

Computer scientists say that OpenFlow's architecture is inefficient, limiting its performance while consuming too much power.

That's the conclusion from researchers at Australia's Data61 and Sydney University, who assessed four major OpenFlow controllers: NOX, Maestro, Floodlight and Beacon.

OpenDaylight was also tested but not reported: “The performance was too low to provide any meaningful comparison,” the scientists said.

None of the controllers tested got anywhere close to line speed, whether running on a network processor based on Tilera chips, or on a Xeon E5-2450-based server.

On the CBench software defined networking (SDN) controller benchmark, the best the Tilera setup achieved was just under five million requests per second, compared to a line rate of 29 million requests per second.
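A back-of-the-envelope check, using only the figures reported above, shows just how far below line rate that best result sits:

```python
# Fraction of line rate achieved in the reported CBench results
# (figures are the ones quoted above, in requests per second).
line_rate = 29_000_000    # reported line rate
tilera_best = 5_000_000   # best Tilera result ("just under" five million)

print(f"Tilera best: {tilera_best / line_rate:.0%} of line rate")
# → Tilera best: 17% of line rate
```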

It seems Intel's long years of work on packet processing are paying off: on the x86 setup, Beacon hit 20 million requests per second, while the other controllers peaked at 7 million requests per second.

But since SDN controllers have to deal with traffic as flows (meaning they have to remember MAC addresses so as to track conversations, compared to an Ethernet switch that only has to know which port it's forwarding traffic to), network scalability is also a huge issue.
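The "remember MAC addresses" part is the learning-switch behaviour the benchmark exercises. A minimal sketch (illustrative only; the class and method names here are hypothetical, not taken from any of the tested controllers) shows why the table grows with every unique MAC seen:

```python
# Illustrative sketch of a learning switch's MAC table.
# An SDN controller keeps a structure like this per switch; with ten
# million unique MACs in the benchmark, the table itself gets enormous.

class LearningSwitch:
    def __init__(self):
        # The "key data structure" the paper points at: a hash table
        # mapping MAC address -> switch port.
        self.mac_to_port = {}

    def handle_packet(self, src_mac, dst_mac, in_port):
        # Learn: remember which port the source MAC arrived on.
        self.mac_to_port[src_mac] = in_port
        # Forward: known destination -> its port; unknown -> flood.
        return self.mac_to_port.get(dst_mac, "FLOOD")

sw = LearningSwitch()
sw.handle_packet("aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02", 1)  # dst unknown: flood
sw.handle_packet("aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01", 2)  # dst learned: port 1
```

A plain Ethernet switch does the same lookup in dedicated hardware; a software controller pays for every hash operation in CPU time, which is why scalability suffers.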

None of the controllers stayed near their peak performance with ten million unique MAC addresses in the benchmark, and the Java-based controllers (Beacon and Floodlight) staggered to a near-complete halt at that scale.

The issue is that OpenFlow itself has architectural inefficiencies. The whitepaper's authors pick out serialisation, I/O threading and, “the key data structure of the learning switch application: the hash table”.

But serialisation is by far the biggest overhead. Even the most efficient controller “spends about 18 to 20 percent of its time in packet serialisation. This limitation is inherent to the object-oriented design principle of these controllers. They all treat each single packet as an individual object, a limitation that induces an unaffordable per-packet overhead.”

The authors' proposal is for a new SDN controller design: “Treat arriving packets with pre-allocated buffers rather than new objects,” they write. Controllers should also “be aware of the hardware characteristics to limit cache misses on multi-core platforms, or exploit the network-on-chip on many-core platforms”.
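The contrast between the two designs can be sketched briefly (this is an illustration of the general pre-allocation idea under assumed names, not the paper's actual code):

```python
# (a) Object-per-packet: what the tested controllers do -- a fresh
#     allocation for every arriving packet.
class PacketObject:
    def __init__(self, raw):
        self.raw = bytearray(raw)        # new heap object per packet

def object_per_packet(packets):
    return [PacketObject(p) for p in packets]   # N packets -> N allocations

# (b) Pre-allocated buffers: what the authors propose -- allocate a
#     fixed pool once, then copy each packet into a reused buffer.
class BufferPool:
    def __init__(self, count, buf_len):
        self.bufs = [bytearray(buf_len) for _ in range(count)]  # allocated up front
        self.next = 0

    def process(self, raw):
        buf = self.bufs[self.next % len(self.bufs)]
        self.next += 1
        buf[:len(raw)] = raw             # reuse: copy in place, no new object
        return buf

pool = BufferPool(count=4, buf_len=64)
pool.process(b"packet-1")                # no per-packet allocation
```

In a garbage-collected runtime like the JVM, design (a) also generates constant collector pressure, which is consistent with the Java-based controllers faring worst at scale.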

Source: Sydney University and Data61.





       © IT Direction. All rights reserved.