Researchers address shortcomings in TCP/IP's behavior on Wi-Fi networks

August 14, 2014

For the past several years, engineers have kept finding ways to improve the venerable TCP/IP protocol, to the internet community's benefit. The latest effort, from researchers at the University of Cincinnati, addresses shortcomings in the protocol's behavior on wireless networks.

Since the advent of wireless networking, Wi-Fi rather than Ethernet cable has become the default internet connection for most devices.

The TCP/IP protocol is a key determinant of network performance. But as researchers from the University of Cincinnati's Center for Distributed and Mobile Computing recently discovered, the combination of a lossy physical layer and TCP's congestion control algorithms can hamper network performance.

The proposal put forward by the university is dubbed TCP-Forward, and it builds on prior work on a protocol called TCP-Vegas, which uses round-trip time (RTT) to help decide whether a network is experiencing congestion or merely losing packets.
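
As a rough illustration of that Vegas-style idea of reading congestion from rising RTT rather than from packet loss, here is a minimal sketch; the thresholds, names and packet bookkeeping are our own illustrative assumptions, not parameters taken from the TCP-Forward paper.

```python
# Minimal sketch of the Vegas-style congestion check that TCP-Forward builds on.
# ALPHA/BETA and the packet-based bookkeeping are illustrative values only.

ALPHA = 2.0   # lower threshold: extra packets allowed to sit in network queues
BETA = 4.0    # upper threshold: beyond this, rising RTT is treated as congestion

def vegas_adjust(cwnd: float, base_rtt: float, current_rtt: float) -> float:
    """Return an updated congestion window based on RTT measurements.

    expected: throughput with no queuing      (cwnd / base_rtt)
    actual:   throughput seen this round trip (cwnd / current_rtt)
    The gap between them estimates how many packets are queued in the
    network, so growing delay, rather than packet loss, signals congestion.
    """
    expected = cwnd / base_rtt
    actual = cwnd / current_rtt
    queued = (expected - actual) * base_rtt   # estimated packets held in queues

    if queued < ALPHA:    # network looks under-used: grow the window linearly
        return cwnd + 1
    if queued > BETA:     # delay is building up: back off gently
        return cwnd - 1
    return cwnd           # otherwise hold steady

# Example: base RTT 50 ms, current RTT 80 ms, window of 20 packets.
print(vegas_adjust(20, 0.050, 0.080))  # prints 19: RTT inflation shrinks the window
```

The point is that rising delay, not a dropped packet, is what triggers a slowdown, so the random losses typical of wireless links are less likely to be misread as congestion.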

To TCP-Vegas, TCP-Forward adds Fountain Codes, which are already present in protocol stacks such as IEEE 802.11n (Wi-Fi), CDMA 2000, EV-DO, 3GPP and 10 GBase-T Ethernet.

The paper explains-- “Instead of using a feedback channel to notify the source if the sent data successfully arrives at the destination, network redundancy is introduced to make sure the destination node can get the original data even if the transmission channel drops some packets.”
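
As a toy illustration of that principle, the sketch below has the sender emit random XOR combinations of the source packets so the receiver can rebuild the originals from whichever coded packets survive. It is our own simplified example: real Fountain Codes such as LT or Raptor codes use carefully tuned degree distributions, and none of this is drawn from the paper's implementation.

```python
# Toy forward-redundancy sketch in the spirit of Fountain Codes: the sender
# emits random XOR combinations of the k source packets, and the receiver
# recovers the originals by Gaussian elimination over GF(2).
import random

def encode(source, n_coded, seed=0):
    """Emit n_coded packets, each the XOR of a random non-empty subset of source."""
    rng = random.Random(seed)
    k = len(source)
    coded = []
    for _ in range(n_coded):
        mask = rng.randrange(1, 1 << k)                  # subset as a k-bit mask
        payload = bytes(len(source[0]))
        for i in range(k):
            if mask >> i & 1:
                payload = bytes(a ^ b for a, b in zip(payload, source[i]))
        coded.append((mask, payload))
    return coded

def decode(coded, k):
    """Recover the k source packets from any coded packets whose masks span
    GF(2)^k; returns None if the survivors are not yet sufficient."""
    rows = [[mask, bytearray(payload)] for mask, payload in coded]
    pivots = [None] * k
    for col in range(k):
        pivot = next((r for r in rows if r[0] >> col & 1), None)
        if pivot is None:
            return None                                  # need more coded packets
        rows.remove(pivot)
        # Gauss-Jordan step: clear this bit from every other row, pivots included.
        for r in rows + [p for p in pivots if p is not None]:
            if r[0] >> col & 1:
                r[0] ^= pivot[0]
                r[1] = bytearray(a ^ b for a, b in zip(r[1], pivot[1]))
        pivots[col] = pivot
    return [bytes(p[1]) for p in pivots]

source = [b"packet-0", b"packet-1", b"packet-2"]   # equal-length source packets
coded = encode(source, n_coded=8, seed=1)
survivors = coded[:5]                              # pretend the channel dropped three
print(decode(survivors, k=3))  # the originals, or None if the survivors are dependent
```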

TCP-Forward is also useful in multi-hop wireless networks, because only the receiver, and not the intermediate nodes, needs to worry about packet decoding.

“Redundancy is introduced at the sender to provide reliability, but explicit acknowledgements are still sent by the receiver. But different from regular TCP congestion control algorithms, this acknowledgement is only used to move the coding window, which in turn slides the TCP congestion window. Duplicate acknowledgements will not incur TCP to reduce its transmission rate, nor will it change any parameters used in the congestion avoidance algorithms in TCP,” the paper adds.
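
The contrast with classic loss-triggered congestion control can be sketched as follows; the class and attribute names are our own illustrative choices, not code from the paper.

```python
# Behavioural sketch of the ACK handling described above.

class RenoLikeSender:
    """Classic TCP behaviour: duplicate ACKs are read as loss and cut the rate."""

    def __init__(self):
        self.cwnd = 10          # congestion window, in packets
        self.last_ack = 0
        self.dup_count = 0

    def on_ack(self, ack_no):
        if ack_no == self.last_ack:
            self.dup_count += 1
            if self.dup_count == 3:                    # triple duplicate ACK
                self.cwnd = max(1, self.cwnd // 2)     # fast retransmit, rate cut
        else:
            self.last_ack, self.dup_count = ack_no, 0
            self.cwnd += 1                             # simplified window growth


class ForwardLikeSender:
    """TCP-Forward-style behaviour as described in the paper: an ACK only moves
    the coding window, which in turn slides the congestion window's base.
    Duplicate ACKs never shrink cwnd or touch congestion-avoidance state."""

    def __init__(self):
        self.cwnd = 10          # sized by the Vegas-style RTT logic instead
        self.window_base = 0    # left edge of the coding (and congestion) window

    def on_ack(self, ack_no):
        # Losses are absorbed by the coded redundancy, so no rate cut here.
        self.window_base = max(self.window_base, ack_no)
```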

The result is a protocol that can maintain better throughput than protocols like TCP-Reno or TCP/NC (network coding) in the presence of packet loss, while delivering far lower latency than TCP/NC solutions thanks to its lower processing overhead, particularly when handling larger packets.

In other IT news

Hybrid flash/disk array supplier Nimble Storage said this morning that it has refreshed its product line.

As suggested by a few of its customers, the company has replaced its entry-level and midrange products with ones driven by faster processors.

Hybrid arrays combine flash speed with disk capacity at a lower cost per GB, delivering much of the performance of all-flash arrays along with greater storage capacity.

There are three hybrid array startups, all with specialized software and hardware: pre-IPO Tintri and Tegile, and post-IPO Nimble.

Nimble had a three-product lineup in June: the CS-200, the CS-400 and the high-end CS-700, which can be clustered as an added feature.

The CS-200 and CS-400 lines have now been discontinued, replaced by the CS-300 and CS-500 lines.

The CS-300 is positioned as the base performance line, the CS-500 as the high-performance line and the CS-700 as the extreme performer, with clustered CS-700s delivering the highest performance of all.

Nimble says the CS-210 and CS-215 provide value and performance for small to medium-sized IT organizations or remote offices, for workloads such as Microsoft Exchange and VDI.

The CS-300 is ideal for midsize IT organizations or distributed sites of larger companies. It offers the best capacity per dollar for workloads such as Microsoft applications, VDI, or virtual server consolidation. The CS-300 delivers 1.6 times more IOPS than the CS-215.

The CS-500 offers advanced performance for larger-scale deployments or IO-intensive workloads, like larger-scale VDI, and Oracle or SQL Server databases, and provides the best performance and IOPS per dollar.

Overall, the CS-500 achieves five times the performance of the CS-215. The CS-700 is designed for consolidating multiple large-scale critical applications with aggressive performance demands.

It delivers approximately seven times the IOPS of the CS-215. The CS-300 and CS-500 both utilize Intel Sandy Bridge CPUs and deliver 50 percent more performance than the products they replace, the company claims.

They can use an All-Flash Shelf to support tens of terabytes of flash in a scale-out cluster. Networking options are 10GBase-T, Gigabit Ethernet and Gigabit Ethernet SFP, with the ability to interchange connectivity options in the future.

Nimble introduced the CS-700 and all-flash shelf in June of this year. In two months, it has refreshed its entire array line with higher-performing equipment.

The move should enable the company to make more waves among mainstream storage array customers, who, according to various comments we've heard recently, find their existing suppliers' solutions more expensive and slower than the gear from the new hybrid array vendors. Both the CS-300 and CS-500 are available now.

In other IT news

Graphics standards body the Khronos Group has called on the IT industry to help draft the next generation of the OpenGL specification, a potential major rewrite that's expected to help unify the OpenGL development model for desktop PCs, mobile devices and the internet in general.

"Work on the detailed proposals and the various design implementations are already underway, and any company interested to participate in this initiative is strongly encouraged to join Khronos for a voice and a vote in the development process," the group said in a press release.

In a phone conversation last week, Khronos CEO Neil Trevett said that one reason OpenGL needs a major update is that the hardware landscape is dramatically different today from when the standard was first implemented in 1994.

"OpenGL is 20 years old already and back then, RealityEngine hardware from Silicon Graphics was the typical target for the first generation of OpenGL," Trevett said.

"Obviously, hardware has significantly changed since then, especially on mobile devices. You see multiple-core CPUs, quite advanced GPUs, shared memory, etc, etc," he added.

Mobile devices are much on everyone's mind in the IT industry these days, with console-quality gaming on mobile phones just around the corner.

But Trevett sees the overall demand for OpenGL broadening even further, and at a faster pace.

"More and more platforms are becoming 3D capable," Trevett said. "It's not just desktop gaming PCs and workstations anymore. It's mobile devices, and it's web browsers everywhere, and it is cloud rendering, feeding across the web to clients. And there's quite a big market selling GPUs to the automotive industry.

One goal for the next-generation OpenGL standardization effort will be to simplify the OpenGL ecosystem itself and make it easier to develop applications for a wider range of targets.

For example, Khronos currently maintains the full OpenGL specification for desktop PCs, whose version 4.5 was announced just yesterday, alongside a separate OpenGL ES specification for use on mobile devices.

Trevett hopes that under the new standard, this division will no longer be necessary. Additionally, subtle inconsistencies in how individual vendors implement the OpenGL ES specification mean that overall application performance can vary from device to device, something the new standard also aims to address.
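
To make the desktop/mobile split concrete, here is a minimal, hypothetical example of the same trivial fragment shader written once against desktop GLSL 4.50 (OpenGL 4.5) and once against GLSL ES 3.00 (OpenGL ES 3.0); the strings below are our own illustration, not Khronos sample code.

```python
# Illustration of the split between the desktop and ES specifications:
# the same trivial shader must be written slightly differently for each.

# Desktop OpenGL 4.5 uses GLSL 4.50 ("#version 450 core").
DESKTOP_FRAGMENT_SHADER = """
#version 450 core
out vec4 fragColor;
void main() {
    fragColor = vec4(1.0, 0.5, 0.2, 1.0);
}
"""

# OpenGL ES 3.0 uses GLSL ES 3.00 ("#version 300 es") and requires an
# explicit default precision for floats in fragment shaders.
MOBILE_FRAGMENT_SHADER = """
#version 300 es
precision mediump float;
out vec4 fragColor;
void main() {
    fragColor = vec4(1.0, 0.5, 0.2, 1.0);
}
"""
```

Under the unified specification Trevett describes, this kind of duplication is exactly what the group hopes to eliminate.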


"Khronos has definitely taken it on board for this generation. We don't just focus on the spec but focus on how we're going to make it even more reliable across multiple vendors," Trevett said. "And we're attacking that issue at a very fundamental level."

The new standard should have a more streamlined API so that it's easier to implement consistently, Trevett said, and it should also have a standardized intermediate language that's decoupled from the hardware acceleration that's available on each platform.

Khronos also plans to improve its conformance testing methodology so that implementation problems can be spotted before they go to market.

The group is even considering releasing its conformance tests as open source so that interested parties can help improve them, although it hasn't committed to this yet.

Trevett said work on the next generation of OpenGL is already well underway and progress is moving rapidly. The effort has also attracted interest from unusual quarters, with "triple-A" game makers like Blizzard, EA and Epic joining the typical hardware and tool vendors at the table for the first time.

"Out of all of the APIs we've ever designed at Khronos, including the original OpenGL ES, this is the most positive energy and momentum that I've ever seen," Trevett said.

Source: The University of Cincinnati.
