Oracle thinks it will win in an IP lawsuit against Rimini Street
August 15, 2014
Oracle is saying that recent findings by the judge hearing its case against third-party
software support firm Rimini Street prove that its IP has been violated.
Judge Larry Hicks' orders aren't online yet, but Oracle has let the world know what it considers
a significant victory. That includes a quote to the effect that the judge found it undisputed
that Rimini Street engaged in massive theft of Oracle's intellectual property.
Oracle says that today's findings also demonstrate that Rimini Street ran 200 unlicensed
copies of its database software.
Oracle's attorney Geoff Howard's statement indicates that Oracle expects Rimini Street to start
talking about specific damages payments (read: an out-of-court settlement) or face a full-fledged jury trial.
The case directly hinges on how third-party support providers like Rimini Street go about their business.
Oracle has in the past accused Rimini Street of signing up for Oracle support, downloading as much
of the stuff on the support database as it can, then using that material to provide its support service
to Oracle users at rather less than Oracle charges.
Oracle asserts that behavior directly violates the terms of its contracts and represents theft of
its intellectual property.
Which may well be true. A more complex issue is that software vendors make decent profits from support
and don't like the idea that anyone can undercut them, especially when it's their own software.
A certain hostility towards third-party providers is therefore almost inevitable, as is some
scrutiny under competition law.
Rimini Street had not issued a response to the new decision at the time of writing. The case
continues, and we will keep you posted.
In other IT news
Microsoft's Visual Studio Online services for software developers are in the midst of a
total service outage that has lasted for more than five hours, and nobody seems to know when it will
be back up.
Microsoft is blaming a database bug for the interruption. The services, which were launched in November 2013 to
coincide with the general availability of Visual Studio 2013, are hosted on Microsoft's Azure
cloud platform, and are available at a number of monthly subscription levels.
The services offer a variety of cloud-based enhancements for the Visual Studio IDE, including
source code version control, a hosted build service, load testing, a basic online code-editing
environment, and telemetry data that can give insights into application performance and stability.
The stability of Visual Studio Online itself wasn't so appealing yesterday, however. Beginning
at around 7:30 am Pacific Time, users began reporting trouble accessing any of the services, and performance
issues when they were able to log in.
Before long, the Azure service status page was reporting that Visual Studio Online was experiencing
a multi-region full service interruption.
After about an hour of investigating the various issues, Microsoft reported that its DevOps
engineers had decided to roll back some changes they had made to the infrastructure in the last
24 hours, in hopes that this would address the problems.
"The actual root cause is still under investigation, but initial analysis is indicating that a contention
in our core database seems to be causing blocking and performance issues in the services," the team
wrote on the Visual Studio Online service blog.
"Our DevOps teams have identified a couple of mitigation steps and currently going thru
various validations as a group," Microsoft added.
But even after reverting those changes, Microsoft reported that its database issues seemed
to persist, and the last time we checked in, the services were still down. We'll keep you updated on
how the issue gets resolved.
In other IT news
Hitachi Data Systems (HDS) says it has acquired Sepaton, a maker of high-end enterprise deduplicating backup appliances.
Sepaton used to have a rewarding deal with Hewlett-Packard until HP's own StoreOnce came along.
The news was announced by Sean Moser, a senior vice president for HDS' global portfolio and product management.
Sepaton will now be run as a wholly owned independent subsidiary. HDS calls it "a leader
in incredibly fast, scalable, and cost-efficient purpose-built backup appliances (PBBA)."
But Gartner's Magic Quadrant for deduplicating backup appliances suggests otherwise: Sepaton is positioned
just over the boundary from the Niche Players into the Visionaries category, well behind ExaGrid and
nowhere near the Leaders quadrant.
But Sepaton (read backwards, its name spells 'No Tapes') must be a good fit for HDS.
HDS says the acquisition "is part of a larger data protection strategy that offers enterprise
customers comprehensive data protection that is scalable and integrated."
Further, Moser says: "This acquisition better enables us to help our customers reduce the cost
of protection, enable more data to be protected against disaster, and offer greater flexibility
where or how it is protected."
He adds that "Sepaton and HDS have partnered for many years, and have a number of mutual customers."
The PBBA market's growth prospects are good: IDC reckons a compound annual growth rate (CAGR) of 19.2 percent,
taking it to a $5.3 billion market in 2015, according to Moser.
Sepaton has some 3,000 customers and that number should increase, HDS says, with its channel
pitching the Sepaton message to its customers.
HDS also wants to extend Sepaton's technology: "We intend to leverage [the Sepaton] team to aggressively
develop next generation solutions that will integrate with other HDS assets, such as storage and
copy creation, and management software."
How much did HDS pay for the company? It isn't saying. Sepaton was founded in 1999 and its total
funding is said to be $98 million.
HDS has taken a low-key approach to the announcement: it hasn't issued a press release, just the Moser blog post. The news
is blazoned across the home page of Sepaton's website but mentioned only as a subsidiary item on HDS'
site, with a link to the Moser blog.
It gives HDS, which has a record of shrewd acquisitions, a high-end PBBA gap-filler, and the company
may extend the product down-market.
In other IT news
For the past several years, engineers have kept finding ways to improve the venerable TCP/IP
protocol suite, to the internet community's benefit. The latest proposal, from researchers at the University of
Cincinnati, addresses shortcomings in the protocol's behavior on wireless networks.
Wireless, rather than Ethernet cable, is now the default internet
connection for most devices.
The TCP/IP protocol is a key component of network performance. But as the researchers from the
University of Cincinnati's Centre of Distributed and Mobile Computing recently discovered, the
combination of a lossy physical layer and TCP/IP's congestion control algorithms can hamper performance.
The proposal put forward by the university is dubbed TCP-Forward, and is built on the prior
work done in a protocol called TCP-Vegas, which uses the round trip time (RTT) to help decide whether
a network is experiencing congestion or packet loss.
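To illustrate the RTT-based approach, a Vegas-style sender can estimate how many of its packets are sitting in network queues by comparing the throughput it would expect at the minimum observed RTT with the throughput it actually sees, and then nudge the congestion window accordingly. The sketch below is a hypothetical simplification; the function names and the alpha/beta thresholds are illustrative, not taken from the paper:

```python
# Hypothetical sketch of Vegas-style congestion inference: estimate how many
# extra packets are sitting in network queues by comparing expected and
# actual throughput. Names and thresholds here are illustrative.

def vegas_diff(cwnd_pkts, base_rtt_s, current_rtt_s):
    """Estimated number of this flow's packets queued in the network."""
    expected = cwnd_pkts / base_rtt_s    # throughput with empty queues
    actual = cwnd_pkts / current_rtt_s   # throughput actually observed
    return (expected - actual) * base_rtt_s

def adjust_cwnd(cwnd_pkts, base_rtt_s, current_rtt_s, alpha=2, beta=4):
    """Vegas-style linear window adjustment based on the queue estimate."""
    diff = vegas_diff(cwnd_pkts, base_rtt_s, current_rtt_s)
    if diff < alpha:   # queues nearly empty: probe for more bandwidth
        return cwnd_pkts + 1
    if diff > beta:    # queues building up: back off before loss occurs
        return cwnd_pkts - 1
    return cwnd_pkts   # operating in the desired region

# A 20-packet window with a 100 ms base RTT now seeing 150 ms RTTs
# implies queuing, so the window shrinks.
print(adjust_cwnd(20, 0.100, 0.150))  # → 19
```

The key property is that rising RTTs signal congestion before any packet is dropped, so random wireless losses need not be misread as congestion.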
To TCP-Vegas, TCP-Forward adds fountain codes, which are already present in protocol stacks such
as IEEE 802.11n (Wi-Fi), CDMA 2000, EV-DO, 3GPP and 10GBase-T Ethernet.
The paper explains: “Instead of using a feedback channel to notify the source if the sent data
successfully arrives at the destination, network redundancy is introduced to make sure the destination
node can get the original data even if the transmission channel drops some packets.”
TCP-Forward is also useful in multi-hop wireless networks, because only the receiver, and not
the intermediate nodes, needs to worry about packet decoding.
“Redundancy is introduced at the sender to provide reliability, but explicit acknowledgements are
still sent by the receiver. But different from regular TCP congestion control algorithms, this acknowledgement
is only used to move the coding window, which in turn slides the TCP congestion window. Duplicate acknowledgements
will not incur TCP to reduce its transmission rate, nor will it change any parameters used in the congestion avoidance
algorithms in TCP,” the paper adds.
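To see why fountain coding removes the need to retransmit specific packets, here is a toy LT-code-style sketch: the sender emits random XOR combinations of source blocks, and the receiver recovers the originals from whichever coded symbols happen to arrive. Everything here (modeling packets as small integers, the function names, the peeling decoder) is an illustrative assumption, not TCP-Forward's actual scheme:

```python
# Toy fountain code (LT-code style): coded symbols are XORs of random
# subsets of source blocks. The receiver never asks for a particular
# retransmission; any sufficiently large set of symbols will do.
import random

def encode(blocks, n_symbols, seed=0):
    """Emit n_symbols coded symbols, each (indices, XOR of those blocks)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_symbols):
        degree = rng.randint(1, len(blocks))
        idx = rng.sample(range(len(blocks)), degree)
        acc = 0
        for i in idx:
            acc ^= blocks[i]
        out.append((tuple(idx), acc))
    return out

def decode(symbols, k):
    """Peeling decoder: solve degree-1 symbols, substitute, repeat."""
    known = {}
    pending = [[set(idx), val] for idx, val in symbols]
    changed = True
    while changed and len(known) < k:
        changed = False
        for sym in pending:
            idx, val = sym
            solved = idx & known.keys()
            for i in solved:          # substitute blocks already recovered
                val ^= known[i]
            idx -= solved
            sym[1] = val
            if len(idx) == 1:         # degree-1: this symbol IS a block
                i = idx.pop()
                if i not in known:
                    known[i] = val
                    changed = True
    return [known.get(i) for i in range(k)]

blocks = [3, 14, 15, 9]
received = encode(blocks, 12, seed=7)   # pretend some other symbols were lost
recovered = decode(received, len(blocks))
```

Any block the decoder does recover matches the original, and with enough received symbols full recovery becomes overwhelmingly likely. That is the property that lets intermediate hops in a multi-hop wireless path forward symbols without decoding them.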
The result is a protocol that can maintain better throughput than protocols like TCP-Reno or TCP/NC
(network coding) in the presence of packet loss, while at the same time maintaining far better latency
than TCP/NC solutions because of its lower processing overhead, particularly in handling larger packets.