Oracle accused of unfair competition, breaking US antitrust laws
April 28, 2014
Oracle has been accused of unfair competition and of breaking U.S. antitrust laws over its Solaris support business.
The claims are made in a counter-lawsuit lodged by Solaris support provider Terix, which had previously been dragged into
court by Oracle for allegedly stealing the database giant's copyrighted code.
The Terix suit claims that Oracle violated California's unfair competition laws and that it attempted to operate an illegal
monopoly in violation of Section 2 of the U.S. Sherman Act.
It was the same Sherman Act that the U.S. Department of Justice accused Microsoft of violating over the bundling of Windows and Internet
Explorer in that company's antitrust case in the 1990s.
Terix's case has been lodged in the Northern District of California, San Jose Division. The Terix claim states: "Oracle's
efforts include (among other things) the use of Oracle's natural monopoly over Solaris patches (including error corrections,
security fixes, and other updates) and Oracle's natural monopoly over firmware for Sun/Oracle hardware to force customers to
purchase software and hardware support from Oracle, even in the many instances when those customers could and would otherwise
obtain superior software and hardware support from third-party service providers such as TERiX at a significantly lower cost."
"Senior Oracle personnel have not only admitted but in fact touted Oracle's intent. Indeed, at a press briefing on the day Oracle
acquired Sun, Oracle's executive vice president of global customer services announced: 'We believe we should be the ones to support
our customers. If you're a third-party support provider offering multivendor support, we're coming to get you,'" the complaint read.
According to Terix, Oracle has succeeded in undermining and weakening third party providers of software and hardware support,
including Terix, by forcing customers to sign up only with Oracle.
Oracle unleashed its case against Terix and Maintech in July 2013, accusing them of stealing its copyrighted code -- Solaris patches,
updates and bug fixes -- through their work with customers.
Oracle also accused the two companies of misrepresenting themselves to customers by claiming they were allowed to support Solaris.
Oracle wants unspecified damages over copyright infringement, false advertising, breach of contract, intentional interference with
prospective economic relations, and unfair competition.
Oracle saw part of its case thrown out by the judge in January, as the court ruled Terix and Maintech had not duped users by
saying they were allowed to fix and update Solaris.
In other IT news
Micron said late yesterday that it's now offering a middle-of-the-road flash drive for servers.
The M-500DC uses 20nm MLC NAND, and comes in 120 GB, 240 GB, 480 GB and 800 GB storage capacities. It seems to be a reworked
version of the M-500 and M-550 personal SSD products, however.
The M-500DC comes with a 6 Gbit/s SATA interface, not the fastest in these 12 Gbit/s days but still good enough for most workloads.
Comparing the performance of these three SSDs shows what Micron has done to make the M-500/550 server-ready.
The M-550 is an upgrade to the M-500. The M-500DC is downgraded from the M-500 and M-550 in raw performance terms so that it
can have a longer life.
That's the optimisation Micron has carried out: the M-500DC has significantly slower random IOPS and sequential I/O performance
than its forebears.
The M-500DC (server version) comes in both 1.8-inch and 2.5-inch form factors. Micron says it has "EXPERT features including adaptive
read management (ARM/OR), data path protection, redundant array of independent NAND (RAID), and reduced command access latency (ReCAL)", among others.
Micron says the M-500DC is designed to withstand 24/7/365 heavy duty cycles, and can sustain two drive fills a day, every day,
for five years. We don't yet have the MTBF (mean time between failures) rating, however.
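Taken at face value, that endurance claim is easy to quantify. Here is a back-of-the-envelope sketch (in Python; the function name is ours, not Micron's) of the total data written at two drive fills a day for five years:

```python
# Rough endurance estimate implied by "two drive fills a day for five years".
# Capacities are the quoted M-500DC models; decimal units (1 PB = 1,000,000 GB).
def total_writes_gb(capacity_gb, fills_per_day=2, years=5):
    """Total data written, in GB, at a constant fills-per-day rate."""
    return capacity_gb * fills_per_day * 365 * years

for capacity in (120, 240, 480, 800):
    pb = total_writes_gb(capacity) / 1_000_000  # GB -> PB
    print(f"{capacity} GB model: ~{pb:.2f} PB written over five years")
```

For the top-end 800 GB model that works out to roughly 2.9 PB of writes over the drive's warranted life.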
It looks like a competently designed mid-range server SSD, and Micron terms it affordable, although no prices have been
revealed as yet. The M-500DC is available now from Micron's channel partners.
In other IT news
Earlier today, HP warned its customers that one of its firmware updates can accidentally kill the network interface cards (NICs) in 100 Series
ProLiant server models. The Service Pack for ProLiant 2014.02.0 can potentially disable HP Broadcom-based network adapters in G2 to G7 series servers, HP said.
A machine relying on a dead NIC is not much use at all and may very well require a motherboard swap to fix if the slain silicon is a
built-in component. The affected adapters range from PCIe cards to integrated controllers.
HP's online support centre admitted that applying the firmware upgrade on some vulnerable systems could have a disastrous effect, rendering
the server useless in communicating with the outside world.
"On certain HP ProLiant servers, certain HP Broadcom-based network adapters listed may become non-operational when they are updated with
the Comprehensive Configuration Management firmware Version 7.8.21 using our firmware smart component, HP Smart Update Manager or the HP Service
Pack for ProLiant 2014.2.0," HP warned.
It also added that, in extreme cases, such a network adapter may require a hardware replacement to fully recover the NIC.
System admins who download and install the patch on a vulnerable machine will shortly discover that the server cannot detect its own
network adapter -- an awkward problem to fix, especially when trying to subsequently load in the replacement firmware.
For some admins, the warning HP tacked onto the service pack's web page came too late in the day-- the firmware was released on April 18, giving
unsuspecting IT departments plenty of time to crash the affected servers.
Richard Brain, technical director at security firm ProCheckup, said he was advised by HP to swap out a bug-hit motherboard, since the
network adapter is embedded on it -- but this was neither a cheap nor a quick solution to the problem.
"To replace a server motherboard in a G2 to G7 series ProLiant machine takes the best part of half a day," he said, pointing out that the
fans, the fan tray, the drives and drive bays, power controllers, PCIe cards, CPUs, memory and so on must all be removed.
An HP spokeswoman told us that upon becoming aware of the issue, "HP removed the components causing the failure", but didn't give any
technical details of the screwup.
She said that customers who completed the firmware update on the at-risk systems should contact HP for remediation, as "in this case the
components causing the failure may need to be removed".
"HP expects that, due to the nature of the issue, some customers could experience this problem," she said, adding that HP is confident its
response team handles problems quickly and efficiently.
HP added that it's still trying to work out how many customers are at risk with this problem, and will issue an update by April 30.
In other IT news
auDA wants to introduce DNSSEC into the Australian domain name space, signing the .au domain in its production environment as the first step in a four-month rollout.
.au Domain Administration Ltd (auDA) is the governing authority and industry self-regulatory body for the .au domain segment in Australia.
DNSSEC has been possible for years, but has been held back by industry inertia. Under DNSSEC, a DNS (domain name system) record is
signed, allowing resolvers to authenticate the relationship between domain names and IP addresses where they are hosted.
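The sign-and-validate flow can be sketched in miniature. Real DNSSEC uses public-key signatures (RRSIG records, with a chain of trust anchored at the signed root zone); the sketch below substitutes a shared-secret HMAC purely to illustrate how a resolver checks that a record has not been altered -- the zone key, names and addresses here are invented for the example.

```python
import hashlib
import hmac

# Simplified stand-in for DNSSEC signing: real zones publish public-key
# RRSIG records, but the recompute-and-compare validation flow looks the same.
ZONE_KEY = b"example-zone-signing-key"  # invented key, for illustration only

def sign_record(name: str, rtype: str, rdata: str) -> str:
    """Sign one DNS record with the zone key."""
    msg = f"{name}|{rtype}|{rdata}".encode()
    return hmac.new(ZONE_KEY, msg, hashlib.sha256).hexdigest()

def validate(name: str, rtype: str, rdata: str, signature: str) -> bool:
    """What a validating resolver does: recompute and compare signatures."""
    return hmac.compare_digest(sign_record(name, rtype, rdata), signature)

sig = sign_record("www.example.com.au", "A", "203.0.113.10")
print(validate("www.example.com.au", "A", "203.0.113.10", sig))   # True
print(validate("www.example.com.au", "A", "198.51.100.99", sig))  # False: record tampered
```

A record altered in transit fails validation, which is exactly the property a signed .au zone would provide.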
But the slowly evolving rollout has gathered some small momentum in response to the increasing use of DNS as an attack vector (for example, via redirections).
In 2013, Google began validating DNSSEC records in its public DNS resolvers. The issue for the typical system admin is that DNSSEC is
needed all the way up the chain, from their own site back to the root zone, meaning that the auDA rollout is a vital step in the deployment
of the protocol for .au domains.
auDA explains that it has taken a cautious approach over the last year and a half because the protocol introduces a new level of risk for
registry operators. DNSSEC requires the inclusion of cryptographic keys in the DNS and, at times, frequent editing of a zone file. This
level of interaction and the complexity of cryptographic keys greatly increase the risk of error during a zone change or update.
A DNS error made to a signed zone can cause a zone to appear offline or bogus to validating resolvers, the organisation writes.