Oracle's new VM tool better at managing Windows virtual machines
July 4, 2014
Oracle said earlier this morning that the latest release of its free VM tool is better at managing Windows virtual machines and handling guest storage.
Oracle's new VM Server for x86 is free, but the company isn't throwing the kitchen sink at the
product, as evidenced by the fact that this week it released version 3.3, more than two years after
January 2012's version 3.2 release.
Yet the product has fans who felt that even version 3.2 was sufficiently well-featured to challenge
vSphere and Hyper-V.
To be sure, the 3.3 version upgrade looks to give Oracle a decent chance of winning more users, thanks to
improved support for Windows guests, Oracle Linux and Oracle Solaris.
Workloads running under Solaris can now be pointed at storage sources using
Fibre Channel, iSCSI, ZFS volume, local disk and NFS, which should be welcomed by those who dislike
running separate storage pools for different environments.
Oracle says that it has also tightened security, reduced I/O requirements generated by virtual
machines and simplified network design with new methods to define and operate VLANs.
There's even a new, HTML5-powered management console. Oracle's been very busy on the virtualization front lately.
In late June, it released a new version of its virtual compute appliance, a converged
infrastructure play now capable of using more recent Intel chippery, and also announced new ZFS
storage appliances said to be capable of booting 16,000 virtual machines in under seven minutes.
So the question is, will this make Oracle a contender with the leading virtualization services providers?
At present, Oracle doesn't rate a mention in studies like IDC's quarterly EMEA server virtualization tracker.
With VMware expected to reveal a major vSphere refresh within weeks and rumors starting to emerge that
Microsoft will refresh Windows Server during 2015, it may take more than biennial updates to get
Oracle into the top tier. Then again, we shall see.
In other IT news
Oracle said this morning that it's moving forward with its Project Jigsaw.
The initiative is a major undertaking that aims to allow Java developers to break their programs
down into independent and interoperable modules.
Initially, Jigsaw was intended to be a major feature of Java 8, but by 2012 Oracle decided
that waiting for Jigsaw to be ready would delay the entire Java 8 release, so work on the module
system was postponed until a later version.
Not much has been heard about Jigsaw since, but in a blog post yesterday, Mark Reinhold,
Oracle's chief Java architect, said that the Java Community was ready to begin "Phase Two"
of the Project Jigsaw effort.
"It's now time to switch gears and focus on creating a production-quality design and implementation
suitable for JDK 9 and then Java SE 9, as previously proposed," Reinhold wrote.
Reinhold has drafted a new document outlining the goals and requirements of Project Jigsaw. If
the developers get it right, Reinhold says that Jigsaw will not only make it easier to scale the
Java platform down to small devices, but it will also improve the security, performance, and maintainability
of Java applications.
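The exact Jigsaw syntax was still being worked out at the time of writing, but the core idea is that each module declares which modules it depends on and which of its packages other modules may see. A hypothetical sketch of such a declaration (module and package names invented for illustration):

```java
// module-info.java -- illustrative only; the final Jigsaw
// declaration syntax had not been settled when this was written.
module com.example.reports {
    // this module can only compile and run if com.example.data is present
    requires com.example.data;

    // only this package is visible to other modules;
    // every other package stays internal to the module
    exports com.example.reports.api;
}
```

Dependencies that today hide inside classpath ordering would become explicit and checkable, which is where the promised security and maintainability gains come from.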
But it won't be that easy. Back in 2011, when Jigsaw was still expected to be part of Java 8,
Reinhold said it would be a "revolutionary" release, as compared to the more "evolutionary" Java 7.
In the end, the Java 8 we actually got was mostly evolutionary as well, with lambda expressions
being the only truly significant new feature.
And Java 8 arrived two years late, even after cutting loose the Project Jigsaw boat-anchor.
Now work is set to begin in earnest on making Jigsaw ready for inclusion in Java 9, and Reinhold
is once again warning us of the difficulty this project has in store.
Still, he says it's important to press onward. "Jigsaw as a whole will bring enormous changes
to the JDK. It would be unwise to wait until it's completely finished before merging any of it.
Our intent is to proceed in large steps, each of which will have a corresponding JEP (JDK Enhancement
Proposal)," he added.
The first three steps will involve determining how to break the JDK down into modules, modularizing
the source code, and then finally modularizing the binary images.
The fourth and last step will be to introduce the module system itself, in the form that
developers will use for other programs outside the JDK.
Oracle is hoping for a 2016 release for Java 9, but given that there were three years between
versions 7 and 8 (even after dropping Jigsaw), that may be a tall order. We'll keep you posted.
In other IT news
VMware has made some changes to its virtual storage area networks after some components it
recommended were found to be no longer up to par.
VMware's notification of the change suggests it is being made because some low-end IO
controllers it once recommended offer very low IO throughput.
So low, in fact, that the probability of the controller's IO queue filling up is high. When the
queue fills, IO operations time out and the VMs become unresponsive, the company said.
And VMware seems to have known about these issues for some time. A VSAN user posted VMware support's
response to his problems with a Dell H310 controller.
That response, posted nearly a month ago, says: “While this controller was certified and is
in our Hardware Compatibility List, its use means that your VSAN cluster was unable to cope with
both a rebuild activity and running production workloads.”
The email from VMware goes on to say: “To avoid issues such as this in the future, we are re-evaluating
our Hardware Compatibility List. Our intention is to support hardware that meets a minimum threshold
to work well during rebuild/resync scenarios.”
VMware has since listed nineteen controllers that, as of July 1st, are no longer supported in VSAN.
The explanation for their withdrawal follows: “As part of VMware’s ongoing testing and certification
efforts on Virtual SAN compatible hardware, VMware has decided to remove these controllers from the
Virtual SAN compatibility list. While fully functional, these controllers offer too low IO throughput
to sustain the performance requirements of most VMware environments.”
“Because of the low queue depth offered by these controllers, even a moderate IO rate could result in IO operations timing out, especially
during disk rebuild operations,” it said.
It added: “In this event, the controller may be unable to cope with both a rebuild activity
and running Virtual Machine IO, causing elongated rebuild times and slow application responsiveness.
To avoid issues such as the one described above, VMware is removing these controllers from the Hardware
Compatibility List.”
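The failure mode VMware describes is easy to picture with a bounded queue: when IO requests arrive faster than a shallow controller queue can drain, the excess requests stall and eventually time out. A minimal sketch, assuming (purely for illustration) a queue depth of 25 and a rebuild burst of 600 requests with no draining in between:

```java
import java.util.concurrent.ArrayBlockingQueue;

public class QueueDepthDemo {
    // Simulates a burst of IO requests hitting a controller whose queue
    // depth is far smaller than the burst, and returns how many requests
    // the queue could not absorb.
    static int overflowedRequests(int queueDepth, int burstSize) {
        ArrayBlockingQueue<Integer> queue = new ArrayBlockingQueue<>(queueDepth);
        int overflowed = 0;
        for (int io = 0; io < burstSize; io++) {
            // offer() returns false when the queue is full -- in a real
            // controller, that request would stall and may time out
            if (!queue.offer(io)) {
                overflowed++;
            }
        }
        return overflowed;
    }

    public static void main(String[] args) {
        // illustrative numbers, not measurements of any specific controller
        System.out.println("overflowed: " + overflowedRequests(25, 600));
        // prints: overflowed: 575
    }
}
```

In practice the queue does drain between requests, but during a rebuild the arrival rate stays high for hours, so a low queue depth leaves almost no headroom before IOs start timing out.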
In other IT news
Cloudera has lined up four major IT companies behind an initiative to link two open source
projects together for the good of the Hadoop community.
The newly proposed partnership between IBM, Intel, Databricks, MapR and Cloudera to port
Apache Hive onto Apache Spark is due to be announced sometime this week at the Spark Summit
in San Francisco.
We even heard a few rumors of that last week after stumbling across a proposal by Cloudera
to port Hive onto Spark.
For those not familiar with the long list of codenames in the Hadoop world, Spark is a general-purpose
cluster computing system originally developed at the University of California, Berkeley and based on
the Hadoop File System.
It can be used as an alternative data processor to Hadoop MapReduce and is said to be
about 100 times faster than MapReduce when running in memory, or ten times faster when running on
disk.
Hive, meanwhile, is data warehouse software that uses a SQL-like language
to query data stored in Hadoop.
Both projects are important, with Spark seen by many as a potential successor to MapReduce
and Hive viewed as a likely candidate for accomplishing SQL-on-Hadoop work.
By lifting Hive up on Spark, Cloudera is hoping to force some consolidation in the Hadoop ecosystem,
and in doing so is placing less emphasis on one of Cloudera's own projects: Impala.
Justin Erickson, Cloudera's director of product management, said the company has decided
to push Hive because it wants to "go and combine the forces of the Spark community with the
Hive community to make batch processing in Hadoop faster."
"Overall, Hive is the standard choice for doing batch processing jobs on Hadoop right now,"
said Matt Brandwein, the company's head of product marketing.
"We want to cut the fragmentation in the community. People are getting a bit aware of the
fact there are so many options for so many different objects. Spark is the successor," he added.
The move has big ramifications for the Hadoop ecosystem and for Cloudera itself. In the recent
past, Cloudera has been skeptical of the value of Hive.
In a blog post late last year, Mike Olson, the company's chief strategy officer, wrote: "Decades of experience had taught people
to expect real-time responses from their databases. Hive, built on MapReduce, simply couldn't deliver, and
that was an issue for us."
To address the perceived shortcomings of Hive, Cloudera built its own software, Impala, but with the
new partnership between Cloudera, IBM, MapR, Databricks and Intel, it seems like Cloudera has
warmed somewhat to Hive and will use the technology as its main way of dealing with the wider
Hadoop community, while still continuing to develop Impala as a way to generate some revenue.