Oracle moving forward with its Project Jigsaw
July 3, 2014
Oracle said this morning that it's moving forward with its Project Jigsaw.
The initiative is a major undertaking that aims to allow Java developers to break their programs
down into independent and interoperable modules.
Initially, Jigsaw was intended to be a major feature of Java 8, but by 2012 Oracle had decided
that waiting for Jigsaw to be ready would delay the entire Java 8 release, so work on the module
system was postponed until a later version.
Not much has been heard about Jigsaw since, but in a blog post yesterday, Mark Reinhold,
Oracle's chief Java architect, said that the Java Community was ready to begin "Phase Two"
of the Project Jigsaw effort.
"It's now time to switch gears and focus on creating a production-quality design and implementation
suitable for JDK 9 and then Java SE 9, as previously proposed," Reinhold wrote.
Reinhold has drafted a new document outlining the goals and requirements of Project Jigsaw. If
the developers get it right, Reinhold says that Jigsaw will not only make it easier to scale the
Java platform down to small devices, but it will also improve the security, performance, and maintainability
of Java applications.
But it won't be that easy. Back in 2011, when Jigsaw was still expected to be part of Java 8,
Reinhold said it would be a "revolutionary" release, as compared to the more "evolutionary" Java 7.
In the end, the Java 8 we actually got was mostly evolutionary as well, with lambda expressions
being the only truly significant new feature.
And Java 8 arrived two years late, even after cutting loose the Project Jigsaw boat-anchor.
Now work is set to begin in earnest on making Jigsaw ready for inclusion in Java 9, and Reinhold
is once again warning us of the difficulty this project has in store.
Still, he says it's important to press onward. "Jigsaw as a whole will bring enormous changes
to the JDK. It would be unwise to wait until it's completely finished before merging any of it," he wrote.
"Our intent is to proceed in large steps, each of which will have a corresponding JEP (JDK Enhancement
Proposal)," he added.
The first three steps will involve determining how to break the JDK down into modules, modularizing
the source code, and then finally modularizing the binary images.
The fourth and last step will be to introduce the module system itself, in the form that
developers will use for other programs outside the JDK.
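The final syntax is still being designed, but based on Jigsaw's earlier prototypes, a module declaration might eventually look something like the following sketch (the module and package names here are hypothetical, and the shipped syntax could well differ):

```java
// module-info.java -- a hypothetical module descriptor for a small application.
// It names the module, declares which other modules it depends on,
// and lists which of its own packages other modules may use.
module com.example.app {
    requires java.sql;          // depend on the platform's SQL module
    exports com.example.api;    // make this package visible to other modules
}
```

Everything outside the exported packages would be hidden from other modules, which is where the hoped-for security and maintainability gains would come from.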
Oracle is hoping for a 2016 release for Java 9, but given that there were three years between
versions 7 and 8 (even after dropping Jigsaw), that may be a tall order. We'll keep you posted.
In other IT news
VMware has made some changes to its Virtual SAN support after some components it
had recommended were found to be no longer up to par.
VMware's notification of the change suggests it is being made because some low-end IO
controllers it once recommended offer very low IO throughput, so low that the probability of
the controller queue filling up is high. When the controller IO queue fills, IO operations
time out and the VMs become unresponsive, the company said.
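The failure mode VMware describes is a classic bounded-queue problem: if a controller's IO queue is shallow and requests arrive faster than they can be retired, new requests are refused or stall. A minimal sketch of that effect, using Java's ArrayBlockingQueue as a stand-in for a controller queue (the capacity of 2 is an arbitrary illustration, not a real controller's queue depth):

```java
import java.util.concurrent.ArrayBlockingQueue;

// Illustrates how a shallow queue starts rejecting work once it fills up.
public class ShallowQueueDemo {
    public static void main(String[] args) {
        // A stand-in for a controller with a very low queue depth.
        ArrayBlockingQueue<String> ioQueue = new ArrayBlockingQueue<>(2);

        System.out.println(ioQueue.offer("read-1"));   // true: accepted
        System.out.println(ioQueue.offer("write-1"));  // true: accepted
        // The queue is now full; further requests are refused immediately,
        // which at the storage layer shows up as timeouts and stalled VMs.
        System.out.println(ioQueue.offer("read-2"));   // false: rejected
    }
}
```

A deeper queue (or a slower arrival rate) avoids the rejection; that, in essence, is the threshold VMware says its revised compatibility list will now enforce.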
And VMware seems to have known about these issues for some time. A VSAN user posted VMware support's
response to his problems with a Dell H310 controller.
That response, posted nearly a month ago, says: “While this controller was certified and is
in our Hardware Compatibility List, its use means that your VSAN cluster was unable to cope with
both a rebuild activity and running production workloads.”
The email from VMware goes on to say: “To avoid issues such as this in the future, we are re-evaluating
our Hardware Compatibility List. Our intention is to support hardware that meets a minimum threshold
to work well during rebuild/resync scenarios.”
VMware has since listed nineteen controllers that, as of July 1st, are no longer supported in VSAN.
The explanation for their withdrawal follows: “As part of VMware’s ongoing testing and certification
efforts on Virtual SAN compatible hardware, VMware has decided to remove these controllers from the
Virtual SAN compatibility list. While fully functional, these controllers offer too low IO throughput
to sustain the performance requirements of most VMware environments.”
“Because of the low queue depth offered by these controllers, even a moderate IO rate could result in IO operations timing out, especially
during disk rebuild operations,” it said.
It added: “In this event, the controller may be unable to cope with both a rebuild activity
and running Virtual Machine IO, causing elongated rebuild times and slow application responsiveness.
To avoid issues such as the one described above, VMware is removing these controllers from the Hardware
Compatibility List.”
In other IT news
Cloudera has talked up four major IT companies behind an initiative to link two open source
projects together for the good of the Hadoop community.
The newly proposed partnership between IBM, Intel, Databricks, MapR and Cloudera to port
Apache Hive onto Apache Spark is due to be announced sometime this week at the Spark Summit
in San Francisco.
We even heard a few rumors of that last week after stumbling across a Cloudera proposal
to run Hive on Spark.
For those not familiar with the long list of codenames in the Hadoop world, Spark is a general-purpose
cluster computing system originally developed at the University of California, Berkeley, that can run
on top of the Hadoop Distributed File System.
It can be used as an alternative data processor to Hadoop MapReduce and is said to be
about 100 times faster than MapReduce when running in memory, or about ten times faster when running on
disk.
Hive, meanwhile, is data warehouse software that uses a SQL-like language
to query data stored in Hadoop.
Both projects are important, with Spark seen by many as a potential successor to MapReduce
and Hive viewed as a likely candidate for accomplishing SQL-on-Hadoop work.
By lifting Hive up on Spark, Cloudera is hoping to force some consolidation in the Hadoop ecosystem,
and in doing so is placing less emphasis on one of Cloudera's own projects: Impala.
Justin Erickson, Cloudera's director of product management, said the company has decided
to push Hive because it wants to "go and combine the forces of the Spark community with the
Hive community to make batch processing in Hadoop faster."
"Overall, Hive is the standard choice for doing batch processing jobs on Hadoop right now,"
said Matt Brandwein, the company's head of product marketing.
"We want to cut the fragmentation in the community. People are getting a bit aware of the
fact there are so many options for so many different objects. Spark is the successor," he added.
The move has big ramifications for the Hadoop ecosystem and for Cloudera itself. In the recent
past, Cloudera has been skeptical of the value of Hive.
In a blog post late last year, Mike Olson, the company's chief strategy officer, wrote: "Decades of
experience had taught people to expect real-time responses from their databases. Hive, built on
MapReduce, simply couldn't deliver, and that was an issue for us."
To address the perceived shortcomings of Hive, Cloudera built its own software, Impala. But with the
new partnership between Cloudera, IBM, MapR, Databricks and Intel, it seems Cloudera has
warmed somewhat to Hive and will use the technology as its main way of engaging with the wider
Hadoop community, while continuing to develop Impala as a way to generate revenue.
Another little complication in this story is that there already is a Hive-on-Spark
project called Shark. But Cloudera feels that Shark has diverged too much from mainstream Hive.
"To be sure, Shark took an approach of replacing several key components of Hive, including
the query planner and other elements of Hive," Cloudera said.
"The result of this was that maintaining compatibility with Hive became very difficult, as changes
to Hive cannot be transparently back-ported to Shark," the company explained.
"With the Hive-on-Spark approach, we are making a much more limited change to only the physical
query planner, which means that the Hive community can make changes and add new functionality to Hive
and have it work transparently with Spark, MapReduce, or Tez.
"As such, the maintenance burden for Hive on Spark will be much lower, and the work will be more deeply
integrated with the core Hive community."
Speaking of Tez, Cloudera's move also puts pressure on Hortonworks, which helped develop
the competing data-processing framework.
But Cloudera says Spark, like Tez, is merely an option. As the company explains in a FAQ document,
"It is not a goal for the Spark execution backend to replace Tez or MapReduce. It is healthy
for the Hive project for multiple backends to coexist. Users have a choice whether to use Tez,
Spark or MapReduce. Each has different strengths depending on the use case. And the success of Hive
does not completely depend on the success of either Tez or Spark."
When contacted for comment, Hortonworks said the decision to pour development resources into
Hive on Spark is broadly a good thing.
"It's an admission that the open source community driven model is the right one," said Shaun Connolly,
the company's vice president of strategy.
With this new initiative, Cloudera can develop a better understanding of the future direction
of the software and more carefully hone its business to reap the benefits of its growing user base.