Nuances between SQL and NoSQL elements are fading over time
May 16, 2014
We wrote about Postgres this week, and judging by
the latest beta of the open source PostgreSQL database, the nuances between the SQL and NoSQL concepts are
fading over time, and that trend shows no sign of reversing.
With the beta release of the open source PostgreSQL 9.4 database yesterday, system admins
have been given more of the features typically associated with NoSQL systems like MongoDB.
The main new feature is the JSONB ("binary JSON") storage format, which lets the
database handle JSON-formatted data more efficiently.
Heavy use of the JSON data format is one of the things that distinguishes typical NoSQL
systems, such as MongoDB, from their relational counterparts.
By supporting the format from version 9.2 onwards, PostgreSQL lets DB admins use a format
that is easily parsed by interpreters to store their data, giving them some of the flexibility
typically associated with document databases.
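To make that concrete, here is a minimal sketch, in Python with the psycopg2 driver, of what document-style storage in PostgreSQL looks like; the connection string, table and sample data are our own assumptions, not anything from the PostgreSQL announcement.

```python
# A minimal sketch of document-style storage with PostgreSQL's jsonb type.
# The database name, table and sample document are illustrative assumptions.
import json
import psycopg2

conn = psycopg2.connect("dbname=demo")   # assumed local database
cur = conn.cursor()

# A jsonb column gives schemaless, document-style storage inside an ordinary table.
cur.execute("CREATE TABLE IF NOT EXISTS events (id serial PRIMARY KEY, doc jsonb)")
cur.execute("INSERT INTO events (doc) VALUES (%s)",
            [json.dumps({"user": "alice", "action": "login", "count": 3})])

# The ->> operator extracts a field as text, so documents stay queryable with SQL.
cur.execute("SELECT doc ->> 'action' FROM events WHERE doc ->> 'user' = %s", ["alice"])
print(cur.fetchone()[0])                  # prints: login

conn.commit()
cur.close()
conn.close()
```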
By storing JSON objects in binary form, the database can manipulate them more efficiently. "JSONB uses
an internal storage format that is not exposed to clients.
JSONB values are sent and received using the JSON text representation," explained Robert Haas, chief
architect at PostgreSQL company EnterpriseDB.
MongoDB's "BSON" format is unable to represent an integer or floating-point number with greater
than 64 bits of precision, whereas JSONB can, he explained.
"JSONB can represent arbitrary JSON values. The PostgreSQL community believes that limitations
of this type are unacceptable, and wants to provide the full power of JSON to our users," he explained.
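Haas's precision point is easy to illustrate. The sketch below, again an assumption-laden Python example rather than anything from the announcement, round-trips a number far wider than 64 bits through a jsonb cast and gets the same digits back, because jsonb stores numbers using PostgreSQL's arbitrary-precision numeric type.

```python
# Round-trip a number wider than 64 bits through jsonb; the digits survive
# because jsonb stores numbers as PostgreSQL's numeric type. Illustrative only.
import psycopg2

conn = psycopg2.connect("dbname=demo")    # assumed local database
cur = conn.cursor()

big = "123456789012345678901234567890"    # far beyond 64-bit range
cur.execute("SELECT (%s::jsonb) ->> 'n'", ['{"n": ' + big + '}'])
print(cur.fetchone()[0] == big)            # True: no precision was lost

cur.close()
conn.close()
```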
Though this may seem a bit trivial, it's worth remembering that the overall capabilities of
JSONB will have a huge effect on the efficiency of JSON-based systems, especially as they grow.
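Part of that efficiency story is indexing: jsonb columns can be indexed with GIN, so containment queries stay fast as the data set grows. The following sketch reuses the hypothetical events table from the earlier example; the index name and query are likewise illustrative.

```python
# Index the hypothetical events table's jsonb column with GIN so that
# containment queries (@>) avoid full table scans as the data grows.
import psycopg2

conn = psycopg2.connect("dbname=demo")
cur = conn.cursor()

cur.execute("CREATE INDEX events_doc_idx ON events USING gin (doc)")

# @> asks whether each stored document contains the given JSON fragment.
cur.execute("SELECT count(*) FROM events WHERE doc @> %s", ['{"action": "login"}'])
print(cur.fetchone()[0])

conn.commit()
cur.close()
conn.close()
```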
Since a lot of younger developers have grown up writing data in this format, it's understandable
that the PostgreSQL project would take an opinionated view on it.
"With JSONB and other enhancements in version 9.4, we now have full document storage and improved
performance with less effort," explained Craig Kerstiens, a developer at Salesforce-backed
Besides JSONB, the release also includes a data change streaming API, which relies on the new
changeset extraction (logical decoding) capability, among numerous other features described in the
project's release notes.
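For the curious, changeset extraction can be exercised directly from SQL in 9.4 through logical replication slots and the bundled test_decoding plugin. The rough sketch below is our own illustration, not sample code from the project; it assumes wal_level is set to logical and reuses the hypothetical events table from above.

```python
# Rough sketch of changeset extraction via a logical replication slot.
# Assumes wal_level = logical, max_replication_slots > 0, and the events table above.
import psycopg2

conn = psycopg2.connect("dbname=demo")
conn.autocommit = True
cur = conn.cursor()

# Create a slot that decodes committed changes with the bundled test_decoding plugin.
cur.execute("SELECT pg_create_logical_replication_slot('demo_slot', 'test_decoding')")

cur.execute("""INSERT INTO events (doc) VALUES ('{"action": "signup"}')""")

# Pull the textual change stream accumulated since the slot was created.
cur.execute("SELECT data FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL)")
for (change,) in cur.fetchall():
    print(change)    # BEGIN / table public.events: INSERT ... / COMMIT

cur.execute("SELECT pg_drop_replication_slot('demo_slot')")
cur.close()
conn.close()
```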
With this release, the PostgreSQL community appears to feel confident that as systems like
MongoDB grow in popularity, relational systems will keep on advancing as well.
"We definitely see PostgreSQL evolving with new capabilities popularized by NoSQL," Haas added.
"PostgreSQL is a relational database, so it is very flexible and extensible, and as new formats
like JSON emerge and become popular, it's natural for us to introduce support for those in
"And with the HSTORE contribution module, the PostgreSQL DB has had the capacity to support
key/value stores since 2006, pre-dating many of the NoSQL advances," he added.
"The implication for NoSQL solutions is that innovation around the format in which you
store your data is not enough-- you've got to come up with truly novel capabilities for working
with the data, which is more challenging."
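The hstore module Haas mentions is still available as a contrib extension, and a quick sketch shows how close it feels to a simple key/value store. As with the earlier examples, the table and data here are our own illustrative assumptions.

```python
# Key/value storage with the hstore contrib module; table and data are assumptions.
import psycopg2

conn = psycopg2.connect("dbname=demo")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS hstore")   # ships with PostgreSQL's contrib
cur.execute("CREATE TABLE IF NOT EXISTS settings (id serial PRIMARY KEY, attrs hstore)")
cur.execute("INSERT INTO settings (attrs) VALUES ('theme => dark, lang => en')")

# -> pulls a single value out by key, much like a NoSQL key/value lookup.
cur.execute("SELECT attrs -> 'theme' FROM settings")
print(cur.fetchone()[0])                                # prints: dark

conn.commit()
cur.close()
conn.close()
```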
In other IT news
Adobe is blaming a maintenance failure for the 27-hour service outage in its Creative Cloud suite
that left video and photo editors unable to log into online services.
“The outage happened during database maintenance activity and affected services that require users
to log in with an Adobe password,” Adobe said in a blog post, apologizing for the issue.
“We understand that the time it took to restore the service has been frustrating, but we
wanted to be as thorough as possible. We have identified the root cause of this problem and
are putting standards in place to prevent this from happening again,” the company added.
We asked Adobe what that “root cause” might be, but the company hadn’t gotten back to us at
the time we wrote this entry.
That leaves the software firm's own maintenance as the top culprit, a rather dim prospect for
users worried about whether this sort of thing might happen again.
If Twitter is any indication, graphics professionals are not feeling very forgiving, even now
that the service is back online.
In other IT news
Initially installed in November of last year, the Lawrence Livermore National Laboratory Catalyst
supercomputer is now officially open for industry workloads.
At that time, the lab talked about features like the 800 GB of Flash attached to each of its
304 nodes via PCIe, in addition to the per-node 128 GB of DRAM.
No matter how you look at it, this is a massive supercomputer, capable of some 150 trillion
floating-point calculations per second.
The LLNL design maps the solid-state drives into application memory, so the flash behaves like
an extension of each node's DRAM.
In big data analysis applications, fast memory becomes a top priority, and this supercomputer
is no exception.
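LLNL has not published the code behind that design here, but the general idea of mapping flash-backed storage into a process's address space can be illustrated with a plain mmap call. The sketch below is only a loose analogy with a hypothetical file path, not the lab's actual software stack.

```python
# Loose illustration of mapping flash-backed storage into a process's address
# space with mmap; the file path is hypothetical and this is not LLNL's software.
import mmap
import os

path = "/flash/scratch/dataset.bin"   # assumed pre-allocated file on the node's PCIe flash
fd = os.open(path, os.O_RDWR)
size = os.fstat(fd).st_size           # map the whole file

buf = mmap.mmap(fd, size)             # pages fault in from flash on demand

# The application now touches the region as if it were DRAM; the OS shuttles
# pages between flash and memory behind the scenes.
buf[0:4] = b"\x00\x01\x02\x03"
print(buf[0:4])

buf.close()
os.close(fd)
```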
The lab's scientists now seem satisfied with how the Cray-Intel appliance is working, and
are seeking partnerships in bioinformatics, big data analysis, graph networks, machine learning
and natural language processing, or for exploring new approaches to application checkpointing,
in-situ visualisation, out-of-core algorithms and data analytics.
The program will be offered to American companies through LLNL's HPC Innovation Centre.
Here are some more features and specs:
304 dual-socket compute nodes
2.4 GHz 12-core Xeon E5-2695 v2 CPUs
A total of 7,776 cores
128 GB of DRAM per node
800 GB flash per node
Dual-rail Quad Data Rate (QDR-80) interconnect
150 teraflops for the full Cray CS300 cluster
We will keep you in the loop as this special project evolves over time, and will pass along
feedback from the supercomputer's users.
In other IT news
In Japan, the agency that predicts tsunamis and earthquakes isn't keen on making the cloud
the core of its supercomputing operations.
While cloud vendors have been touting supercomputing appliances for several years already,
Tatsuya Kimura, head of the office of international affairs at the Japan Meteorological Agency,
questioned their suitability for the critical and time-sensitive predictions his agency has to
make to protect Japan's citizens.
Since the magnitude 9.0 earthquake and the powerful tsunami that devastated Japan in March 2011,
the agency has had to be able to decide, within just a minute or two of another such event,
whether or not to issue a tsunami alert.
As well as providing Japan's weather services including tracking typhoons, the agency also issues
earthquake warnings for the Tokai area, where the tectonic plates are particularly well understood.
“It’s a very time-critical service,” he told journalists at the agency's Tokyo headquarters
today. “We can’t say the warning was late because of the cloud service. I think it’s a little
unlikely to move to the cloud.”
JMA’s current supercomputer is an 847-teraflop machine built by Hitachi and housed in Tokyo itself;
Fujitsu provides communications and other ICT services. Kimura said the machine has no redundant
backup, so if it failed, the agency would initially have to rely on weather data from other agencies,
such as the UK’s Met Office, for its predictions.
The agency’s tsunami warnings are decided by humans, who rely on a previously compiled database of
models covering different magnitudes and depths of quake across key locations.
Incredibly, Japan can experience up to 1,000 earthquakes a day. The system for tsunami warnings was
overhauled in the wake of the devastating 2011 quake, which resulted in a tsunami that killed over 10,000 people.
Kimura said that 9.0 magnitude earthquake was off the scale; the agency’s seismometers were “saturated”
and initially could not give a reading for its magnitude, leading to an underestimation of the danger of the
resulting tsunami.
Kimura said that under the agency’s new protocol, if a tsunami of more than one meter in height is
expected, it issues an immediate evacuation notice for the areas likely to be hit.
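To make the protocol concrete, the decision flow Kimura describes amounts to looking up a pre-computed scenario and escalating when the expected wave tops one meter. The toy sketch below is entirely our own illustration; the scenario table and its figures are invented placeholders, not JMA data.

```python
# Toy version of the lookup-and-escalate flow: find a pre-computed scenario for
# the quake and evacuate if the expected wave exceeds one meter.
# The scenario table and every number in it are invented placeholders.

# (magnitude, depth in km, region) -> expected tsunami height in meters
SCENARIOS = {
    (7.0, 10, "Tokai"): 0.4,
    (8.0, 10, "Tokai"): 1.8,
    (9.0, 20, "Tohoku"): 6.0,
}

def tsunami_advice(magnitude, depth_km, region):
    height = SCENARIOS.get((magnitude, depth_km, region), 0.0)
    if height > 1.0:
        return "EVACUATE: expected tsunami {} m".format(height)
    return "monitor: expected tsunami {} m".format(height)

print(tsunami_advice(8.0, 10, "Tokai"))   # EVACUATE: expected tsunami 1.8 m
```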
While questioning cloud providers' suitability for underpinning its warning system, Kimura
did say that the agency uses cloud services for disseminating information, and will do so with
imaging data from its upcoming new weather satellite, due to launch in October.
But cloud vendors are unlikely to be able to change the agency’s mind any time soon. The agency
upgrades its supercomputer every five years, and has just put the advisory team together for the
next refresh in four years’ time. Outsourcing the service is absolutely not on the agenda.
Source: The Postgres Development Team.