

Oracle acquires desktop software virtualizer GreenBytes


May 17, 2014

Oracle has acquired the desktop software virtualizer GreenBytes, which uses ZFS technology, for an undisclosed amount.

GreenBytes' software is based on its own highly rated deduplication engine and replication technology.

The software can run on flash hardware; GreenBytes sold a VDI flash appliance but got out of the hardware business in August 2013.

The company was founded in 2007 by CEO Bob Petrocelli and took in some $37 million in funding from its executives and venture capital backers.

Unless the company was distressed, that could suggest the backers received a payout of four to five times their investment, roughly $150 million to $185 million. That sounds like a lot of cash for a software-only VDI supplier, especially with Atlantis making waves in the market.

We haven't had any recent announcements from GreenBytes about its business progress, so we can't tell whether it was doing well.

There may be some distress here, which would reduce the amount of cash or shares that Oracle paid; then again, it's hard to tell.

The deal announcement said GreenBytes' technology “is expected to enhance Oracle's ZFS Storage Appliances,” which could mean the ZFS appliance getting GreenBytes' deduplication engine. Oracle said it “is currently reviewing the existing GreenBytes product roadmap” and will provide guidance to customers in due course.

In other IT news

We wrote about Postgres this week, and judging by the latest beta of the open source PostgreSQL database, the distinctions between the SQL and NoSQL approaches are fading over time, and the trend shows no sign of reversing.

With the beta release of the open source PostgreSQL 9.4 database yesterday, system admins have been given more of the features typically associated with NoSQL systems like MongoDB.

The main new feature is the JSONB ("binary JSON") storage format, which lets the database handle JSON-formatted data more efficiently.

Use of the JSON format is one of the things that distinguishes typical NoSQL systems, like MongoDB, from their relational counterparts.

By supporting the format from version 9.2 onwards, PostgreSQL lets DB admins use a format that is easily parsed by interpreters to store their data, giving them some of the flexibility typically associated with document databases.
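
To make that concrete, here is a minimal Python sketch of storing and querying JSON documents in PostgreSQL. It assumes psycopg2 is installed and a local database named "demo" exists; the table and field names are illustrative, not something from the article.

```python
# Minimal sketch: JSON documents inside a relational table.
# Assumptions: psycopg2 installed, a local database called "demo",
# and an illustrative table name; none of these come from the article.
import psycopg2
from psycopg2.extras import Json

conn = psycopg2.connect(dbname="demo")
cur = conn.cursor()

# A jsonb column (use "json" on 9.2/9.3) gives document-style storage.
cur.execute("""
    CREATE TABLE IF NOT EXISTS events (
        id      serial PRIMARY KEY,
        payload jsonb NOT NULL
    )
""")

doc = {"user": "alice", "action": "login", "meta": {"ip": "10.0.0.1"}}
cur.execute("INSERT INTO events (payload) VALUES (%s)", (Json(doc),))

# The ->> operator extracts a field as text, so documents can be
# filtered with ordinary SQL predicates.
cur.execute(
    "SELECT payload ->> 'user' FROM events WHERE payload ->> 'action' = %s",
    ("login",),
)
print(cur.fetchall())        # e.g. [('alice',)]

conn.commit()
cur.close()
conn.close()
```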

By storing JSON objects in a binary form, DB admins can manipulate them more efficiently. “JSONB uses an internal storage format that is not exposed to clients. JSONB values are sent and received using the JSON text representation,” explained Robert Haas, chief architect at PostgreSQL company EnterpriseDB.

MongoDB's "BSON" format is unable to represent an integer or floating-point number with greater than 64 bits of precision, whereas JSONB can, he explained.

"JSONB can represent arbitrary JSON values. The PostgreSQL community believes that limitations of this type are unacceptable, and wants to provide the full power of JSON to our users," he explained.

Though this may seem a bit trivial, it's worth remembering that the overall capabilities of JSONB will have a huge effect on the efficiency of JSON-based systems, especially as they grow.
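
A small sketch of both points, run from Python against an assumed local 9.4 instance (the connection details are not from the article): jsonb preserves arbitrary-precision numbers, and the binary form supports operators such as containment.

```python
# Sketch of two jsonb properties discussed above, against an assumed
# local PostgreSQL 9.4 database named "demo".
import psycopg2

conn = psycopg2.connect(dbname="demo")
cur = conn.cursor()

# A 30-digit integer survives the round trip through jsonb intact,
# because jsonb stores numbers as PostgreSQL's arbitrary-precision numeric.
cur.execute("SELECT '123456789012345678901234567890'::jsonb")
print(cur.fetchone()[0])

# Containment (@>): does the left document contain the right one?
cur.execute("""
    SELECT '{"a": 1, "b": {"c": 2}}'::jsonb @> '{"b": {"c": 2}}'::jsonb
""")
print(cur.fetchone()[0])     # True

cur.close()
conn.close()
```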

Since a lot of younger developers have grown up writing data in this format, it's understandable that the PostgreSQL project takes an opinionated view of it.

"With JSONB and other enhancements in version 9.4, we now have full document storage and improved performance with less effort," explained Craig Kerstiens, a developer at Salesforce-backed Heroku.

Besides JSONB, the release also includes the “Data Change Streaming API,” which relies on advances in changeset extraction, among numerous other features described in the release notes.
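
For readers curious what changeset extraction looks like in practice, below is a rough Python sketch using the SQL-level logical decoding functions in 9.4 and the built-in test_decoding output plugin. The slot name and connection are assumptions, and the server must be configured with wal_level = logical and at least one free replication slot.

```python
# Rough sketch of changeset extraction (logical decoding) in PostgreSQL 9.4.
# Assumes a superuser/replication connection to an assumed "demo" database
# with wal_level = logical and max_replication_slots >= 1; the slot name is
# an arbitrary example.
import psycopg2

conn = psycopg2.connect(dbname="demo")
conn.autocommit = True
cur = conn.cursor()

# Create a slot that captures row-level changes from the write-ahead log.
cur.execute(
    "SELECT * FROM pg_create_logical_replication_slot(%s, %s)",
    ("demo_slot", "test_decoding"),
)

# ... after INSERT/UPDATE/DELETE activity elsewhere, pull the decoded changes.
cur.execute(
    "SELECT data FROM pg_logical_slot_get_changes(%s, NULL, NULL)",
    ("demo_slot",),
)
for (change,) in cur.fetchall():
    print(change)

# Drop the slot so the server stops retaining WAL for it.
cur.execute("SELECT pg_drop_replication_slot(%s)", ("demo_slot",))
cur.close()
conn.close()
```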

With this release, the PostgreSQL community appears to feel confident that as systems like MongoDB grow in popularity, relational systems will keep on advancing as well.

"We definitely see PostgreSQL evolving with new capabilities popularized by NoSQL," Haas added. "PostgreSQL is a relational database, so it is very flexible and extensible, and as new formats like JSON emerge and become popular, it's natural for us to introduce support for those in PostgreSQL."

"And with the HSTORE contribution module, the PostgreSQL DB has had the capacity to support key/value stores since 2006, pre-dating many of the NoSQL advances," he added.

"The implication for NoSQL solutions is that innovation around the format in which you store your data is not enough-- you've got to come up with truly novel capabilities for working with the data, which is more challenging."

In other IT news

Adobe is blaming a maintenance failure for the 27-hour service outage in its Creative Cloud suite that left video and photo editors unable to log into online services.

“The outage happened during database maintenance activity and affected services that require users to log in with an Adobe password,” Adobe said in a blog post, apologizing for the issue.

“We understand that the time it took to restore the service has been frustrating, but we wanted to be as thorough as possible. We have identified the root cause of this problem and are putting standards in place to prevent this from happening again,” the company added.

We asked Adobe what that “root cause” might be, but the company hadn’t gotten back to us at the time we wrote this entry.

That leaves the software firm's own maintenance as the top culprit, a rather dim prospect for users worried about whether this sort of thing might happen again.

If Twitter is any indication, graphics professionals are not feeling very forgiving, even now that the service is back online.

In other IT news

Initially installed in November of last year, the Lawrence Livermore National Laboratory Catalyst supercomputer is now officially open for industry workloads.

At that time, the lab talked about features like the 800 GB of Flash attached to each of its 304 nodes via PCIe, in addition to the per-node 128 GB of DRAM.

However you look at it, this is a massive supercomputer, rated at 150 teraflops, or 150 trillion floating-point calculations per second.

The LLNL design maps the solid-state drives into application memory so that they look to software like standard DRAM.

In big data analysis applications, fast memory becomes a top priority, and this supercomputer is no exception.
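
To illustrate the general idea only (this is not LLNL's actual implementation), the sketch below memory-maps a file on a flash-backed path in Python so that reads and writes look like ordinary memory accesses; the mount point and size are hypothetical.

```python
# Conceptual sketch only: mapping flash-backed storage into a process's
# address space so it can be used like ordinary memory. This is not the
# LLNL/Catalyst implementation; the path and size are hypothetical.
import mmap
import os

PATH = "/flash/scratch.bin"     # assumed mount point of the node's PCIe SSD
SIZE = 1 << 30                  # map 1 GiB of flash-backed space

# Create a sparse backing file on the flash device.
fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)

# Map it into memory; the OS pages data to and from flash behind the scenes,
# so the buffer behaves like slower, but much larger, DRAM.
buf = mmap.mmap(fd, SIZE)
buf[0:5] = b"hello"
print(bytes(buf[0:5]))          # b'hello'

buf.close()
os.close(fd)
```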

The lab's scientists now seem satisfied with how the Cray-Intel appliance is working, and are seeking partnerships in bioinformatics, big data analysis, graph networks, machine learning and natural language processing, or for exploring new approaches to application checkpointing, in-situ visualisation, out-of-core algorithms and data analytics.

The program will be offered to American companies through LLNL's HPC Innovation Centre.


Here are some more features and specs:

  • 304 dual-socket compute nodes
  • 2.4 GHz 12-core Xeon E5-2695v2 CPUs
  • A total of 7,776 cores
  • 128 GB DRAM
  • 800 GB flash per node
  • Dual-rail Quad Data Rate (QDR-80)
  • 150 teraflops for the full Cray CS300 cluster
We will keep you in the loop as this special project evolves over time, and will provide feedback from the supercomputer's users.

In other IT news

In Japan, the agency that predicts tsunamis and earthquakes isn't keen on making the cloud the core of its supercomputing operations.

While cloud vendors have been touting supercomputing appliances for several years already, Tatsuya Kimura, head of the office of international affairs at the Japan Meteorological Agency, questioned their suitability for the critical and time-sensitive predictions his agency has to make to protect Japan's citizens.

Since the magnitude 9.0 earthquake and the powerful tsunami that devastated Japan in March 2011, the agency has to decide in just a minute or two whether or not to issue a tsunami alert when another such event occurs.

As well as providing Japan's weather services including tracking typhoons, the agency also issues earthquake warnings for the Tokai area, where the tectonic plates are particularly well understood.

“It’s a very time-critical service,” he told journalists at the agency's Tokyo headquarters today. “We can’t say the warning was late because of the cloud service. I think it’s a little unlikely to move to the cloud.”

JMA's current supercomputer is an 847-teraflop machine built by Hitachi and housed in Tokyo itself, which is somewhat quake-prone.

Fujitsu provides communications and other ICT services. Kimura said that if the JMA's supercomputer were knocked out, the agency has no redundant backup and would initially have to rely on weather data from other agencies, such as the UK's Met Office, for its weather predictions.

The agency's tsunami warnings are decided by humans, who rely on previously compiled databases of models covering different magnitudes and depths of quake across key locations.

Source: Oracle.
