

Japan isn't keen on using the Cloud when it comes to tsunami warnings


May 15, 2014

In Japan, the agency that predicts tsunamis and earthquakes isn't keen on making the cloud the core of its supercomputing operations.

While cloud vendors have been touting supercomputing appliances for several years, Tatsuya Kimura, head of the office of international affairs at the Japan Meteorological Agency, questioned their suitability for the critical, time-sensitive predictions his agency has to make to protect Japan's citizens.

Since the magnitude 9.0 earthquake and the powerful tsunami that devastated Japan in March 2011, the agency has had to make a call within a minute or two of any such event as to whether or not to issue a tsunami alert.

As well as providing Japan's weather services including tracking typhoons, the agency also issues earthquake warnings for the Tokai area, where the tectonic plates are particularly well understood.

“It’s a very time-critical service,” he told journalists at the agency's Tokyo headquarters today. “We can’t say the warning was late because of the cloud service. I think it’s a little unlikely to move to the cloud.”

JMA's current supercomputer is an 847-teraflop machine built by Hitachi and housed in Tokyo itself, which is somewhat quake-prone.

Fujitsu provides communications and other ICT services. Kimura said the JMA's supercomputer has no redundant backup; if it were knocked out, the agency would initially have to rely on weather data from other agencies, such as the UK's Met Office, for its predictions.

The agency's tsunami warnings are decided by humans, who rely on a previously compiled database of models covering different magnitudes and depths of quake across key locations.

Incredibly, Japan can experience up to 1,000 earthquakes a day. The system for tsunami warnings was overhauled in the wake of the devastating 2011 quake, which resulted in a tsunami that killed over 10,000 citizens.

Kimura said the 9.0-magnitude earthquake was off the scale: the agency's seismometers were "saturated" and could not initially give a reading for its magnitude, leading to an underestimation of the tsunami's danger.

Kimura said that under the agency's new protocol, if a tsunami of more than 1 meter in height is expected, it issues an immediate evacuation notice for areas likely to be hit.
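To make that protocol concrete, here is a minimal, purely illustrative sketch of a threshold-based lookup in Python. The scenario table, depth bands and numbers are invented for illustration and are not JMA's actual models or systems.

```python
# Purely illustrative sketch of a threshold-based tsunami lookup; the scenario
# table, bands and numbers below are invented and are NOT JMA's actual models.

# Pre-computed scenarios keyed by (rounded magnitude, depth band, region),
# giving an expected wave height in meters. A real database holds thousands.
SCENARIOS = {
    (8.0, "shallow", "Tohoku coast"): 3.5,
    (7.5, "shallow", "Tohoku coast"): 1.2,
    (7.5, "deep",    "Tohoku coast"): 0.4,
}

EVACUATION_THRESHOLD_M = 1.0  # evacuate if more than 1 meter is expected


def assess(magnitude: float, depth_km: float, region: str) -> str:
    """Look up the closest pre-computed scenario and decide on a notice."""
    depth_band = "shallow" if depth_km < 60 else "deep"
    key = (round(magnitude * 2) / 2, depth_band, region)  # nearest 0.5 magnitude
    expected_height = SCENARIOS.get(key, 0.0)
    if expected_height > EVACUATION_THRESHOLD_M:
        return f"EVACUATE: ~{expected_height} m tsunami expected in {region}"
    return "monitor: no evacuation notice"


if __name__ == "__main__":
    print(assess(magnitude=7.6, depth_km=20, region="Tohoku coast"))
```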

While questioning cloud providers' suitability for underpinning its warning system, Kimura did say that the agency uses cloud services for disseminating information, and will do so with imaging data from its upcoming new weather satellite, due to launch in October.

But cloud vendors are unlikely to be able to change the agency's mind any time soon. The agency upgrades its supercomputer every five years, and has just put together the advisory team for the next refresh in four years' time. Outsourcing the service is absolutely not on the agenda.

In other IT news

Database upstart TransLattice has pushed out a new, free clustered version of the PostgreSQL database, based on technology it acquired from StormDB.

The company claims the software is ideal for supporting modern web-based applications, and similar needs.

TransLattice says that the new database, dubbed Postgres-XL, has extreme online transaction processing scalability and massively parallel processing analytics.

Like MySQL, PostgreSQL is one of the most widely used open-source relational databases, and is seen by many as a more feature-rich alternative to MySQL and its derivatives such as MariaDB.

The Postgres-XL database was announced by TransLattice yesterday, and it incorporates know-how from its recent acquisition, StormDB.

StormDB was one of the major contributors to the Postgres-XC project, which is described as a "multi-master write-scalable PostgreSQL cluster based on shared-nothing architecture."

Postgres-XL has been built to have the desirable characteristics of Postgres-XC, but fewer of its issues, TransLattice explained.

"In the case of Postgres-XC, a core difference is if you need to join data on one table on one node with another table on another node, it will ship everything to the coordinator where the session originated from."

"When joining two large tables, it's going to ship everything to one single table, so in some cases Postgres-XC performs worse than Postgres," explained TransLattice's chief architect Mason Sharp.

"In the case of Postgres-XL, the node knows exactly where the end-user data is stored, and all communicate with one another so when a query comes in it's parsed and planned once on a coordinator, then serialized and sent down to all other nodes."

TransLattice cofounder Michael Lyle said about two to three developers would work on Postgres-XL as their main project, along with contributions from the development team behind TransLattice's commercial TED database.

"Overall, Postgres is a great general purpose open source database, and we've taken that and expanded it and allowed you to expand it on multiple nodes," explained Sharp. "It does a pretty good job on write- scalability. I think that's pretty unique. It's also nice for mixed workloads."

Postgres consulting firm OpenSCG has agreed to recommend the software when talking to enterprise customers, and TransLattice tells us that it's in discussions with several potential users involved in advertising and telephony.

We wondered how the technology might differ from WebScaleSQL, a new MySQL-based relational database being developed by Google, LinkedIn, Twitter and Facebook.

Both databases have superficially similar goals, though WebScaleSQL has more of an emphasis on scale-out deployment, while Postgres-XL is geared mostly towards analytics.

"WebScaleSQL isn't a clustered database, but instead a set of changes to MySQL to allow its production use at greater scales," Lyle explained. "The users of WebScaleSQL, like Facebook, divide their information between many WebScaleSQL/MySQL instances at the application layer."

"This requires significant application logic to keep everything coherent and uses a lot of developer time. In contrast, Postgres-XL provides a single large relational database across many servers, providing both excellent write scalability (for OLTP), and the ability to run sophisticated queries across the entire dataset in parallel," he added.

Additionally, more people seem to use Postgres than newer database upstarts such as MongoDB, so it'll be fascinating to see how this project develops.

If TransLattice is true to its word and puts developers behind it, and independent benchmarks bear out its performance claims, this could be an exciting project. We'll keep you posted.

In other IT news

Some IT industry analysts are wondering if Salesforce.com's decision to acquire Heroku was a good idea in the first place.

Why would a big SaaS company like Salesforce.com, with its own development platform, need a PaaS play like Heroku?

The eternally enthusiastic company has just spelled out why, and how, it thinks the two will work together.

The business model is straightforward: developers building apps for employees will do so on Force.com, and developers building apps for consumers will do so on Heroku.


The concept seems to be that Heroku can take care of all the heavy lifting required to deliver a web app at scale.

Force.com retains its role as the place to do custom app development for your team, and the new connector pipes data between the two with sufficient speed.

However, in the background things aren't that simple. Salesforce.com lives on Oracle software. Heroku relies on its own version of Postgres. So they offer rather different environments for developers, and that could pose some issues down the road.

Salesforce.com is also being a bit loosey-goosey about data centre arrangements; let's hope it has Force.com and Heroku as physically close as possible so that latency doesn't cause delays during data transfers between the two platforms.

Salesforce is saying that the formal release of the Heroku Connector will enhance its overall proposition by making it possible to feed customer-generated data into apps that run business processes.

But until recently, that hasn't been easy: whether you're a startup or a business colossus, you've had to wire that kind of integration together by hand, probably with more developers than you really need.

However, there's a bit of truth in that concept. It's probably also not far from reality to suggest that what Salesforce has done is build middleware-as-a-service to link the two divergent parts of its platform.
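As a rough illustration of that middleware idea, assuming the connector exposes synced Salesforce objects as ordinary Postgres tables, a consumer-facing Heroku app might read the data as sketched below. The "salesforce.contact" table and "newsletter_opt_in" column are invented names, not a documented schema.

```python
# Sketch only: a consumer-facing Heroku app reading Salesforce data that the
# connector has synced into Heroku Postgres. The "salesforce.contact" table and
# "newsletter_opt_in" column are assumptions for illustration.
import os
import psycopg2

# Heroku-style configuration: the attached Postgres database is exposed to the
# app through the DATABASE_URL environment variable.
conn = psycopg2.connect(os.environ["DATABASE_URL"])
cur = conn.cursor()

# The synced data looks like any other Postgres table, so the web app never has
# to call the Force.com APIs directly; the connector keeps the copy up to date.
cur.execute("SELECT email FROM salesforce.contact WHERE newsletter_opt_in = true")
for (email,) in cur.fetchall():
    print(email)

cur.close()
conn.close()
```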

Source: Japan Meteorological Agency.

