Adobe blames maintenance failure for 27-hour service outage
May 16, 2014
Adobe is blaming a maintenance failure for the 27-hour service outage in its Creative Cloud suite
that left video and photo editors unable to log into online services.
“The outage happened during database maintenance activity and affected services that require users
to log in with an Adobe password,” Adobe said in a blog post, apologizing for the issue.
“We understand that the time it took to restore the service has been frustrating, but we
wanted to be as thorough as possible. We have identified the root cause of this problem and
are putting standards in place to prevent this from happening again,” the company added.
We asked Adobe what that “root cause” might be, but the company hadn’t gotten back to us at
the time we wrote this entry.
That leaves the software firm's own maintenance as the top culprit, a rather dim prospect for
users worried about whether this sort of thing might happen again.
If Twitter is any indication, graphics professionals are not feeling very forgiving, even now
that the service is back online.
In other IT news
Initially installed in November of last year, the Lawrence Livermore National Laboratory Catalyst
supercomputer is now officially open for industry workloads.
At that time, the lab talked about features like the 800 GB of flash attached to each of its
324 nodes via PCIe, in addition to the per-node 128 GB of DRAM.
No matter how you look at it, this is a massive supercomputer, capable of some 150 trillion calculations per second.
The LLNL design maps the solid-state drives into application memory, making the flash look like an extension of each node's DRAM.
In big data analysis applications, fast memory becomes a top priority, and this supercomputer
is no exception.
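The mapping trick described above can be sketched in miniature with ordinary memory-mapped files: a file sitting on a flash device is mapped into the process's address space and then read and written like RAM. This is only an illustration of the general technique, not LLNL's actual software stack, and the file path is hypothetical.

```python
# Minimal sketch: map a file (imagine it lives on PCIe-attached flash)
# into memory so the application can treat it like ordinary RAM.
import mmap
import os

path = "scratch.dat"  # hypothetical file on the flash device

# Create a 1 MiB backing file.
with open(path, "wb") as f:
    f.truncate(1024 * 1024)

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)  # map the whole file into memory
    mm[0:5] = b"hello"             # write through the mapping, like RAM
    data = bytes(mm[0:5])          # read it back the same way
    mm.close()

os.remove(path)
print(data)  # b'hello'
```

The appeal for big-data workloads is exactly this: data larger than DRAM stays addressable without explicit read/write calls, with the flash acting as a slower memory tier.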
The lab's scientists now seem satisfied with how the Cray-Intel appliance is working, and
are seeking partnerships in bioinformatics, big data analysis, graph networks, machine learning
and natural language processing, or for exploring new approaches to application checkpointing,
in-situ visualisation, out-of-core algorithms and data analytics.
The program will be offered to American companies through LLNL's HPC Innovation Center.
Here are some more features and specs:
324 dual-socket compute nodes
2.4 GHz 12-core Xeon E5-2695v2 CPUs
A total of 7,776 cores
128 GB DRAM per node
800 GB flash per node
Dual-rail Quad Data Rate (QDR-80) InfiniBand interconnect
150 teraflops for the full Cray CS300 cluster
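The quoted peak figure checks out against the core count with a quick back-of-the-envelope calculation, assuming 8 double-precision flops per core per cycle (the usual figure for AVX on Ivy Bridge-class Xeons; that per-cycle rate is our assumption, not something stated above).

```python
# Sanity-check the 150-teraflop figure from the spec list.
cores = 7776          # total cores quoted above
clock_hz = 2.4e9      # 2.4 GHz Xeon E5-2695v2
flops_per_cycle = 8   # assumption: 8 DP flops per core per cycle (AVX)

peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
print(round(peak_tflops, 1))  # 149.3, in line with the quoted 150 TF
```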
We will keep you in the loop as this special project evolves over time, and will pass along
feedback from the supercomputer's users.
In other IT news
In Japan, the agency that predicts tsunamis and earthquakes isn't keen on making the cloud
the core of its supercomputing operations.
While cloud vendors have been touting supercomputing appliances for several years already,
Tatsuya Kimura, head of the office of international affairs at the Japan Meteorological Agency,
questioned their suitability for the critical and time-sensitive predictions his agency has to
make to protect Japan's citizens.
Since the magnitude-9.0 earthquake and the powerful tsunami that devastated Japan in
March 2011, the agency has had to decide within just a minute or two whether or not to issue
a tsunami alert should another such event occur.
As well as providing Japan's weather services including tracking typhoons, the agency also issues
earthquake warnings for the Tokai area, where the tectonic plates are particularly well understood.
“It’s a very time-critical service,” he told journalists at the agency's Tokyo headquarters
today. “We can’t say the warning was late because of the cloud service. I think it’s a little
unlikely to move to the cloud.”
JMA’s current supercomputer is an 847-teraflop machine built by Hitachi and housed in Tokyo itself;
Fujitsu provides communications and other ICT services. Kimura said the machine has no redundant
backup, so if it ever failed the agency would initially have to rely on weather data
from other agencies, such as the UK’s Met Office, for its weather predictions.
The agency’s tsunami warnings are decided by humans, who rely on a previously compiled database of
models covering different magnitudes and depths of quake across key locations.
Incredibly, Japan can experience up to 1,000 earthquakes a day. The system for tsunami warnings was
overhauled in the wake of the devastating 2011 quake, which resulted in a tsunami that killed over 10,000 people.
Kimura said that 9.0-magnitude earthquake was off the scale: the agency’s seismometers were “saturated”
and initially could not give a reading for its magnitude, leading to an underestimation of the danger of the ensuing tsunami.
Kimura said that under the agency’s new protocol, if a tsunami of more than 1 metre in height is
expected, it issues an immediate evacuation notice for areas likely to be hit.
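The workflow described above — snap the quake's parameters to the nearest precompiled model rather than simulating live, then apply the 1-metre threshold — can be sketched as follows. The lookup table, its values, and the nearest-bucket rule are all invented for illustration; JMA's real database is far larger and its matching logic is not public.

```python
# Hypothetical sketch of the decision logic: look up a precomputed model
# for the quake's magnitude and depth, then issue an evacuation notice
# if the expected wave exceeds 1 metre.

# (magnitude bucket, depth bucket in km) -> expected wave height in metres
PRECOMPUTED_MODELS = {
    (7.0, 10): 0.4,
    (8.0, 10): 1.5,
    (9.0, 10): 8.0,
}

def expected_height(magnitude, depth_km):
    # Snap to the nearest precompiled bucket rather than simulating live;
    # that is what makes a one-to-two-minute decision possible.
    key = min(PRECOMPUTED_MODELS,
              key=lambda k: abs(k[0] - magnitude) + abs(k[1] - depth_km))
    return PRECOMPUTED_MODELS[key]

def should_evacuate(magnitude, depth_km, threshold_m=1.0):
    return expected_height(magnitude, depth_km) > threshold_m

print(should_evacuate(8.1, 12))  # True: nearest model predicts a 1.5 m wave
print(should_evacuate(7.2, 8))   # False: nearest model predicts 0.4 m
```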
While questioning cloud providers' suitability for underpinning its warning system, Kimura
did say that the agency uses cloud services for disseminating information, and will do so with
imaging data from its upcoming new weather satellite, due to launch in October.
But cloud vendors are unlikely to be able to change the agency’s mind any time soon. The agency
upgrades its supercomputer every five years, and has just put the advisory team together for the
next refresh in four years' time. Outsourcing the service is absolutely not on the agenda.
In other IT news
Database upstart TransLattice has pushed out a new free version of the Postgres DB, based on technology that
it acquired from StormDB.
TransLattice claims the new database, dubbed Postgres-XL, is ideal for supporting modern
web-based applications, offering extreme online transaction processing (OLTP) scalability and
massively parallel processing (MPP) analytics.
Like MySQL, PostgreSQL is one of the most widely used open-source relational databases, and is
seen by many as a more fully featured alternative to MySQL and its fork MariaDB.
The Postgres-XL database was announced by TransLattice yesterday, and it incorporates know-how
from its recent acquisition, StormDB.
StormDB was one of the major contributors to the Postgres-XC project, which is described as a
"multi-master write-scalable PostgreSQL cluster based on shared-nothing architecture."
Postgres-XL has been built to have the desirable characteristics of Postgres-XC, but fewer of its
issues, TransLattice explained.
"In the case of Postgres-XC, a core difference is that if you need to join data in one table on one
node with another table on another node, it will ship everything to the coordinator where the session
is running," explained TransLattice's chief architect Mason Sharp. "When joining two large tables,
it's going to ship everything to one single node, so in some cases Postgres-XC performs worse than
plain Postgres."
"In the case of Postgres-XL, each node knows exactly where the end-user data is stored, and all
the nodes communicate with one another, so when a query comes in it's parsed and planned once on a
coordinator, then serialized and sent down to all the other nodes."
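Sharp's point can be illustrated with a toy model (this is not the real Postgres-XC/XL code): when both tables are hash-partitioned on the join key, matching rows already live on the same node, so a distribution-aware planner can join locally instead of shipping every input row to one coordinator.

```python
# Toy comparison: coordinator-style join vs distribution-aware local joins.
# Two "nodes", with both tables hash-partitioned on the join key.
NODES = 2
users  = [(i, f"user{i}") for i in range(6)]        # (user_id, name)
orders = [(i % 6, f"order{i}") for i in range(12)]  # (user_id, order)

def partition(rows):
    # Hash-partition rows by join key across the nodes.
    buckets = [[] for _ in range(NODES)]
    for row in rows:
        buckets[row[0] % NODES].append(row)
    return buckets

users_p, orders_p = partition(users), partition(orders)

# Coordinator-style join (Postgres-XC's worst case): every input row from
# both tables is shipped to one place before joining.
rows_shipped_naive = len(users) + len(orders)

# Distribution-aware join (the Postgres-XL approach described above):
# each node joins only its own buckets; no input rows cross the network.
local_join = [
    (uid, uname, oname)
    for n in range(NODES)
    for uid, uname in users_p[n]
    for oid, oname in orders_p[n]
    if uid == oid
]

print(rows_shipped_naive)  # 18 input rows moved in the naive plan
print(len(local_join))     # 12 joined rows produced without shipping inputs
```

The gap widens with table size: the naive plan's network traffic grows with the inputs, while the local plan's stays proportional to the results.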
TransLattice cofounder Michael Lyle said about two to three developers would work on Postgres-XL as
their main project, along with contributions from the development team behind TransLattice's commercial
products.
"Overall, Postgres is a great general purpose open source database, and we've taken that and expanded
it and allowed you to expand it on multiple nodes," explained Sharp. "It does a pretty good job on write-
scalability. I think that's pretty unique. It's also nice for mixed workloads."
Postgres consulting firm OpenSCG has agreed to recommend the software when talking to enterprise
customers, and TransLattice tells us that it's in discussions with several potential users involved
in advertising and telephony.
We wondered how the technology may differ from WebScaleSQL, a new MySQL-based relational
database that is being developed by Google, LinkedIn, Twitter and Facebook.
Both databases have superficially similar goals, though WebScaleSQL has more of an emphasis
on scale-out deployment, while Postgres-XL is geared mostly towards analytics.
"WebScaleSQL isn't a clustered database, but instead a set of changes to MySQL to allow its
production use at greater scales," Lyle explained. "The users of WebScaleSQL, like Facebook, divide
their information between many WebScaleSQL/MySQL instances at the application layer."
"This requires significant application logic to keep everything coherent and uses a lot of developer
time. In contrast, Postgres-XL provides a single large relational database across many servers, providing
both excellent write scalability (for OLTP), and the ability to run sophisticated queries across the
entire dataset in parallel," he added.
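The application-layer sharding Lyle describes can be sketched as a simple routing function: the application, not the database, hashes a key to pick which MySQL instance holds that user's rows. The instance names below are hypothetical, and real deployments add rebalancing and replica handling on top.

```python
# Sketch of application-level sharding for a WebScaleSQL/MySQL-style
# deployment: routing logic lives in the application tier.
import hashlib

INSTANCES = ["mysql-shard-0", "mysql-shard-1", "mysql-shard-2"]  # hypothetical

def shard_for(user_id: str) -> str:
    # Stable hash, so the same user always routes to the same instance.
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return INSTANCES[int(digest, 16) % len(INSTANCES)]

# Every query touching this user must go to the shard chosen here;
# anything spanning shards has to be reassembled in application code,
# which is exactly the developer-time cost Lyle points to.
print(shard_for("alice") in INSTANCES)            # True
print(shard_for("alice") == shard_for("alice"))   # True: routing is stable
```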
Additionally, Postgres seems to have a larger user base than newer databases such as MongoDB, so
it'll be fascinating to see how this project develops.
If TransLattice is true to its word and puts developers behind it, and independent benchmarks
bear out its performance claims, this could be an exciting project. We'll keep you posted.
Source: Adobe Corp.