

IT departments spend less on enterprise storage arrays


May 19, 2014

Here's a fact that shouldn't surprise you: IT departments are spending less on enterprise storage arrays and are instead considering a shift to the cloud, if they haven't made one already.

This four-year-old trend was pointed out by Aaron Rakers, managing director at equity research firm Stifel Nicolaus.

He has plotted the combined EMC, Hitachi and IBM storage financial results over time, and his chart shows revenue declining since 2010.

Stifel also polled system administrators at businesses that need enterprise storage, and found that:

  • About 60 percent of respondents expected 2014 storage spending to be greater than in 2013.
  • Over 53 percent of surveyed CIOs and CTOs view cloud computing as the most disruptive technology to their data centre, followed by software-defined storage and converged compute-storage (32 percent) and flash storage (15 percent).
  • About 60 percent of surveyed CIOs and CTOs view EMC as the company best positioned to capitalise on the data centre transition and trends taking place in the enterprise storage market, while 19 percent point to NetApp.
  • Some 58.9 percent expect to evaluate a software-defined storage solution in the next 12 to 18 months.
  • 60 percent view server SAN software as the most attractive of these options.

Rakers concluded: "We believe traditional approaches to networked storage appear to be increasingly misaligned with the performance requirements of virtualised server environments."

"We would view late 2014/2015 as potentially representing a pivotal period in how investors view the storage landscape over the next 3 to 5+ years," he added.

"We believe that server-side SAN or hyper-convergence potentially represents the most disruptive architectural approach to software-defined storage, as this approach is highlighted as being the closest comparison to Google, Facebook and Amazon."

Rakers has an “expectation of a two-quarter pause in storage spending; EMC and NetApp have consistently highlighted a belief that enterprise decision cycles have lengthened”.

In other IT news

Oracle has acquired GreenBytes, a virtual desktop storage specialist whose software is built on ZFS technology, for an undisclosed amount.

GreenBytes' software is built around its own highly rated deduplication engine and replication technology.

The software can run on flash hardware, and GreenBytes sold a VDI flash appliance before getting out of the hardware business in August 2013.

The company was founded in 2007 by CEO Bob Petrocelli and raised some $37 million in executive-contributed and venture capital funding.

Unless the company was distressed, the backers could have received four to five times their money back, or roughly $150 to $185 million. That sounds like a lot of cash for a software-only VDI supplier, especially with Atlantis making waves in the same market.
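
As a quick back-of-the-envelope check (a sketch only; the roughly $37 million raised comes from the article, while the four-to-five-times multiple is pure speculation):

```python
# Back-of-the-envelope check of the implied payout range.
# The ~$37M raised is from the article; the 4-5x multiple is speculative.
raised = 37_000_000
low, high = 4 * raised, 5 * raised
print(f"Implied payout: ${low / 1e6:.0f}M to ${high / 1e6:.0f}M")
# Prints: Implied payout: $148M to $185M
```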

We haven't seen any recent announcements from GreenBytes about its business progress, so we can't tell whether it was doing well or not.

There may be some distress here, which would reduce the amount of cash or shares Oracle paid. Then again, maybe not; it's hard to tell.

The deal announcement said GreenBytes' technology "is expected to enhance Oracle's ZFS Storage Appliances", which could mean the ZFS appliance gaining GreenBytes' deduplication engine. Oracle said it "is currently reviewing the existing GreenBytes product roadmap" and will provide guidance to customers in due course.

In other IT news

We wrote about Postgres this week, and judging by the latest beta of the open source PostgreSQL database, the line between the SQL and NoSQL worlds keeps blurring, and the trend shows no sign of reversing.

With yesterday's beta release of the open source PostgreSQL 9.4 database, administrators gain more of the features typically associated with NoSQL systems such as MongoDB.

The headline new feature is the JSONB ("binary JSON") storage format, which lets the database handle JSON-formatted data more efficiently.

Use of the JSON format is one of the things that distinguishes typical NoSQL systems, such as MongoDB, from their relational counterparts.

PostgreSQL has supported the format since version 9.2, letting database administrators store their data in a form that is easily parsed by interpreters, with some of the flexibility typically associated with document databases.

By storing JSON objects in binary, administrators can manipulate them more efficiently. "JSONB uses an internal storage format that is not exposed to clients; JSONB values are sent and received using the JSON text representation," explained Robert Haas, chief architect at PostgreSQL company EnterpriseDB.

    MongoDB's "BSON" format is unable to represent an integer or floating-point number with greater than 64 bits of precision, whereas JSONB can, he explained.

    "JSONB can represent arbitrary JSON values. The PostgreSQL community believes that limitations of this type are unacceptable, and wants to provide the full power of JSON to our users," he explained.

Though this may seem trivial, it's worth remembering that JSONB's capabilities will have a significant effect on the efficiency of JSON-based systems, especially as they grow.

Since many younger developers have grown up writing data in this format, it's understandable that the PostgreSQL project takes an opinionated view of it.

    "With JSONB and other enhancements in version 9.4, we now have full document storage and improved performance with less effort," explained Craig Kerstiens, a developer at Salesforce-backed Heroku.

Besides JSONB, the release also includes the "Data Change Streaming API", which relies on advances in changeset extraction, among numerous other features described in the release notes.
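
Presumably this builds on 9.4's logical decoding work, which can be exercised from SQL using the bundled test_decoding output plugin. A minimal sketch, assuming wal_level=logical and a spare replication slot are configured:

```python
import psycopg2

# Assumes postgresql.conf sets wal_level=logical and max_replication_slots > 0.
conn = psycopg2.connect("dbname=test user=postgres")
conn.autocommit = True
cur = conn.cursor()

# Create a logical replication slot using the demo output plugin.
cur.execute(
    "SELECT * FROM pg_create_logical_replication_slot('demo_slot', 'test_decoding')")

# ... perform some INSERT/UPDATE/DELETE activity, then consume the changes:
cur.execute(
    "SELECT lsn, xid, data FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL)")
for lsn, xid, data in cur.fetchall():
    print(lsn, xid, data)

# Drop the slot when finished so it stops retaining WAL.
cur.execute("SELECT pg_drop_replication_slot('demo_slot')")
conn.close()
```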

With this release, the PostgreSQL community appears confident that as systems like MongoDB grow in popularity, relational systems will keep advancing as well.

    "We definitely see PostgreSQL evolving with new capabilities popularized by NoSQL," Haas added. "PostgreSQL is a relational database, so it is very flexible and extensible, and as new formats like JSON emerge and become popular, it's natural for us to introduce support for those in PostgreSQL."

    "And with the HSTORE contribution module, the PostgreSQL DB has had the capacity to support key/value stores since 2006, pre-dating many of the NoSQL advances," he added.

    "The implication for NoSQL solutions is that innovation around the format in which you store your data is not enough-- you've got to come up with truly novel capabilities for working with the data, which is more challenging."

In other IT news

Adobe is blaming a maintenance failure for the 27-hour Creative Cloud outage that left video and photo editors unable to log in to online services.

“The outage happened during database maintenance activity and affected services that require users to log in with an Adobe password,” Adobe said in a blog post, apologizing for the issue.


“We understand that the time it took to restore the service has been frustrating, but we wanted to be as thorough as possible. We have identified the root cause of this problem and are putting standards in place to prevent this from happening again,” the company added.

We asked Adobe what that “root cause” might be, but the company hadn’t gotten back to us at the time we wrote this entry.

That leaves the software firm's own maintenance as the top culprit, a rather dim prospect for users worried about whether this sort of thing might happen again.

If Twitter is any indication, graphics professionals are not feeling very forgiving, even now that the service is back online.

In other IT news

Initially installed in November of last year, the Lawrence Livermore National Laboratory's Catalyst supercomputer is now officially open for industry workloads.

At the time, the lab highlighted features such as the 800 GB of flash attached to each of its 304 nodes via PCIe, in addition to 128 GB of DRAM per node.

No matter how you look at this, it's a massive supercomputer capable of trillions of calculations per second.

The LLNL design maps the solid-state drives into application memory so that they look like standard DRAM.
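
The article doesn't detail LLNL's exact mechanism, but the general technique can be sketched with a plain memory map: a file on the PCIe flash device is mapped into the process's address space and then read and written like ordinary memory. The file path below is hypothetical:

```python
import mmap

# Hypothetical file on a PCIe flash device, used as if it were extra DRAM.
PATH = "/flash/scratch.bin"
SIZE = 1 << 30  # 1 GiB backing file

with open(PATH, "a+b") as f:
    f.truncate(SIZE)                   # size the backing file
    mem = mmap.mmap(f.fileno(), SIZE)  # map it into the address space

    # Reads and writes now look like ordinary memory access; the OS pages
    # data between DRAM and flash behind the scenes.
    mem[0:5] = b"hello"
    assert mem[0:5] == b"hello"

    mem.close()
```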

In big data analysis applications, fast memory becomes a top priority, and this supercomputer is no exception.

The lab's scientists now seem satisfied with how the Cray-Intel machine is working, and are seeking partnerships in bioinformatics, big data analysis, graph networks, machine learning and natural language processing, as well as in exploring new approaches to application checkpointing, in-situ visualisation, out-of-core algorithms and data analytics.

Source: Stifel Nicolaus Inc.
