

Australia wants to test drive its new Cray supercomputer

June 4, 2014

Australia's iVEC supercomputing facility is looking for early adopters to put its new Cray-designed and -built machine through its paces.

iVEC's current Magnus supercomputer, a Cray XC30 with 104 blades (four nodes per blade, two 8-core Intel Sandy Bridge processors per node), comprises a total of 3,328 CPU cores delivering around 69 TFLOPS.
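As a rough cross-check, the quoted core count squares with the usual peak-FLOPS arithmetic; a minimal sketch, assuming a 2.6 GHz clock and 8 double-precision FLOPs per core per cycle (Sandy Bridge with AVX), neither of which is stated in the article:

    # Back-of-the-envelope peak for the current Magnus. The core count comes from
    # the article; the clock speed and FLOPs-per-cycle figures are assumptions.
    cores = 3328
    clock_ghz = 2.6        # assumed Sandy Bridge clock
    flops_per_cycle = 8    # assumed double-precision AVX throughput per core

    peak_tflops = cores * clock_ghz * flops_per_cycle / 1000
    print(f"{peak_tflops:.1f} TFLOPS")   # 69.2 TFLOPS, in line with the ~69 TFLOPS quoted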

Beginning early next month, the facility kicks off a major upgrade to bring Magnus up to petascale, and Australia has indicated it wants to speed the process along.

It will grow from its current two cabinets to eight, housing 1,488 nodes in total, each node carrying two 12-core Haswell chips and 64 GB of RAM, taking Magnus to more than 35,000 cores and 95 TB of memory.
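Those headline totals follow directly from the node spec; a quick check using only the figures above:

    # Sanity check of the upgrade figures: 1,488 nodes, two 12-core chips and 64 GB each.
    nodes = 1488
    cores_per_node = 2 * 12
    ram_gb_per_node = 64

    print(nodes * cores_per_node)           # 35712 -> "more than 35,000 cores"
    print(nodes * ram_gb_per_node / 1000)   # 95.232 -> roughly 95 TB of memory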

Cray's Aries interconnect will tie the whole system together, and there will be 3 PB of storage capable of a peak bandwidth of 70 GB/s.

During its August acceptance testing, iVEC wants “Petascale Pioneers” to give the machine some heat, and it's offering around 100 million core hours for testers. It's seeking projects that will stretch all aspects of Magnus, from its workload-handling to its communications.
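For a sense of scale, here is what that allocation buys on the upgraded machine; an illustration only, assuming the post-upgrade core count derived above and whole-machine runs:

    # How long 100 million core hours lasts if the entire upgraded machine is used at once.
    core_hours = 100_000_000
    cores = 35712                       # post-upgrade core count derived above

    machine_hours = core_hours / cores  # ~2,800 hours
    print(round(machine_hours), "hours, or about", round(machine_hours / 24), "days")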

In an invitation sent to researchers, iVEC is asking for workloads that consume more than 10 percent of the machine on a given run, demonstrating grand-challenge scientific problems; communications-heavy operations that exploit the “speed and bandwidth of the Cray Aries interconnect and Dragonfly topology” across a substantial portion of the machine; or work on how the new machine can improve the performance and scalability of existing or new applications.

Overall technical support and training will be available during the pioneer phase, and academics are invited to express interest and ask questions of iVEC via submissions@ivec.org.

In other IT news

Server maker Supermicro is now building a 'cold storage' appliance aimed at data that must be kept for a very long time and can't be deleted, but is accessed only rarely.

That new type of storage server "minimizes power consumption and reduces cooling requirements by spinning down or powering off idle drives and managing specific data streams via Supermicro’s compact, low-power Intel Atom C2750-based serverboard for cold storage," according to Supermicro.
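To illustrate the idea behind drive spin-down (a conceptual sketch only, not Supermicro's firmware; the 30-minute threshold and the Drive class are invented for the example):

    import time

    IDLE_SPIN_DOWN_SECONDS = 30 * 60   # hypothetical threshold: park a drive after 30 idle minutes

    class Drive:
        """Toy model of a disk that is spun down when it sits idle."""

        def __init__(self, name):
            self.name = name
            self.spinning = True
            self.last_access = time.time()

        def access(self):
            # Any read or write resets the idle clock and spins the drive back up.
            self.last_access = time.time()
            self.spinning = True

        def maybe_spin_down(self, now=None):
            # Called periodically; stops the motor once the drive has been idle long enough.
            now = time.time() if now is None else now
            if self.spinning and now - self.last_access > IDLE_SPIN_DOWN_SECONDS:
                self.spinning = False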

The new server comes in a 1U, 32-inch-deep rack enclosure holding a dozen 4 TB or 6 TB 3.5-inch drives, with Atom or, for more data-intensive applications, Xeon processors. The various options are:

  • A1SA7-2550: 4-core Atom CPU, up to 64 GB RAM, GbE, redundant 400 W power supplies. For cloud-based cold storage with drive spin-down.
  • A1SA7-2750: 8-core Atom CPU, up to 64 GB RAM, 10GbE add-on card, redundant 400 W power supplies. For online, low-tier, scale-out storage.
  • X10SL7: 4-core Xeon E3-1200 v3 series CPU, up to 32 GB RAM, 10GbE add-on card, redundant 400 W power supplies. A big data or data lake platform for scale-out and object storage in cloud environments.
  • X9SRH-TPF: 6-to-12-core Xeon E5-1600/2600 v2 series CPU, up to 256 GB ECC LRDIMM/RDIMM or 64 GB ECC UDIMM, onboard 10GbE SFP+, redundant 600 W power supplies. For big data analytics and native Hadoop 2.0 real-time applications.
This is a bit like Facebook's OCP OpenVault cold storage product, a 2U, 30-drive enclosure with two x86 server nodes, to be built by Foxconn.

But for its part, the Supermicro product is smaller, both physically and capacity-wise, and it also consumes less power and runs cooler.
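To put "smaller capacity-wise" in rough numbers, a back-of-the-envelope comparison of raw, pre-RAID capacity per chassis; the bay counts and the 6 TB drive size come from the figures above, everything else is assumed (including that the OpenVault uses the same drives):

    def raw_capacity_tb(bays, drive_tb):
        # Raw capacity only: no RAID, filesystem or spare overhead accounted for.
        return bays * drive_tb

    print("Supermicro 1U:", raw_capacity_tb(12, 6), "TB")   # 72 TB (48 TB with 4 TB drives)
    print("OpenVault 2U:", raw_capacity_tb(30, 6), "TB")    # 180 TB, assuming the same 6 TB drives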

Stifel MD Aaron Rakers says: "We believe that SuperMicro is finding some traction in cold storage (high-capacity) discussions, while also benefiting from supplier partnerships with emerging vendors such as Nimble Storage, Nutanix, Nexenta, Coho Data, Scale Computing, and a few others."

In other IT news

HP said earlier this morning that it has added two SAP-specific implementations to its ConvergedSystem product line: the CS-900 for enterprise customers and the CS-500 for smaller companies.

HP says that with configurations purpose-built for specific application environments such as this one, it can get customers up and running on new servers without their system admins having to build and tune everything for their own particular needs.

The company has already rolled out ConvergedSystem variants for Citrix, for Vertica and Microsoft big data environments, and for VMware virtualization environments.

Born out of HP and SAP's Project Kraken, the SAP HANA-pitched CS-900 enterprise platform offers 12 TB per node as a single pool of memory, and can be scaled out to 80 TB across multiple nodes.

According to Raj Thakur, director and general manager of HP Servers & Converged Systems in the company's South Pacific Enterprise Group, “the system is optimized for in-memory computing and we have added the smarts from our Superdome Integrity line to provide high availability.”

In particular, the Superdome's hardware-level partitioning is implemented in the CS-900 to provide better uptime and availability, said Thakur.

“To be sure, the hardware partitioning is about twenty times more reliable than software-only virtualization,” Thakur added, saying that workloads will still move automatically between partitioned process chunks.

“SAP HANA has a requirement for high in-memory computing capacity,” he continued, “so the CS-900 is pre-certified for that configuration. Not all workloads need or are tuned to run in-memory database and analytics workloads”.

Since not everybody needs between 6 TB and 12 TB of in-memory computing power, there's also an SMB-level variant, the CS-500, a two-to-four-socket server running between 256 GB and 1 TB of RAM per node, depending on the needs of the customer.

The CS-500s can scale out to 16 TB, Thakur said, and there's an SAP Business Suite variant that's a four-socket machine supporting up to 2 TB of RAM per node.

“We will also wrap flexible capacity services around it,” Thakur said, an approach that fits the increasingly popular pay-as-you-grow model.

Hewlett-Packard is hoping that its channel partners will take on the SAP HANA systems as an as-a-service offering, he said, with extra capabilities to follow.

Those would include fault tolerance and database replication, which he said would deliver “around 60 percent reduction in downtime”. He added that the system can fail over in “as little as four seconds”.

In other IT news

Panasonic said earlier this morning that it is recalling 43,000 ToughBook batteries across the globe, including in Australia, after three caught fire in Asia. The CF-H2 tablet computer batteries were supplied from July 2011 until May 2012; battery model CF-VZSU53AW in manufacturing lots B6NA, B6YA, B71A, B74A, B75A, B76A, B7CA, B7VA, B83A, BBGA, BBHA, BBJA, and BBWA is affected.

So far, two of the fires occurred in Japan and one in Thailand, with no injuries reported. Panasonic's recall notice for Australian customers can be found on its website.

Users should remove the battery and only use the ToughBook on its AC adapter until they have obtained a free replacement from the company.

Panasonic recently said it wants to be the sole battery supplier for Tesla's planned Gigafactory. Refreshingly, the Li-ion cells supplied for Tesla's cars aren't the same as those used in the tablets. Thank God for that.

In other IT and electronics news

Cloud systems operator Joyent went through a catastrophic failure late yesterday when an absent-minded administrator brought down an entire data center's computing assets.

The cloud services provider began reporting "transient availability issues" for its US-East-1 data center at around 6:30 PM EST.

    "Due to an internal operator error, all computing nodes in our US-East-1 data center were simultaneously rebooted," Joyent wrote.

    "Some computing nodes are already backed up, but due to the very high loads on the control plane, this is taking some time to reboot the whole system. We are dedicating all operational and engineering resources to getting this issue resolved, and will be providing a full report on this failure once every computing node and customer virtual machine is back online and operational," to company added.

Some of the issues were fixed an hour or so later. A datacenter-wide forced reboot of all servers is just about the worst thing that can happen to a provider, aside from the deletion of customer data or multiple data centers going down simultaneously.

Source: Australia's iVEC Supercomputer Initiative.
