Oracle launches new Cloud Services catalog

June 4, 2014

Oracle said this morning that it has injected some powerful new capabilities into its Oracle Enterprise Manager software.

Like the rest of Oracle's newer products, it reflects the reinvention under way at the company as it wakes up to concepts such as multi-tenancy and cloud-based services.

New features in Oracle Enterprise Manager 12c Release 4 include a service catalog, a Java Virtual Machine diagnostics service for developers, faster software provisioning, a data warehouse for database performance information, and a few other elements.

The company has launched a Cloud Services catalog that lets system admins select the specific capabilities, such as high availability, that they want in a newly provisioned database.

While a cosmetic convenience rather than a deep technical change, this kind of self-service catalog has long been standard in public clouds, where it flattens the learning curve for provisioning.
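
As a sketch of what such a catalog request might look like from the admin's side, here is a minimal Python example; the endpoint, payload fields, and credentials are hypothetical placeholders, not Oracle's documented Enterprise Manager API.

    # Hypothetical self-service catalog request: the URL, payload fields and
    # credentials are illustrative only, not Oracle's documented API.
    import requests

    EM_URL = "https://em.example.com/em/cloud"   # placeholder endpoint

    payload = {
        "service_template": "database-12c",      # catalog entry the admin picked
        "options": {
            "high_availability": True,           # e.g. provision with a standby
            "size": "medium",
            "backup_schedule": "daily",
        },
    }

    resp = requests.post(EM_URL + "/service_requests", json=payload,
                         auth=("sysman", "not-a-real-password"), timeout=30)
    resp.raise_for_status()
    print("Provisioning request accepted:", resp.json())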

Taken as a whole, the new features combine capabilities that enterprise customers could otherwise stitch together from a bevy of open source technologies with some of Oracle's proprietary ones.

Two other features stand out as particularly useful for developers. One is the "JVM Diagnostics as a Service" component, which lets application developers test pre-production apps for common JVM problems, such as excessive garbage collection or heap errors.

This should help companies spot some underlying issues before they get bigger and cause a cascading system failure.
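
As a minimal sketch of the kind of check JVMD automates, a developer could run a pre-production build with HotSpot's standard GC logging flags and flag frequent full collections; the jar name and threshold below are assumptions for illustration.

    # Run a candidate app with HotSpot's GC logging enabled (-verbose:gc and
    # -XX:+PrintGCDetails are standard flags of this era) and count full
    # collections. Jar name and threshold are illustrative; assumes the app
    # exits within the timeout.
    import subprocess

    proc = subprocess.run(
        ["java", "-verbose:gc", "-XX:+PrintGCDetails",
         "-jar", "preprod-app.jar"],             # hypothetical pre-production app
        capture_output=True, text=True, timeout=300)

    full_gcs = [line for line in proc.stdout.splitlines() if "Full GC" in line]
    if len(full_gcs) > 10:                       # arbitrary threshold
        print(len(full_gcs), "full GCs observed: check heap sizing or leaks")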

"We try to use these upfront visualizations with selective drilldown to make it human consumable," Oracle's senior director for systems and application management Dan Koloski said.

"Our approach with JVMD is we're trying to simplify wherever possible. We can essentially say to a less sophisticated consumer 'here's the way we would troubleshoot this problem'," he added.

Oracle hopes that the JVMD service will get developers "used to the nomenclature of operating production JVMs," said Koloski.

This may relieve some of the pressure on developers whose job is to troubleshoot JVM-related issues in apps.

Another important feature in this release is the Performance Warehouse, which takes Oracle Automatic Workload Repository (AWR) data from a live database and places it in a warehouse where system admins can run analytics on it to check for performance issues.

"That allows AWR data to be captured over a wide timespan and run analytics against it to do a wide amount of performance analysis," Koloski explained.

"What we've done with the performance warehouse is taken those snapshots offline allowing them to be stored indefinitely. What we're doing is enabling sys admins to do much more fine-grained analysis of the data," he added.

These newer features show that Oracle is slowly waking up to the capabilities of more modern, and frequently lower-cost, software tools.

Oracle's assumption is that by handling as much as possible in-house, it can reduce complexity for its clients, many of which lack the internal IT budget to build such tooling themselves.

In other IT news

Australia's iVEC supercomputing facility is looking for early adopters to put its new Cray-designed-and-built machine through its paces.

iVEC's current Magnus supercomputer, a Cray XC30 with 208 nodes, each carrying two 8-core Intel Sandy Bridge processors, comprises a total of 3,328 CPU cores delivering around 69 TFLOPS.

Beginning early next month, the facility kicks off a serious upgrade to bring Magnus to the petascale level, and Australia has indicated it wants to speed the process along.

The machine will grow from its current two cabinets to eight, housing 1,488 nodes in total, each node carrying two 12-core Haswell chips and 64 GB of RAM, taking Magnus to more than 35,000 cores and 95 TB of memory.

Cray's Aries interconnect will hook the whole thing together, and there will be 3 PB of storage with a peak bandwidth of 70 GB/s.
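
Those headline totals are consistent with the per-node figures:

    # Sanity check of the published upgrade figures using the per-node specs.
    nodes = 1488
    cores = nodes * 2 * 12     # two 12-core Haswell chips: 35,712 cores
    mem_gb = nodes * 64        # 95,232 GB, i.e. the quoted 95 TB
    print(cores, "cores and", mem_gb, "GB of RAM")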

During its August acceptance testing, iVEC wants “Petascale Pioneers” to give the machine some heat, and it's offering around 100 million core hours for testers. It's seeking projects that will stretch all aspects of Magnus, from its workload-handling to its communications.

In an invitation sent to researchers, iVEC is asking for workloads that consume more than 10 percent of the machine in a single run to demonstrate grand-challenge scientific problems; communications-heavy jobs that exploit the “speed and bandwidth of the Cray Aries interconnect and Dragonfly topology” across a substantial portion of the machine; or work showing how the new machine can improve the performance and scalability of existing or new applications.

Overall technical support and training will be available during the pioneer phase, and academics are invited to express interest and ask questions of iVEC via submissions@ivec.org.

In other IT news

Server maker Supermicro is now building a 'cold storage' appliance aimed at data that must be kept for a very long time, can't be deleted but is accessed very rarely.

That new type of storage server "minimizes power consumption and reduces cooling requirements by spinning down or powering off idle drives and managing specific data streams via Supermicro’s compact, low-power Intel Atom C2750-based serverboard for cold storage," according to Supermicro.

The new server comes in a 1U, 32-inch-deep rack enclosure holding a dozen 4 TB or 6 TB 3.5-inch drives, driven by Atom or, for more data-intensive apps, Xeon processors. The options are listed below, with a rough raw-capacity calculation after the list:

  • A1SA7-2550: 4-core Atom CPU, up to 64 GB, GbE, redundant 400 W power supplies. For cloud-based cold storage with drive spin-down.
  • A1SA7-2750: 8-core Atom CPU, up to 64 GB, 10 GbE add-on card, redundant 400 W power supplies. For online, low-tier, scale-out storage.
  • X10SL7: 4-core Xeon E3-1200 v3 series CPU, up to 32 GB, 10 GbE add-on card, redundant 400 W power supplies. A big data or data lake platform for scale-out and object storage in cloud environments.
  • X9SRH-TPF: 6- to 12-core Xeon E5-1600/2600 v2 series CPU, up to 256 GB ECC LRDIMM/RDIMM or 64 GB ECC UDIMM, onboard 10 GbE SFP+, redundant 600 W power supplies. For big data analytics and native Hadoop 2.0 real-time applications.
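
For a sense of scale, here is the promised back-of-envelope raw-capacity calculation, using the 12-drive chassis above and assuming a standard 42U rack:

    # Raw, pre-redundancy capacity of the 1U, 12-drive chassis described above.
    drives, drive_tb = 12, 6                 # using the larger 6 TB option
    tb_per_u = drives * drive_tb             # 72 TB raw per rack unit
    pb_per_rack = tb_per_u * 42 / 1000.0     # about 3 PB in an assumed 42U rack
    print(tb_per_u, "TB per U, about", round(pb_per_rack, 1), "PB per 42U rack")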

This is a bit like Facebook's OCP OpenVault cold-storage product, a 2U, 30-drive enclosure with two x86 server nodes, to be built by Foxconn.

But for its part, the Supermicro product is smaller, both physically and capacity-wise, and it also consumes less power and runs cooler.

Stifel managing director Aaron Rakers said: "We believe that Supermicro is finding some traction in cold storage (high-capacity) discussions, while also benefiting from supplier partnerships with emerging vendors such as Nimble Storage, Nutanix, Nexenta, Coho Data, Scale Computing, and a few others."

In other IT news

HP said earlier this morning that it has added two SAP-specific implementations to its ConvergedSystem product line: the CS-900 for enterprise customers and the CS-500 for smaller companies.

HP says that with configurations purpose-built for specific application environments such as this one, it can get customers up and running on new servers without system admins having to build and tune everything for their own needs.

The company has already rolled out ConvergedSystem variants for Citrix, for Vertica and Microsoft big data environments, and for VMware virtualization environments.

Born out of HP and SAP's Project Kraken, the SAP HANA-pitched CS-900 enterprise platform offers 12 TB per node as a single pool of memory, and can be scaled out to 80 TB across multiple nodes.
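
A quick arithmetic check of that scale-out claim:

    # Nodes needed to reach the quoted 80 TB at 12 TB per node.
    import math
    nodes = math.ceil(80 / 12)   # 7 nodes, for 84 TB installed
    print(nodes, "nodes for 80 TB at 12 TB per node")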

According to Raj Thakur, director and general manager for HP Servers & Converged Systems in the company's South Pacific Enterprise Group, “the system is optimized for in-memory computing and we have added the smarts from our Superdome Integrity line to provide high availability.”

In particular, the Superdome's hardware-level partitioning is implemented in the CS-900 to provide better uptime and availability, said Thakur.

“To be sure, the hardware partitioning is about twenty times more reliable than software-only virtualization,” Thakur added, saying that workloads will still move automatically between partitioned process chunks.

“SAP HANA has a requirement for high in-memory computing capacity,” he continued, “so the CS-900 is pre-certified for that configuration. Not all workloads need or are tuned to run in-memory database and analytics workloads.”

Since not everybody needs between 6 TB and 12 TB of in-memory computing power, there's also an SMB-level variant, the CS-500, a two-to-four-socket server running between 256 GB and 1 TB of RAM per node, depending on the customer's needs.

Source: Oracle.
