Developer wins award for his work in protecting code
June 5, 2014
For the past few months, better code obfuscation has attracted the attention of the prestigious
Association for Computing Machinery, which has singled out a developer working at IBM's T.J. Watson Research Center
with an award for his work.
On any given day, protecting programming code, even as a binary, from being reverse-engineered
is very difficult: any method that encrypts the code has to keep its functionality in place, and
decrypting the code for execution has to be extremely fast for the protection to work seamlessly.
Sanjam Garg, an alumnus of the Indian Institute of Technology Delhi, claims to have cracked
that problem in his recently published paper, Candidate Multilinear Maps from Ideal Lattices.
As that paper explains, bilinear maps are well known, and their applications are “too numerous”
to list; tripartite Diffie-Hellman and identity-based encryption are two examples.
Extending the concept to multilinear maps had long been theorized but never achieved;
Garg's paper presents the first candidate construction.
That work was then expanded in a collaboration between Garg and researchers from Microsoft, Boston University
and UCLA, which demonstrated that Garg's concepts are workable for program-code obfuscation.
As they put it in the paper's abstract, Garg's work provides a “candidate obfuscator that
cannot be broken by algebraic attacks”.
As the ACM notes: “Garg described new mathematical tools that serve as key ingredients for transforming
a program into a jigsaw puzzle of encrypted pieces.
Corresponding to each input is a unique set of puzzle pieces that, when assembled,
reveal the output of the program.
Security of the obfuscated program hinges on the fact that illegitimate combinations of the
puzzle pieces do not reveal anything.”
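The flavor of that jigsaw metaphor can be illustrated with a toy XOR secret-sharing sketch. This is emphatically not Garg's lattice-based construction, just an analogy: each input gets its own matched pair of "pieces," and only the legitimate pair, assembled, reveals the output.

```python
import os

def share_outputs(table):
    """Split each output of a lookup table into two XOR shares.

    `table` maps inputs to one-byte outputs. For every input we store a
    random mask and the masked output; XORing a matching pair (the
    "legitimate combination") recovers the real output, while mixing
    pieces from different inputs yields an unrelated byte. Plain XOR
    secret sharing -- a toy stand-in, not Garg's construction.
    """
    pieces = {}
    for x, y in table.items():
        mask = os.urandom(1)[0]        # one random puzzle piece
        pieces[x] = (mask, mask ^ y)   # its matching piece
    return pieces

def evaluate(pieces, x):
    a, b = pieces[x]
    return a ^ b                       # assembling the right pieces

table = {0: 7, 1: 42, 2: 99}           # a tiny "program" as a truth table
pieces = share_outputs(table)
assert all(evaluate(pieces, x) == y for x, y in table.items())
```

The real result replaces this trivial sharing with multilinear maps over ideal lattices, which is what makes the claim of resisting algebraic attacks meaningful.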
In other IT news
PCIe-card maker Fusion-io has designed new ioMemory hardware and is calling it the Atomic Series. This
third-generation architecture stores up to 6.4 TB of data, while the existing ioScale card stores 3.2 TB.
That’s made possible by using 2Xnm-class MLC NAND (20 to 29 nm). Fusion-io says the new card
is faster, possibly twice as fast as the ioScale product.
The new hardware uses a PCIe 2.0 x8 interface card and has self-healing features. Fusion-io says
the architecture is simplified and the product offers "the highest transaction rate per gigabyte for
everything from read intensive workflows to mixed workloads".
The two Atomic products replace both the ioScale and ioDrive II products. This looks like a
solid announcement which should help Fusion-io grow its business.
We'll have to wait and see how its competitors Intel, WD, Micron and Seagate respond to this. And we
suspect it shouldn't take too long either.
With up to 6.4 TB of capacity, and assuming affordability concerns are met, we could be seeing the
gradual end of disk use on servers needing faster access to data.
There is a five-year warranty for the PX-600 and a three-year one for the SX-300. The Atomic Series
products are available from Cisco, Dell, HP, IBM and Supermicro, as well as from Fusion-io’s certified resellers.
There's no word on whether Facebook and Apple have had a change of heart on hardware purchase orders. We'll keep
you posted on that.
In other IT news
Oracle said this morning that it has injected some powerful new capabilities into its
Oracle Enterprise Manager software.
It, like the rest of Oracle's newer products, is representative of the reinvention going on
at the company as it wakes up to concepts such as multi-tenancy and cloud-based services.
New features in Oracle Enterprise Manager 12c Release 4 include a service catalog, a Java Virtual Machine
diagnostic service for developers, faster software provisioning, a data warehouse for database
performance information, and a few other elements.
The company has launched a Cloud Services catalog that lets system admins select the specific
capabilities, such as high availability, that they want in a newly provisioned database.
While a convenience rather than a deep technical change, this type of self-service catalog
has long been present in public clouds, where it has lowered the learning curve.
Taken as a whole, the new features see the company combine capabilities that enterprise
customers could otherwise stitch together from a bevy of open source technologies with
some of its own proprietary capabilities.
Two other features stand out as being particularly useful for developers. One is the "JVM Diagnostics
as a Service" component, which lets application developers test pre-production apps for common
problems in the JVM, like garbage-collection pauses or heap errors.
This should help companies spot some underlying issues before they get bigger and cause a
cascading system failure.
"We try to use these upfront visualizations with selective drilldown to make it human consumable,"
Oracle's senior director for systems and application management Dan Koloski said.
"Our approach with JVMD is we're trying to simplify wherever possible. We can essentially
say to a less sophisticated consumer 'here's the way we would troubleshoot this problem'," he added.
Oracle hopes that the JVMD service will get developers "used to the nomenclature of operating production
JVMs," says Koloski.
This may help relieve some of the pressure on those poor developers employed by
organizations to troubleshoot JVM-based issues in apps.
Another important feature in this new release is a Performance Warehouse, which takes Oracle "Automatic
Workload Repository" (AWR) data from a live database and places it into a warehouse where system admins
can run analytics on it to check for performance issues.
"That allows AWR data to be captured over a wide timespan and run analytics against it to
do a wide amount of performance analysis," Koloski explained.
"What we've done with the performance warehouse is taken those snapshots offline allowing
them to be stored indefinitely. What we're doing is enabling sys admins to do much more fine-grained
analysis of the data," he added.
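The kind of fine-grained analysis an admin might run over warehoused snapshots can be sketched as follows. The snapshot fields here (interval timestamps and accumulated database time) are illustrative assumptions, not Oracle's actual AWR schema.

```python
def busiest_intervals(snapshots, top_n=3):
    """Rank stored AWR-style snapshots by database time per elapsed
    second -- a rough average-active-sessions load measure that becomes
    possible once snapshots are retained long-term. The field names
    are illustrative, not Oracle's actual AWR schema.
    """
    def load(s):
        elapsed = s["end"] - s["begin"]   # seconds covered by the snapshot
        return s["db_time"] / elapsed     # avg active sessions in that window
    return sorted(snapshots, key=load, reverse=True)[:top_n]

# Three hypothetical hourly snapshots; the middle one is a load spike.
snaps = [
    {"begin": 0,     "end": 3600,  "db_time": 7200},   # 2.0 avg sessions
    {"begin": 3600,  "end": 7200,  "db_time": 21600},  # 6.0 -- the spike
    {"begin": 7200,  "end": 10800, "db_time": 3600},   # 1.0
]
worst = busiest_intervals(snaps, top_n=1)[0]
```

With snapshots kept indefinitely, the same ranking could span months rather than the short retention window of a live database.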
These newer features show that Oracle is slowly waking up to the capabilities of more modern and
frequently lower-cost software tools.
Oracle's assumption is that by handling as much as possible in-house, it can reduce complexity
for its clients, many of which lack the internal IT budget to do this themselves.
In other IT news
Australia's iVEC supercomputing facility is looking for early adopters to put its new Cray-designed
and -built machine through its paces.
iVEC's current Magnus supercomputer, a Cray XC30 with 104 blades (four nodes per blade, two 8-core
Intel Sandy Bridge processors), comprises a total of 3,328 CPU cores delivering around 69 TFLOPS.
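The quoted figure checks out as a peak number from the core count alone, assuming 8 double-precision FLOPs per cycle per Sandy Bridge core (AVX) and a clock around 2.6 GHz; both are our assumptions, since the article gives neither.

```python
# Back-of-the-envelope check of the "around 69 TFLOPS" claim.
cores = 3328
flops_per_cycle = 8      # AVX on Sandy Bridge: 4 adds + 4 muls (assumed)
clock_hz = 2.6e9         # assumed clock speed, not stated in the article
peak_tflops = cores * flops_per_cycle * clock_hz / 1e12
print(round(peak_tflops, 1))   # ~69.2 TFLOPS
```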
Beginning early next month, the facility kicks off a serious upgrade to bring Magnus to the
petascale level, and Australia has indicated it wants to speed the process.
It will grow from its current two cabinets to eight, with a total of 1,488 nodes, each holding
two 12-core Haswell chips and 64 GB of RAM, taking Magnus to more than 35,000 cores and 95 TB of memory.
Cray's Aries interconnect will tie the whole thing together, and there'll be 3 PB
of storage that can sustain a peak bandwidth of 70 GB/s.
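The upgraded totals follow directly from the node count, which is a quick sanity check worth doing:

```python
# Checking the upgraded Magnus figures: 1,488 nodes, each with two
# 12-core Haswell chips and 64 GB of RAM.
nodes = 1488
cores = nodes * 2 * 12           # 35,712 -- "more than 35,000 cores"
memory_tb = nodes * 64 / 1000    # 95.2 TB -- the quoted 95 TB

# At the claimed 70 GB/s peak, scanning the full 3 PB store (3e6 GB)
# would still take roughly half a day.
scan_hours = 3e6 / 70 / 3600     # about 11.9 hours
```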
During its August acceptance testing, iVEC wants “Petascale Pioneers” to give the machine
some heat, and it's offering around 100 million core hours for testers. It's seeking projects
that will stretch all aspects of Magnus, from its workload-handling to its communications.
In an invitation sent to researchers, iVEC is asking for: loads that use more than 10 per cent
of the machine on a given run, demonstrating grand-challenge scientific problems; communications-heavy
operations that exploit the “speed and bandwidth of the Cray Aries interconnect and Dragonfly topology”
across a substantial portion of the machine; or work on how the new machine can improve the performance
and scalability of existing or new applications.
Overall technical support and training will be available during the pioneer phase, and academics
are invited to express interest and ask questions of iVEC via firstname.lastname@example.org.
In other IT news
Server maker Supermicro is now building a 'cold storage' appliance aimed at data that must be kept for a
very long time and can't be deleted, but is accessed very rarely.
That new type of storage server "minimizes power consumption and reduces cooling requirements by spinning
down or powering off idle drives and managing specific data streams via Supermicro’s compact, low-power Intel
Atom C2750-based serverboard for cold storage," according to Supermicro.
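The spin-down policy Supermicro describes can be sketched as pure decision logic; the 30-minute threshold and drive names below are our assumptions for illustration, not Supermicro's actual policy, and a real appliance would follow up by issuing a standby command to each flagged drive (e.g. `hdparm -y` on Linux).

```python
import time

def drives_to_spin_down(last_access, now=None, idle_secs=1800):
    """Return the drives idle longer than `idle_secs`, the candidates
    for spin-down. Threshold and device names are assumptions for
    illustration; the real appliance's policy isn't public.
    """
    now = time.time() if now is None else now
    return sorted(d for d, t in last_access.items() if now - t > idle_secs)

# Example: sdb and sdc untouched for over an hour get flagged; the
# recently accessed sda stays spinning.
accesses = {"sda": 1_000_000, "sdb": 996_000, "sdc": 995_000}
idle = drives_to_spin_down(accesses, now=1_000_000)
```

Keeping the policy as pure logic, separate from the command that actually parks the heads, is what makes this kind of power management easy to test.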
The new server comes in a 1U, 32-inch-deep rack enclosure holding a dozen 4 TB or 6 TB 3.5-inch
drives, with Atom or, for more data-intensive apps, Xeon processors. The various options are:
A1SA7-2550: 4-core Atom CPU, up to 64 GB, GbE, redundant 400 W power supplies. For cloud-based cold storage
with drive spin-down.
A1SA7-2750: 8-core Atom CPU, up to 64 GB, 10GbE add-on card, redundant 400 W power supplies. For online, low-tier storage.
X10SL7: 4-core Xeon E3-1200 v3 series CPU, up to 32 GB, 10GbE add-on card, redundant 400 W power supplies. A big
data or data lake platform for scale-out and object storage in cloud environments.
X9SRH-TPF: 6- to 12-core Xeon E5-1600/2600 v2 series CPU, up to 256 GB ECC LRDIMM/RDIMM or 64 GB ECC UDIMM, onboard 10GbE SFP+,
redundant 600 W power supplies. For big data analytics and native Hadoop 2.0 real-time applications.
This is a bit like Facebook's OCP OpenVault cold storage product, a 2U, 30-drive enclosure with two x86
server nodes, to be built by Foxconn.
For its part, the Supermicro product is smaller, both physically and in capacity, while also consuming
less power and running cooler.