Microsoft offers a new version of its EMET app
August 4, 2014
Microsoft said this morning that it's now offering a new version of its EMET app (Enhanced
Mitigation Experience Toolkit), which can be downloaded now.
The software behemoth recommends deploying EMET as a frontline defense against
various potential attacks, so the release of a new version is welcome.
The first of the two enhancements that Microsoft is talking up the loudest in this new
implementation is an improved Attack Surface Reduction (ASR) tool.
The ASR tool is configured to block some modules and plug-ins from being loaded by Internet
Explorer while navigating to websites belonging to the Internet Zone.
The new ASR will also block the Adobe Flash plug-in from being loaded by Microsoft Word,
Excel and PowerPoint.
If you really want Flash to load, EMET can also be tweaked to make it so, but that's not something
that Microsoft recommends.
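The rule the article describes is easy to picture in code. The sketch below is purely illustrative -- EMET itself is configured through its GUI or Group Policy, not programmatically, and the module names here are hypothetical -- but it models the ASR decision: block certain plug-ins when the page being visited falls in the Internet Zone, with an opt-out list for those who insist on Flash.

```python
# Illustrative model of the ASR rule described above; not EMET's actual API.
INTERNET_ZONE = "internet"                     # vs. "intranet", "trusted", etc.
BLOCKED_MODULES = {"flash.ocx", "npjpi.dll"}   # hypothetical plug-in names

def should_block(module: str, zone: str, allow_list: frozenset = frozenset()) -> bool:
    """Return True if the ASR rule would stop this module from loading."""
    if module in allow_list:                   # the opt-out Microsoft discourages
        return False
    return zone == INTERNET_ZONE and module in BLOCKED_MODULES

print(should_block("flash.ocx", "internet"))   # True: blocked in the Internet Zone
print(should_block("flash.ocx", "intranet"))   # False: trusted zone, allowed
```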
The company is also keen on Export Address Table Filtering Plus (EAF+), a new revision that performs additional
integrity checks on stack registers and stack limits when export tables are read from certain lower-level modules.
There's also a new EMET service that takes care of evaluating the Certificate Trust rules, dispatching
EMET Agents in each user's session, and automatically applying Group Policy settings as they are pushed out.
In other IT news
In an unexpected decision, HP has given OpenVMS a new lease on life, effectively reversing last
year's move to kill the server operating system. HP hasn't changed its mind about its latest OpenVMS
roadmap, which has it ending standard support for some versions of the OS in 2015 and then abandoning
support entirely by 2020.
Instead, it has granted an exclusive license to another company, VMS Software Inc, to take over
after its own support ends.
However, that doesn't mean merely providing hospice care for older systems. VSI says it plans to
produce new builds of OpenVMS for more recent hardware architectures, beginning with the latest Intel
Itanium chips and eventually crossing over to the x86 architecture.
"We are grateful and thrilled that HP has granted us stewardship over the future of this operating
system," VSI CEO Duane Harris said in a press release.
"Our passion for taking OpenVMS into future decades is only matched by the many developers and customers
who have relied on it to faithfully run their mission critical applications over the last thirty years," Harris added.
The move comes just a few days after a French OpenVMS user group published an open letter to HP
urging it to reconsider its decision to abandon the operating system, which began life as VAX/VMS
in 1977, and has survived in one form or another ever since.
"It is impossible to think that HP could not actively maintain support for dependable systems
that are such an important part of its portfolio," read the letter, which was penned by Gerard Calliet
of HP-Interex France.
Under the new arrangement, current customers will continue to receive support for OpenVMS 8.4 and
earlier from HP, as specified in the company's roadmap.
VSI doesn't plan to offer extended support for those versions once HP's support expires.
Rather, it will use OpenVMS source code licensed from HP to produce new versions of the OS.
These will also include support for newer hardware, beginning with HP Integrity i4 servers
based on the Intel Itanium 9500 Poulson CPUs.
The forthcoming "Kittson" Itaniums will be next on VSI's list, followed at some point by a version
for x86 processors, which VSI has dubbed OpenVMS v.Next.
VSI has committed to providing standard support for each new OpenVMS version for a minimum
of five years.
No release dates have been given for any of these builds, but VSI has published a preliminary
rolling roadmap explaining its plans, which it says it will update every three months.
VSI was founded by the principals of Nemonix Engineering, a company that has provided support
services for OpenVMS systems for the last thirty years.
The company, which is based in Bolton, Massachusetts, says it has assembled an onshore team of
veteran OpenVMS developers, many of whom have experience dating back to the Digital Equipment
Corporation days.
In announcing the partnership, Ric Lewis, vice president and general manager of HP's enterprise
server division, said that HP customers now have "a complete long-term solution" for their OpenVMS systems.
"HP customers who would like to deploy OpenVMS on current and future HP technologies now have
additional options, and those who choose to stay on their existing OpenVMS platform will be
protected by the extended HP support services announced previously without interruption or change in
process," Lewis said.
In other IT news
A DARPA-driven project based on OpenStack has been revealed to the IT community, with the
claim that it will eventually lead to sub-second provisioning for connectivity between clouds.
The world is already familiar with the concept of elastic clouds, with Amazon, Google and Microsoft
offering some variant on such themes for enterprise customers.
However, cloud-to-cloud elasticity is a different matter altogether, since carriers and
their optical networks have to be pulled into the stack, and that's where the similarity ends.
To be sure, IBM, AT&T and Applied Communications Sciences have been working together on the project
for the past three years or so; IBM describes it as a proof of concept demonstrating “a cloud system
that monitors and automatically scales the network up or down as various applications need them.”
The basic signalling is quite simple, said Doug Freimuth of IBM Research. “It simply works by the
cloud data centre sending a signal to a network controller that describes the bandwidth needs, and which
cloud data centres need to connect”.
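That signalling can be sketched in a few lines. The example below is a minimal, hypothetical illustration of the exchange Freimuth describes -- a data centre tells the network controller how much bandwidth it needs and which peer it wants to reach -- and every field name is an assumption, since the prototype's actual message format wasn't published.

```python
import json

def bandwidth_request(src_dc: str, dst_dc: str, gbps: float) -> str:
    """Build the message a cloud data centre would send to the network controller."""
    return json.dumps({
        "source": src_dc,            # hypothetical field names throughout
        "destination": dst_dc,
        "bandwidth_gbps": gbps,
    })

def handle_request(msg: str) -> dict:
    """Controller side: parse the request and decide on a provisioning action."""
    req = json.loads(msg)
    # A real controller would program the carrier's optical layer here;
    # this sketch just echoes back an 'accepted' decision.
    return {"status": "accepted", **req}

reply = handle_request(bandwidth_request("dc-east", "dc-west", 10.0))
print(reply["status"])   # accepted
```

The point of the orchestrator mentioned next is to sit between two such data centres and broker these requests end to end.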
For that, Freimuth says, the system needs an orchestrator between data centres, akin to the in-data-centre
orchestration already pursued by cloud vendors and the open source world alike.
According to R&D Magazine, AT&T developed the bandwidth-on-demand networking architecture, while
ACS provided its optical-layer routing and signalling technology.
AT&T Labs' Robert Doverspike said the proof-of-concept combined SDN implementations with
“advanced, cost-efficient network routing in a realistic carrier network environment”.
“This prototype was implemented on OpenStack, an open-source cloud-computing platform for public
and private clouds, elastically provisioning WAN connectivity and placing virtual machines between
two clouds for the purpose of load balancing virtual network functions,” R&D Mag said.
The prototype was developed under DARPA's seven-year-old Coronet program, of which we should
get more news soon.
In other IT news
Cisco has written an interesting white paper in which it suggests that network virtualization
can produce unwanted consequences and that there are no immediate guarantees that the whole
process will work going forward.
The scathing report doesn't mince words. “Virtualization isn't a new concept but it's now being
applied to network functions such as those in switches, routers and the myriad other network appliances
deployed,” the white paper suggests, before going on to observe that “the early days
of server virtualization had a dramatic impact on lowering server capital expenditures”.
That Cisco would come out with such revelations isn't surprising, considering that network virtualization
has been cutting into the company's revenues for the past two years.
However, Cisco says those savings often didn't last, because “operational
costs skyrocketed as more labor-intensive and complex processes were required in the end.”
That is difficult to dispute: all manner of vendors and products have emerged to manage
large collections of virtual servers. And there are other direct and indirect costs as well.
Network virtualization, the paper continues, should be approached with that experience in mind.
“Furthermore, reducing complexity through automation and management will speed up operations to a certain degree,
contribute to service agility, and lower operational costs,” the paper adds.
Some of the arguments that follow are not sophisticated. Cisco says you probably wouldn't
bother with network virtualization for seldom-accessed resources, but landline and wireless
carriers will do well to put it to work for this month's newly released movie downloads.
The paper also preaches automation, optimization and the careful design of virtual network
resources, so it makes for good reading.
The paper is more interesting for the fact that Cisco seems to see the need to hose down enthusiasm
for network virtualization despite being in the market.
Another eyebrow-raiser is the mention of server virtualization's dark side, as those pitfalls
were often discovered by early VMware adopters.
Cisco and VMware used to be the best buddies and are still happy to be seen in public together
when discussing the VCE joint venture or the NetApp/Cisco/VMware Flexpod stack-in-a-box/reference
architecture, but are they still best buddies after this?
Behind the scenes, we can of course understand that each would consider the other an enemy when
it comes to network virtualization, and that would be fair to assume.
Cisco must know that by pointing out the messes server virtualization created over the past two
years, it is spreading FUD-by-association in the direction of VMware's NSX network virtualization
efforts. It will be interesting to follow these developments over the next year or so.