Information Technology News.

Is it time to start regulating algorithms? Some people seem to think so


April 17, 2017

Let's be honest: algorithms are everywhere in our lives these days, and their reach is growing rapidly, not just at Google but across so many other services and venues that it can be mind-boggling at times.

Wikipedia defines an algorithm as a self-contained sequence of actions to be performed by a piece of software. Algorithms can carry out calculation, data processing and automated reasoning tasks, among others.

An algorithm is an effective method that can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input, the various instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing an output and terminating at a final ending state.
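The definition above can be made concrete with a classic example. The sketch below (in Python, chosen purely for readability) implements Euclid's algorithm for the greatest common divisor: it starts from an initial state, moves through a finite number of well-defined successive states, and terminates with an output.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: each loop iteration is one well-defined
    successor state; the remainder shrinks, so termination is guaranteed."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

Every property in the formal definition is visible here: a finite description, a well-defined state transition, and a guaranteed final state producing the output.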

So the big question is: is it time to start regulating algorithms? The concept has gained traction in a few countries, and not without cause. The idea behind such regulation is much in line with other laws: without oversight and legal accountability, algorithms can do real damage.

Such concerns are not limited to a few observers. UCL's Dr Hannah Fry warned last month that we need to be careful about algorithms that operate behind closed doors. And Fry has a good point.

Dr Fry says the problem is that without visibility into how such algorithms function, you can't argue against them when they produce doubtful or inaccurate results.

"If their assumptions and biases aren't made open to scrutiny, then you're simply placing a system in the hands of a few programmers who have no accountability for the decisions that they're making," Fry asserted.

"An example I often use in my speeches is of a young man who was convicted of the statutory rape of a young girl. It was supposedly a consensual act, but still a statutory crime and his data was put into an algorithm and that was used in his sentencing. Because he was so young and it was a sex crime, it judged him to have a higher rate of offending and so he got a custodial sentence," she asserted.

"But if he had been 36 instead of 19, he would have received a more lenient sentence, though by any reasonable metric, one might expect a 36-year-old to receive a more punitive sentence," she added.

She explained how algorithms that predict re-offending rates for individuals are being used in sentencing in the United States, where the analysis of such data has very serious consequences. There's a great deal of controversy right now in the legal system when it comes to algorithms.
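Fry's sentencing example can be illustrated with a deliberately simplified, entirely hypothetical risk score; this is not the model used in any real court, only a sketch of how a single input such as age can flip an outcome at a fixed threshold.

```python
# Hypothetical toy risk score -- NOT any real sentencing model --
# illustrating how one input (age) can change the outcome.
def risk_score(age: int, offence_is_sexual: bool) -> float:
    score = 0.2                  # assumed baseline
    if offence_is_sexual:
        score += 0.3             # assumed offence-type weight
    if age < 25:
        score += 0.3             # youth treated as a predictor of re-offending
    return score

CUSTODY_THRESHOLD = 0.6          # assumed cut-off for a custodial sentence

print(risk_score(19, True) >= CUSTODY_THRESHOLD)  # True: custodial
print(risk_score(36, True) >= CUSTODY_THRESHOLD)  # False: more lenient
```

With identical facts except for age, the 19-year-old crosses the threshold and the 36-year-old does not, which is precisely the inversion Fry describes, and precisely the kind of assumption that cannot be challenged while the model stays closed.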

Some legislative attention has already been paid, however. In 2016, Chi Onwurah, Britain's shadow minister for industrial strategy, told The Guardian that "algorithms aren't above the law" and that their outcomes are already regulated: "The companies which use them have to meet employment law and competition law." The question, Onwurah underlined, is how to make that regulation effective when we can't even see the algorithm.

And Onwurah's opinion is shared by many observers. From a different angle, the European Union's commissioner for competition, Margrethe Vestager, has urged competition enforcers to keep an eye out for cartels that use software "to work more effectively" as cartels.

So what is regulation, and how do we apply it to algorithms? The immediate answer to many of these concerns is to reveal the biases in algorithms by opening them up to public scrutiny. This has been among the most fundamental of political activities since the Enlightenment: observing and measuring the expression of power in society.

If the last fifteen to twenty years of open-source software have taught us anything, it's that simple availability does not incentivise investigation. Long-standing security vulnerabilities are still regularly discovered in software that has been in wide use for years, plenty of time for them to have been found earlier; Heartbleed, for example, sat undetected in OpenSSL for over two years before its discovery in 2014.

Additionally, the growing popularity of machine learning will make this problem more acute. When an organization cannot state precisely what it wants from an algorithm, how can it measure what the algorithm actually does? And how will unintended results be noticed and reported to the regulating authorities?

One approach could be to require organizations using algorithms to retain records of all the data they feed in, and to reappraise previous findings whenever the algorithm is updated. Setting this up would be expensive, and the ongoing cost of reappraisal could be higher still.

Whatever the immediate outcome, software algorithms are woven ever more deeply into our daily lives, and action is needed quickly to prevent them from taking over our existence and our social conscience.

Source: University College London.



       © IT Direction. All rights reserved.