
Algorithmic Intelligence and the policy process

At every point in time, people have probably claimed that ‘we are going through the most important moment in the history of mankind’. So I will go ahead and say that this is, at the very least, one of the most important moments in the history of mankind. With the growth of computational power, algorithms now shape many of the processes that make up our everyday lives. There are fewer and fewer situations in which our decisions are not influenced by a computer program ‘suggesting’ how we should go about our day: spam filters, news feeds, route suggestions, music recommendations, invasive advertisements, even which coffee we might like best. The amount of data generated each day is now so large that we need to think about how it is used for us, or can be used against us.

There are two sides to this coin: as end users, we need to understand how algorithmic decisions affect us, both positively and negatively; and from a policy perspective, there needs to be greater awareness of the need to review how algorithms are used. This post deals mostly with the second part.

The question I would like to consider is: how should regulation align the expectations of the population (what should policy incentives be aligned with on the end-user side, or even, should they be?) with those of service providers, both government and private? Let me use a couple of examples.

A couple of weeks ago, a self-driving car was being tested in Tempe, Arizona, in the United States. As the car drove down the road, a woman named Elaine Herzberg crossed the street and was struck by the vehicle. She died a few hours later in hospital from her injuries. The car showed no sign of slowing down, and the accompanying test driver, allegedly, could do little to prevent the accident. The released video shows the car driving down the road when Elaine suddenly appears in front of the vehicle1.

Tammy Dobbs, a senior citizen with cerebral palsy, used to receive 40 hours of government-provided assistance every week. After she filled out a regular check-up survey, the person administering it told Tammy that her hours of service would be reduced by 8, to 32 hours a week. Nothing in particular had changed in Tammy’s life, but the way the system makes decisions implied that she should get less assistance. The story (which you can read in further detail at the link below) ends with her and her family finding a lawyer specialising in these kinds of cases, who discovered that the hour-allocating algorithm had incorrectly treated Tammy’s situation as an improvement and reduced her hours2.

Tammy’s situation may feel more familiar, in particular to anyone who has ever applied for a loan at a bank and waited to see whether the magic number would allow them to buy a house. Elaine’s situation was, unfortunately, fatal, and should lead to a better understanding of how we allow technological change to be introduced into society. One of the underlying issues is that complex systems producing binary (or very discrete) decisions pose a problem for everyone who stands on the verge of the decision: to reduce the number of hours or not, to reduce the speed or not.
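The knife-edge problem can be made concrete with a toy sketch. The score, threshold, and hour allocations below are invented for illustration only; they are not taken from any real benefits system. The point is simply that a continuous input collapsed through a cutoff produces very different outcomes for two nearly identical cases:

```python
def allocate_hours(need_score: float, threshold: float = 0.5) -> int:
    """Toy allocator: a continuous 'need score' is collapsed
    into one of two discrete weekly hour allocations."""
    return 40 if need_score >= threshold else 32

# Two people in almost identical circumstances land on opposite
# sides of the cutoff and receive different outcomes.
print(allocate_hours(0.501))  # 40 hours
print(allocate_hours(0.499))  # 32 hours
```

For the person whose score sits near the threshold, a tiny and possibly meaningless change in the input flips the decision entirely, which is exactly what makes such systems hard to contest from the outside.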

The common thread I would like to weave through this article is that both algorithms use the available information to try to answer a question. In the first case: what speed should the vehicle travel to get from A to B? In the second: can the cost of service provision be reduced given the patient’s condition? At least, this is what they seem to be asking. Are these the right questions? Are they comprehensive enough? Do they give enough weight to ALL of the available information? Are they open to the possibility of a black swan, and if so, how can they react? What is the role of government, of private companies, and of individuals in regulation and self-regulation?

As more of these technologies enter our lives, I wonder whether we are aware enough of their impact. My understanding is that we are only now coming to terms with what it means to use technology as a production process, in the classic economic sense. As such, we need to see how these technologies fit into the rest of the economy, and how we fit into that process.

On the 9th of April, the Information Commissioner’s Office (ICO) of the United Kingdom held its Data Practitioner’s Conference3. Earlier this year, the head of the Office attended a hearing in parliament to discuss algorithms in decision making4 5. The interview is very insightful. At the end of spring, the ICO is expected to publish a report with the results of its enquiry. It seems there are people in a powerful government thinking about these matters. More people should be doing the same.