I spent last Friday at a seminar about algorithms. We discussed everything from health technology and citizenship in the digital era to the structure of neural nets.
I got the sense that talk about algorithms is both too hyperbolic and not serious enough at the same time. Technology is not magic: it works, or fails to work, in very specific ways, and knowing the difference helps avoid the kind of superstition often attached to the power of algorithms. At the same time, we have to acknowledge that no piece of software is neutral or separate from our everyday lives. Algorithms will affect how societies work – that is the whole point of using algorithms in problem-solving.
For example, media companies will use algorithms to sort, filter and promote content because it makes their service better, but it is not a given that the solution is always in line with journalistic integrity. There is no reason to believe that using computing to make news better is automatically in conflict with journalistic values, but that is a question that needs to be examined and answered in each case. Assuming either answer in advance means falling into simple technological determinism, whether optimistic or pessimistic.
Often, the solution could be some kind of algorithmic transparency, where it is possible to examine how, and based on what criteria, an algorithm makes its decisions. This might not always be easy – in the case of deep neural nets, for instance – but it would be a start. More difficult is the current situation, where most of the important algorithms that affect our lives are proprietary and closed. Why does Google show you exactly these results? What is Facebook showing you, and what is it hiding? It’s a trade secret, so we can’t know for sure.
One simple idea presented at the seminar by Veikko Eranti stuck with me and might prove useful for thinking later. He emphasized that algorithms will impact human political agency in two ways. On one hand, those with access, resources and understanding will be able to use algorithms to amplify their political agency by broadcasting, manipulating and generally affecting even larger populations than before. It’s not new that those in power can use their power to gain even more power, but it might be even less transparent than it used to be, because it’s harder to keep track of things like microtargeted Facebook ads.
On the other hand, most people don’t have the resources to access and affect these algorithms, and they might be placed in an even more marginal position. Instead of an openly unequal society, we might get one that is implicitly stacked against some people. Eranti argued that we are left with the possibility of algorithmic resistance, where we use the algorithms against themselves. As an example, he cited the Google bombing that made George W. Bush the top result for “miserable failure”.
I think I’m a bit less optimistic than Eranti. I’m worried that we might end up in a situation where algorithmic resistance becomes impossible, because we are not aware of the algorithms that affect our lives. It’s hard to resist if you don’t know what you should be resisting. The effect of algorithmic inequality is not linear, and the worst of it will be experienced by the people already most disadvantaged in society.
It also seems that the odds are stacked against having open and accountable algorithms, since all the incentives favor making them closed and opaque. At least currently, they are optimized for profit, not public interest or accountability. But it’s not certain how things will work out in the long run. All I know is that I don’t want Facebook and Google to be the only ones with a voice in this discussion.
This post follows my earlier post on computers in trying to figure out how digital technology affects our lives. Check it out if you liked this one. Pew Research Center also ran a thorough survey of what experts think about algorithms – that’s worth a look too.