In the early 2000s many companies buried their heads in the sand, hoping the internet would go away. The retreat strategy in those days was to reinvest in analog business models. There was still enough life in those models to do interesting things, and companies got a chance to think through their digital transition once everything calmed down.
But the game has changed again. It is moving much faster than before, and failure to complete the previous transition from analog to digital means there are no safety nets to fall back on.
Getting through this storm is going to require something really different.
What’s exciting for those willing to think in terms of networks instead of destinations is that the arrows on Mary Meeker’s slide on Google and Facebook’s dominance in advertising are all pointing up and to the right.
There’s a massive advertising market that keeps getting bigger and, as the newcomers keep demonstrating, new ways to capture pieces of that market are appearing every day.
What’s stopping media orgs from joining up and doing something about this?
I’ve tried to explain what’s going on and describe some of the things the media industry can do in an article for The Guardian. I think the answer is to form a network.
Algorithms are often characterised as dark and scary robotic machines with no moral code. But when you open them up a little and look at their component parts, it becomes apparent how human-powered they are.
Last month, Google open sourced a tool that helps make sense of language. Giving computers the power to understand what people are saying is key to designing them to help us do things. In this case, Google’s technology exposes what role each word serves in a sentence.
The technical jargon for it is natural language processing (NLP). There is mathematics cooked into the tools, but knowing what sine and cosine mean is not a prerequisite to understanding how they work.
When you give Google’s tool or any NLP system some text, it uses what it has been told to look for to decipher what it is looking at. If its creators taught it parts of speech, then it will find nouns, verbs and so on. If they taught it to look for people’s names, then it will identify word pairs and match them against lists supplied by its trainers.
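To make that concrete, here is a minimal sketch using the open source spaCy library rather than Google’s tool. The sentence is just an example, and the small English model is an assumption that would need to be installed separately.

```python
# A rough sketch of part-of-speech tagging and name spotting with spaCy
# (an open source NLP library, not Google's tool).
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Google open sourced a tool that helps make sense of language.")

# What role each word serves in the sentence
for token in doc:
    print(token.text, token.pos_, token.dep_)

# Names of people, places and organisations the model was trained to spot
for ent in doc.ents:
    print(ent.text, ent.label_)
```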
The computer then processes what it found and returns results. As the people asking for those results, users have to decide things like whether to exclude words whose relevance scores fall below a certain threshold, or to match only words in a whitelist they have provided.
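That filtering step is usually plain code sitting on top of whatever the NLP tool returns. A hypothetical sketch, with made-up words, scores, threshold and whitelist:

```python
# Hypothetical output from an NLP tool: (word, relevance score) pairs.
results = [("advertising", 0.92), ("Facebook", 0.81), ("puppies", 0.12)]

# Decisions made by the person asking, not by the algorithm.
threshold = 0.5
whitelist = {"advertising", "Facebook", "Google"}

kept = [word for word, score in results
        if score >= threshold and word in whitelist]
print(kept)  # ['advertising', 'Facebook']
```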
Different tools provide different types of results. Maybe the tool is designed to look for negative or positive sentiment in the text. Maybe it’s designed to identify mentions of city streets. Maybe it’s designed to find all the articles in a large data set that are talking about the same subject.
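For the sentiment case, here is a rough sketch using NLTK’s VADER analyser. The example sentences are invented, and the lexicon has to be downloaded once before the analyser will run.

```python
# A sketch of positive/negative sentiment scoring with NLTK's VADER analyser.
# Assumes: pip install nltk, then nltk.download("vader_lexicon") once.
from nltk.sentiment.vader import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
print(sia.polarity_scores("Readers love the new feature."))
print(sia.polarity_scores("The redesign is confusing and slow."))
# Each result includes a compound score from -1 (very negative) to +1 (very positive).
```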
Many startups today are using NLP to power artificial intelligence systems that assist with everyday tasks, such as x.ai’s calendar-scheduling assistant. Over time we are going to see more startups using these tools.
But there are many practical things that media companies can do with NLP today, too. They might want to customise emails or cluster lots of content. I can imagine publishers creating ad targeting segments on the fly using simple algorithms powered by NLP systems.
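To picture the clustering idea, here is a small sketch using scikit-learn; the articles and the choice of two clusters are made up for illustration.

```python
# A sketch of grouping articles by topic: turn each article into a vector of
# word weights (TF-IDF), then group similar vectors together with k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

articles = [
    "Facebook and Google keep taking a bigger share of digital advertising.",
    "Publishers are testing new advertising formats and ad networks.",
    "The new puppy training class starts at the park next week.",
    "How to teach a young dog good behaviour at home.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(articles)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, article in zip(labels, articles):
    print(label, article)
```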
It’s worth noting that Google is actually late to the game; many solutions already exist. IBM’s AlchemyAPI will look at text users supply and return data about the relationships between people, places and things. There is an open source alternative called OpenNLP from the Apache Foundation. Apache is also home to Lucene, a popular search library that underpins products such as Elasticsearch and can solve many of the same problems that NLP systems solve.
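For a sense of the search route, this is roughly what querying an index of articles could look like with the official Elasticsearch Python client. The local cluster, the "articles" index and its "body" and "headline" fields are all assumptions for illustration.

```python
# A sketch of a full-text search against a hypothetical "articles" index.
# Assumes a local Elasticsearch cluster and a recent version of the
# elasticsearch Python client.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
response = es.search(index="articles", query={"match": {"body": "advertising"}})

for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("headline"))
```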
At their most basic level, these technologies essentially automate decisions at scale. They take in a lot of information; they work out which answers fit certain kinds of questions based on what people have taught them; and then they spit out an answer, or lots of answers.
But every step of the way, people have told the computers what to do. It is people who provide the training data. It is people who instruct the algorithms to make the decisions they want made on their behalf. It is people who apply the results the algorithm returns.
These tools are particularly powerful when they are given the authority to make lots of decisions quickly that could never be done by hand. And that is also where problems emerge.
Lots of small errors in judgment can turn into an offensive or even threatening force. Sometimes adverse effects are introduced accidentally. Sometimes they are not. And sometimes unwanted behaviour is really just an unintended consequence of automating decisions at scale.
As with any new technology, we don’t yet have a clear model for understanding and challenging what people are doing with algorithms. The Tow Center’s recent book on algorithms dives into the issues and poses important questions about accountability.
How has the algorithm been tuned to benefit certain stakeholders? What biases are introduced through the data used to train them? Is it fair and just, or discriminatory?
It’s a great piece of research that begins to expose the implications of this increasingly influential force in the world that effectively amplifies commercial and government power. The key, according to the Tow report, is “to recognise that [algorithms] operate with biases like the rest of us”.
Algorithms aren’t monsters. Think of them more like puppies. They want to make you happy and try to respond to your instructions. The people who train them all have their own ideas of what good behaviour means. And what they learn when they are young has a profound effect on how they deal with people when they’re grown up.
As policy folks get their heads around what’s going on they are going to need some language to deal with it. Perhaps we already know how to talk about accountability and liability when it comes to algorithms.
California’s dog laws state that the owner of a dog is liable if it hurts someone. They reinforce that by saying the owner is liable even if they didn’t know their dog could or would hurt someone. Finally, being responsible for the dog’s behaviour also means the owner must do what a court decides, which may include “removal of the animal or its destruction if necessary.”
A policy with similar foundations for algorithms would encourage developers to think carefully about what they are training their machines to do for people and to people.
Maybe it’s overkill. Maybe it’s not enough.
But let there be no confusion about who is creating these machines, teaching them what to do, and putting them to work for us. It is us. And it is our nature that is being reflected, multiplied and amplified through them. Tools like natural language processing are merely paint brushes used by the artists who dream them up.