Market transitions in digital media can be absolutely fascinating both as a bystander and as a participant.
I’ve been on both the front foot and the back foot as part of media businesses trying to lead, fast-follow, steer away from or kill different technologies and services that come and go.
Transitioning seems to be part of being in this business, something you’re always doing, not the end game.
There are a few patterns I’ve observed, but I’m hoping some clever business model historian will do some research and really nail down how it works.
There are job training issues that people face. Remember all those Lotus Notes specialists? Organizational issues. How about when the tech teams finally let editors control the web site home page? Leadership issues. It wasn’t until about 2003 that media CEOs started talking openly about the day when their Internet businesses would overtake their traditional businesses. Technology strategies. Investing in template-driven web sites was once a major decision. Etc.
The mobile wave we’re in right now shares all the same issues for people who are in the business of running web sites. Re-educate your developers or hire new ones? Should mobile be a separate business; should it be added to the queue for the tech team; should the editors or product managers manage it? Is mobile-first a subset of digital-first or does it mean something more profound than that? Responsive, native, both? What’s the mobile pureplay for what you already do?
Media organizations have become so diverse over the last several years that they can easily get caught thinking they can do all of the above – a hedge-your-bets strategy of investing lightly in all aspects of the transition. While that strategy has drawbacks, it is definitely better than the hide-and-hope or cut-til-you-profit strategy.
The most interesting part of this story is the anomalies, those moments of clarity that signal to the rest of us what is happening.
For example, everyone who dismissed the Newton and then the PalmPilot as failures missed the point. These were anomalies in a world dominated by Microsoft and the PC. Apple and Palm understood that computing was going to be in people’s pockets, and they were right to execute on that idea.
What they failed to get right was the timing of the market transition, and timing is everything.
(Harry McCracken’s retrospective review of the Newton is a fun read.)
So, when is an anomaly worth noticing? And when is the right time to execute on the new model?
Google has been a great example of both in many ways. They cracked one of the key business models native to the Internet at just the right time…they weren’t first or even best, but they got it right when it mattered. Android is another example.
But social has been one challenge after another. They ignored the anomalies that were Twitter and Facebook (and Orkut!) and then executed poorly over and over again.
They don’t want to be behind the curve ever again and are deploying some market tests around the ubiquitous, wearable network – Google Glass.
But if they are leading on this vision of the new way, how do they know, and, importantly, how do other established businesses know that this is the moment the market shifts?
I’m not convinced we’ve achieved enough critical mass around the mobile transition to see the ubiquitous network as a serious place to operate.
The pureplay revenue models in the mobile space are incomplete. The levels of investment being made in mobile products and services are growing too fast. The biggest catalysts of commercial opportunity are not yet powerful enough to warrant total reinterpretations of legacy models.
The mobile market is at its high growth stage. That needs to play out before the next wave will get broader support.
The ubiquitous network is coming, but the fast train to success, aka ‘mobile’, has left the station and everyone’s on board.
Is Google Glass this era’s Newton? Too early? Dorky rather than geeky-cool? Feature-rich yet brilliant at nothing in particular? Too big?
They’ve done a brilliant job transitioning to the mobile era. And you have to give them props for trying to lead on the next big market shift.
Even if Google gets it wrong with Google Glass (see Wired’s commentary), they are becoming very good at being in transition. If that is the lesson they learned from their shortcomings in ‘social’ then they may have actually gained more by doing poorly than if they had succeeded.
The semantic web folks, including Sir Tim Berners-Lee, have been saying for years that the Internet could become significantly more compelling by cooking more intelligence into the way things link around the network.
The movement is getting some legs to it these days, but the solution doesn’t look quite like what the visionaries expected it to look like. It’s starting to look more human.
The more obvious journey toward a linked data world starts with releasing data publicly on the Internet.
Many startups have proven that opening data creates opportunity. And now the trend has turned into a movement within government in the US, the UK and many other countries.
Sir Tim Berners-Lee drove home this message at his 2009 TED talk where he got the audience to shout “Raw data now!”:
“Before you make a beautiful web site, first give us the unadulterated data. You have no idea the number of excuses people come up with to hang on to their data and not give it to you even though you’ve paid for it as a taxpayer.”
Openness makes you more relevant. It creates opportunity. It’s a way into people’s hearts and minds. It’s empowering. It’s not hard to do. And once it starts happening it becomes apparent that it mustn’t and often can’t stop happening.
Forward-thinking investors and politicians even understand that openness is fuel for new economies in the future.
“It’s a prototype of a service for people moving into a new area. It gathers information about your area, such as local services, environmental information and crime statistics.”
Opening data is making government matter more to people. That’s great, but it’s just the beginning.
After openness, the next step is to work on making data discoverable. The basic unit for creating discoverability for content on a network is the link.
Now, the hyperlink of today simply says, “there’s a thing called X which you can find over there at address Y.”
The linked data idea is basically to put more data in and around links to things in a specific structure that matches our language:
subject -> predicate -> object
This makes a lot of sense. Rather than leaving machines to derive meaning, explicit relationship data can eliminate vast amounts of noise around the information we care about.
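To make that concrete, here’s a minimal sketch using Python’s rdflib library. The DBpedia-style URIs are just illustrative; any vocabulary of identifiers would do:

```python
from rdflib import Graph, Namespace

# Illustrative namespaces modeled on DBpedia; not a canonical vocabulary.
dbr = Namespace("http://dbpedia.org/resource/")
dbo = Namespace("http://dbpedia.org/ontology/")

g = Graph()
# subject -> predicate -> object: "Abraham Lincoln was born in Kentucky"
g.add((dbr.Abraham_Lincoln, dbo.birthPlace, dbr.Kentucky))

print(g.serialize(format="turtle"))
```

The point isn’t the syntax; it’s that the relationship (“born in”) travels with the link instead of having to be inferred later.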
However, there are other ways to add meaning into the network, too. We can also create and derive meaning across a network of linked data with short messages, as we’ve seen happening organically via Twitter.
What do we often write when we post to Twitter?
@friend said or saw or did this interesting thing over here http://website.com/blah
The subject is a link to a person. The predicate is the verb connecting the person and the object. And the object is a link to a document on the Internet.
Twitter is already a massive linked data cloud.
It’s not organized and structured like HTML links or RDF, the semantic triple format. Rather, it is verbose connectivity: a human-readable statement pointing at things and loosely defining what the links mean.
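As a rough illustration of how loose that structure is, here’s a toy Python sketch of my own (Twitter provides nothing like this) that pulls a subject, predicate and object out of a short message:

```python
import re

# Toy extraction of a loose "triple" from a short message: the @mention is
# the subject, the URL is the object, and whatever sits between them is the
# predicate. Purely illustrative.
def tweet_to_triple(text):
    subject = re.search(r"@\w+", text)
    obj = re.search(r"https?://\S+", text)
    if not (subject and obj):
        return None  # many short messages carry no links at all
    predicate = text[subject.end():obj.start()].strip()
    return (subject.group(), predicate, obj.group())

print(tweet_to_triple(
    "@friend said this interesting thing over here http://website.com/blah"
))
# -> ('@friend', 'said this interesting thing over here', 'http://website.com/blah')
```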
So, now it starts to look like we have some opposing philosophies around linked data. And neither is a good enough answer to Tim Berners-Lee’s vision.
Short messages lack standard ways of explicitly declaring meaning within links. They are often transient ideas that have no links at all. They create a ton of noise. Subjectivity rules. Short messages can’t identify or map to collections of specific data points within a data set. The variety of ways links are expressed is vast and unmanageable.
The semantic web vision seems like a faraway place if it’s dependent on whether an individual happens to create a semantic link.
But a structural overhaul isn’t a much better answer. In many ways, RDF means we will have to rewrite the entire web to support the new standard. The standard is complicated. Trillions of links will have to obtain context that they don’t have today. Documents will compete for position within the linked data chain. We will forever be re-identifying meaning in content as language changes and evolves. Big software will be required to create and manage links.
But there’s another approach to the linked data problem being pioneered by companies like MetaWeb, which runs an open data service called Freebase, and Zemanta, which analyzes text and recommends related links.
The approach here sits comfortably in the middle and interoperates with the extremes. They focus on being completely clear about what a thing is and then helping to facilitate better links.
They know that Wikipedia, The New York Times and the Congressional Biography web sites, which are all very authoritative on politicians, each have a single URL representing everything they know about Abraham Lincoln.
So, Freebase maintains a database (in addition to the web site that users can see) that links the authoritative Abraham Lincoln pages on the Internet together.
This network of data resources on Abraham Lincoln becomes richer and more powerful than any single resource about him. There is some duplication among them, but each resource is also unique. We know facts about his life, books that have been written about him, how people were and still are connected to him, etc.
Of course, explicit relationships become more critical when a word with multiple meanings enters the ecosystem. For example, consider Apple, which is a computing company, a record company, a town, and a fruit.
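A crude way to picture what a service like Freebase maintains is a “same-as” registry keyed by unambiguous entities. The IDs and URLs below are purely illustrative, not Freebase’s actual schema:

```python
# Hypothetical same-as registry: one unambiguous entity ID maps to the
# authoritative pages that all describe the same thing. URLs are illustrative.
SAME_AS = {
    "abraham_lincoln": [
        "https://en.wikipedia.org/wiki/Abraham_Lincoln",
        "https://www.nytimes.com/topic/person/abraham-lincoln",  # illustrative
        "https://bioguide.congress.gov/",                        # illustrative
    ],
    # Disambiguation falls out of the structure: each sense of "Apple"
    # is a distinct entity with its own links.
    "apple_inc": ["https://en.wikipedia.org/wiki/Apple_Inc."],
    "apple_records": ["https://en.wikipedia.org/wiki/Apple_Records"],
    "apple_fruit": ["https://en.wikipedia.org/wiki/Apple"],
}

def pages_for(entity_id):
    """All authoritative pages known for one unambiguous entity."""
    return SAME_AS.get(entity_id, [])

print(pages_for("abraham_lincoln"))
```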
Once the links in a network are known, then the real magic starts to happen when you mix in the social capabilities of the network.
Because of the relationships inherent in the links, new apps can be built that tell more interesting and relevant stories, because they can aggregate data that is connected.
You can imagine a whole world of forensic historians begging for more linked data. Researchers spend years mapping together events, geographic locations, relationships between people and other facts to understand the past. For example, a company called Six to Start has been working on using Google Maps for interactive historical fiction:
“The Six to Start team decided to literally “map” Cumming’s story, using the small annotation boxes for snippets of text and then illustrating movement of the main character with a blue line. As users click through bits of the story, the blue line traces the protagonist’s trajectory, and the result is a story that is at once text-based but includes a temporal dimension—we watch in real time as movement takes place—as well as an information dimension as the Google tool is, in a sense, hacked for storytelling.”
Similarly, we will eventually have a bridge of links into the physical world. This will happen through devices with sensors that broadcast and receive short messages. OpenStreetMap will get closer and closer to providing a data-driven representation of the physical world, built collectively by people with GPS devices carefully uploading details of their neighborhoods. You can then imagine game developers making the real world itself into a gaming platform based on linked data.
We’ve gotten a taste of this kind of thing with Foursquare. “Foursquare gives you and your friends new ways of exploring your city. Earn points and unlock badges for discovering new things.”
And there’s a fun photo sharing game called Noticin.gs. “Noticings are interesting things that you stumble across when out and about. You play Noticings by uploading your photos to Flickr, tagged with ‘noticings’ and geotagged with where they were taken.”
It’s conceivable that all these forces and some creative engineers will eventually shrink time and space into a massive network of connected things.
But long before some quasi-Matrix-like world exists, there will be many dotcom casualties among businesses that have benefitted from the friction in finding information. When that friction goes away, so will the business models.
Search, for example, is an amazingly powerful and efficient middleman linking documents off the back of the old school hyperlink, but its utility may fade when the source of a piece of information can hear and respond directly to social signals asking for it somewhere in the world.
It’s all pointing to a frictionless information network, sometimes organized, sometimes totally chaotic.
It wasn’t long ago I worried the semantic web had already failed, but I’ve begun to wonder if in fact Tim Berners-Lee’s larger vision is going to happen just in a slightly different way than most people thought it would.
Now that linked data is happening at the grassroots level in addition to the standards-driven approach, I’m starting to believe that a world of linked data is not only possible but closer than it might appear.
Again, his TED talk has some simple but important ideas that perhaps need to be revisited:
Paraphrasing: “Data is about our lives – a relationship with a friend, the name of a person in a photograph, the hotel I want to stay in on my holiday. Scientists study problems and collect vast amounts of data. They are understanding economies, disease and how the world works.
A lot of the knowledge of the human race is in databases sitting on computers. Linking documents has been fun, but linking data is going to be much bigger.”
I have only one prediction for 2008. I think we’re finally about to see the useful combination of the 4 W’s – Who, What, Where, and When.
Marc Davis has done some interesting research in this area at Yahoo!, and Bradley Horowitz articulated how he sees the future of this space unfolding in a BBC article in June ’07:
“We do a great job as a culture of “when”. Using GMT I can say this particular moment in time and we have a great consensus about what that means…We also do a very good job of “where” – with GPS we have latitude and longitude and can specify a precise location on the planet…The remaining two Ws – we are not doing a great job of.”
I’d argue that the social networks are now really homing in on “who”, and despite having few open standards for “what” data (other than UPC), there is no shortage of “what” data amongst all the “what” providers. Every product vendor has its own version of a product identifier or serial number (such as Amazon’s ASIN, for example).
We’ve seen a lot of online services solving problems in these areas either by isolating specific pieces of data or combining the data in specific ways. But nobody has yet integrated all four in a meaningful way.
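To be clear about what “integrating all four” might even mean, here’s a hypothetical record type of my own devising; none of the field names or formats come from Yahoo! or anyone else:

```python
from dataclasses import dataclass
from datetime import datetime

# A hypothetical record combining the four Ws: "who" might be a social
# profile, "what" a product identifier such as a UPC or Amazon ASIN,
# "where" a GPS fix, "when" a UTC timestamp.
@dataclass
class FourW:
    who: str                    # e.g. a profile URL
    what: str                   # e.g. "UPC:012345678905"
    where: tuple                # (latitude, longitude)
    when: datetime

event = FourW(
    who="https://example.com/people/jane",   # hypothetical profile
    what="UPC:012345678905",
    where=(37.7749, -122.4194),              # San Francisco
    when=datetime(2008, 1, 15, 18, 30),
)
print(event)
```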
Jeff Jarvis’ insightful post on social airlines starts to show how these concepts might form in all kinds of markets. When you’re traveling it makes a lot of sense to tap into “who” data to create compelling experiences that will benefit everyone:
“At the simplest level, we could connect while in the air to set up shared cab rides once we land, saving passengers a fortune.

We can ask our fellow passengers who live in or frequently visit a destination for their recommendations for restaurants, things to do, ways to get around.

We can play games.

What if you chose to fly on one airline vs. another because you knew and liked the people better? What if the airline’s brand became its passengers?

Imagine if on this onboard social network, you could find people you want to meet – people in the same business going to the same conference, people of similar interests, future husbands and wives – and you can rendezvous in the lounge.

The airline can set up an auction marketplace for at least some of the seats: What’s it worth for you to fly to Berlin next Wednesday?”
Carrying the theme to retail markets, you can imagine that you will walk into H&M and discover that one of your first-degree contacts recently bought the same shirt you were about to purchase. You buy a different one instead. Or people who usually buy the same hair conditioner as you at the Walgreen’s you’re in now are switching to a different hair conditioner this month. Though this wouldn’t help someone like me who has no hair to condition.
Similarly, you can imagine that marketing messages could actually become useful in addition to being relevant. If Costco would tell me which of the products I often buy are on sale as I’m shopping, or which of the products I’m likely to need given what they know about how much I buy of what and when, then my loyalty there is going to shoot through the roof. They may even be able to identify that I’m likely buying milk elsewhere and give me a one-time coupon for Costco milk.
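That milk inference is simple enough to sketch. Here’s a toy version in Python, with made-up data and a made-up threshold, just to show the shape of the logic:

```python
from datetime import date

# Toy replenishment inference: if a shopper buys an item at a regular
# interval but the item hasn't shown up lately, they are probably buying
# it elsewhere. All data and the slack factor are invented.
def likely_buying_elsewhere(purchase_dates, today, slack=1.5):
    """True if the gap since the last purchase is well past the usual interval."""
    if len(purchase_dates) < 2:
        return False
    gaps = [(b - a).days for a, b in zip(purchase_dates, purchase_dates[1:])]
    typical = sum(gaps) / len(gaps)
    return (today - purchase_dates[-1]).days > typical * slack

milk = [date(2008, 1, 1), date(2008, 1, 8), date(2008, 1, 15)]
print(likely_buying_elsewhere(milk, date(2008, 2, 20)))  # True: time for a coupon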
Bradley sees it playing out on the phone, too:
“On my phone I see prices for a can of soup in my neighbourhood. It resolves not only that particular can of soup but knows who I am, where I am and where I live and helps me make an intelligent decision about whether or not it is a fair price.
It has to be transparent and it has to be easy because I am not going to invest a lot of effort or time to save 13 cents.”
It may be unrealistic to expect that this trend will explode in 2008, but I expect it to at least appear in a number of places and inspire future implementations as a result. What I’m sure we will see in 2008 is dramatic growth in the behind-the-scenes work that will make this happen, such as the development and customization of CRM-like systems.
Lots of companies have danced around these ideas for years, but I think the ideas and the technologies are finally ready to create something real, something very powerful.
“One thing that has become clear is that the success of social production collectives hinges on the intensive contributions of a very small subset of their members. Not only that, but it’s possible to identify who these people are and to measure their contributions with considerable precision. That means, as well, that these people are valuable in old-fashioned monetary terms – that they could charge for what they do. They have, in other words, a price, even if they’re not currently charging it. The question, then, is simple: Will the “amateurs” go pro? If they have a price, will they take it?”
Nick’s challenge is accurate, particularly when a peer production model doesn’t have a strong enough purpose to hold it together through adversity.
“I’m absolutely convinced that the top 20 people on DIGG, Delicious, Flickr, MySpace, and Reddit are worth $1,000 a month and if we’re the first folks to pay them that is fine with me–we will take the risk and the arrows from the folks who think we’re corrupting the community process”
I guess it’s the assumption that people are motivated first and foremost by money that bothers me. No doubt I’ll do something for money if the benefit of doing it for love or because it’s right is less than the benefit of having the cash. I want to give my family all the advantages that I can.
But I think Nick misunderstands a value proposition inherent in the concept of communities.
There are a lot of people who put a lot of energy into building their church community when that time could be spent elsewhere making money. And I doubt most churches would suffer any significant membership losses if a nearby competing church offered to pay people to switch churches. They participate in the church community because the investment returns have personal and social value that have nothing to do with their material wealth.
People who moderate online communities, like some of the more active Yahoo! groups, invest themselves out of interest in things like social influence, or sometimes for other selfish gains. The really successful groups have an undeniable and crystal clear purpose.
For example, the San Francisco Golden Gate Mother’s Group is a highly engaged community of women with new babies who help each other with the day-to-day challenges of urban motherhood. The community holds itself together through the shared desire to raise children well. That mission couldn’t be any simpler or more important to a first-time mother. Even the least-engaged member understands that answering someone’s question now results in better answers for you when you need help in the future.
Paying people to participate wouldn’t make them better at what they do. I’d argue it might actually make them worse. If Netscape was a brand with a purpose that mattered to me, then Jason wouldn’t have to pay me or even the best bookmarkers to participate.
Nick also challenges the notion that peer production can operate without management overhead. I think he miscalculates the role of management in peer production. Yes, it may be required, but management is a service to the group, a service to the mission. Management in peer production could probably be outsourced.
I do think Benkler may actually underestimate the importance of a clear and cohesive mission for the group. Without a core purpose that the members of the group find important, a competing commercial market could very well break down the community.
But that then raises the question of how valuable the community was in the first place.