Eternal transitions

Market transitions in digital media can be absolutely fascinating both as a bystander and as a participant.

I’ve been on both the front foot and the back foot as part of media businesses trying to lead, fast-follow, steer away from or kill different technologies and services that come and go.

Transitioning seems to be part of being in this business, something you’re always doing, not the end game.

There are a few patterns I’ve observed, but I’m hoping some clever business model historian will do some research and really nail down how it works.

There are job training issues that people face. Remember all those Lotus Notes specialists? Organizational issues. How about when the tech teams finally let editors control the web site home page? Leadership issues. It wasn’t until about 2003 that media CEOs started talking openly about the day when their Internet businesses would overtake their traditional businesses. Technology strategies. Investing in template-driven web sites was once a major decision. Etc.

The mobile wave we’re in right now shares all the same issues for people who are in the business of running web sites. Re-educate your developers or hire new ones? Should mobile be a separate business; should it be added to the queue for the tech team; should the editors or product managers manage it? Is mobile-first a subset of digital-first or does it mean something more profound than that? Responsive, native, both? What’s the mobile pureplay for what you already do?

Media organizations have become so diverse over the last several years that they can easily get caught thinking they can do all of the above – a hedge-your-bets strategy of investing lightly in every aspect of the transition. While that strategy has drawbacks, it is definitely better than the hide-and-hope or cut-til-you-profit strategies.

The most interesting part of this story is about the anomalies, those moments of clarity that signal to the rest of us what is happening.

For example, everyone who dismissed the Newton and then the PalmPilot as failures missed the point. These were anomalies in a world dominated by Microsoft and the PC. Their makers understood that computing was going to be in people’s pockets, and they were right to execute on that idea.

What they failed to get right was the timing of the market transition, and timing is everything.

(Harry McCracken’s retrospective review of the Newton is a fun read.)

So, when is an anomaly worth noticing? And when is the right time to execute on the new model?

Google has been a great example of both in many ways. They cracked one of the key business models native to the Internet at just the right time…they weren’t first or even best, but they got it right when it mattered. Android is another example.

But social has been one challenge after another. They ignored the anomalies that were Twitter and Facebook (and Orkut!) and then executed poorly over and over again.

They don’t want to be behind the curve ever again and are deploying some market tests around the ubiquitous, wearable network – Google Glass.

But if they are leading on this vision of the new way, how do they know, and, importantly, how do other established businesses know that this is the moment the market shifts?

I’m not convinced we’ve achieved enough critical mass around the mobile transition to see the ubiquitous network as a serious place to operate.

The pureplay revenue models in the mobile space are incomplete. The levels of investment being made in mobile products and services are still growing fast. The biggest catalysts of commercial opportunity are not yet powerful enough to warrant total reinterpretations of legacy models.

The mobile market is at its high growth stage. That needs to play out before the next wave will get broader support.

The ubiquitous network is coming, but the fast train to success, aka ‘mobile’, has arrived and left the station, and everyone’s on board.

Is Google Glass this era’s Newton? Too early? Dorky rather than geeky-cool? Feature-rich yet brilliant at nothing in particular? Too big?

They’ve done a brilliant job transitioning to the mobile era. And you have to give them props for trying to lead on the next big market shift.

Even if Google gets it wrong with Google Glass (see Wired’s commentary), they are becoming very good at being in transition. If that is the lesson they learned from their shortcomings in ‘social’, then they may have actually gained more by doing poorly than if they had succeeded.

Mobilising the web of feeds

I wrote this piece for the Guardian’s Media Network on the role that RSS could play now that the social platforms are becoming more difficult to work with. GeoRSS, in particular, has a lot of potential given the mobile device explosion. I’m not necessarily suggesting that RSS is the answer, but it is something a lot of people already understand, and it could help unify the discussion around sharing geotagged information feeds.


Powered by Guardian.co.uk. This article titled “Mobilising the web of feeds” was written by Matt McAlister for theguardian.com on Monday 10th September 2012 16.43 UTC.

While the news that Twitter will no longer support RSS was not really surprising, it was a bit annoying. It served as yet another reminder that the Twitter-as-open-message-utility idea that many early adopters of the service loved was in fact going away.

There are already several projects intending to disrupt Twitter, mostly focused on the idea of a distributed, federated messaging standard and/or platform. But we already have such a service: an open standard adopted by millions of sources; a federated network of all kinds of interesting, useful and entertaining data feeds published in real-time. It’s called RSS.

There was a time when nearly every website was RSS-enabled, and a cacophony of Silicon Valley startups fought to own pieces of this new landscape, hoping to find dotcom gold. But RSS didn’t lead to gold, and most people stopped doing anything with it.

Nobody found an effective advertising or service model (except, ironically, Dick Costolo, CEO of Twitter, who sold Feedburner to Google). The end-user market for RSS reading never took off. Media organisations didn’t fully buy into it, and the standard took a backseat to more robust technologies.

Twitter is still very open in many ways and encourages technology partners to use the Twitter API. That model gives the company much more control over who is able to use tweets outside of the Twitter owned apps, and it’s a more obvious commercial strategy that many have been asking Twitter to work on for a long time now.

But I think we’ve all made a mistake in the media world by turning our backs on RSS. It’s understandable why it happened. But hopefully those who rejected RSS in the past will see the signals demonstrating that an open feed network is a sensible thing to embrace today.

Let’s zoom out for context first. Looking at the macro trends in the internet’s evolution, we can see one or two clear winners as more information and more people appeared on the network in waves over the last 15 years.

Following the initial explosion of new domains, Yahoo! solved the need to surface only the websites that mattered through browsing. The Yahoo! directory became saturated, so Google surfaced the pages that mattered within those websites through search. Google became saturated, so Facebook and Twitter surfaced the things that mattered on the pages within those websites through connections between people.

Now that the social filter is saturated, what will be used next to surface things that matter out of all the noise? The answer is location. It is well understood technically. The software-hardware-service stack is done. The user experience is great. We’re already there, right?

No – most media organisations haven’t caught up yet. There’s a ton of information not yet optimised for this new view of the world and much more yet to be created. This is just the beginning.

Do we want a single platform to be created that catalyses the location filter of the internet and mediates who sees what and when? Or do we want to secure forever a neutral environment where all can participate openly and equally?

If the first option happens, as historically has been the case, then I hope that position is taken by a force that exists because of, and remains reliant on, the second option.

What can a media company do to help make that happen? The answer is to mobilise your feeds. As a publisher, being part of the wider network used to mean having a website on a domain that Yahoo! could categorise. Then it meant having webpages on that website optimised for search terms people were using to find things via Google. And more recently it has meant providing sharing hooks that can spread things from those pages on that site from person to person.

Being part of the wider network today suddenly means all of those things above, and, additionally, being location-enabled for location-aware services.

It doesn’t just mean offering a location-specific version of your brand, though that is certainly an important thing to do as well. The major dotcoms use this strategy increasingly across their portfolios, and I’m surprised more publishers don’t do this.

More importantly, though, and this is where it matters in the long run, it means offering location-enabled feeds that everyone can use in order to be relevant in all mobile clients, applications and utilities.
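
To make that concrete, here is a minimal sketch, in Python using only the standard library, of what a location-enabled feed item amounts to: an ordinary RSS item plus one georss:point element. The feed, titles, links and coordinates below are invented for illustration.

    # A minimal sketch of a location-enabled feed: a plain RSS item with one
    # extra georss:point element. All titles, links and coordinates are made up.
    import xml.etree.ElementTree as ET

    ET.register_namespace("georss", "http://www.georss.org/georss")

    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Example local news feed"

    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = "Roadworks on the high street"
    ET.SubElement(item, "link").text = "http://example.com/roadworks"
    # The single line that mobilises the item: "latitude longitude".
    ET.SubElement(item, "{http://www.georss.org/georss}point").text = "51.5074 -0.1278"

    print(ET.tostring(rss, encoding="unicode"))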

Entrepreneurs are all over this space already. Pure-play location-based apps can be interesting, but many feel very shallow without useful information. The iTunes store is full of travel apps, reference apps, news, sports, utilities and so on that are location-aware, but they are missing some of the depth that you can get on blogs and larger publishers’ sites. They need your feeds.

Some folks have been experimenting in some very interesting ways that demonstrate what is possible with location-enabled feeds. Several services, such as Flipboard, Pulse and now Prismatic, have really nice and very popular mobile reading apps that all pull RSS feeds, and they are well placed to turn those into location-based news services.

Perhaps a more instructive example of the potential is hypARlocal, the augmented reality app from Talk About Local. It gets location-aware content out of geoRSS feeds published by hyperlocal bloggers around the UK and by the citizen journalism platform n0tice.com.

But it’s not just the entrepreneurs that want your location-enabled feeds. Google Now for Android notifies you of local weather and sports scores along with bus times and other local data, and Google Glass will be dependent on quality location-specific data as well.

Of course, the innovations come with new revenue models that could get big for media organisations. They include direct, advertising, and syndication models, to name a few, but have a look at some of the startups in the rather dense ‘location’ category on Crunchbase to find commercial innovations too.

Again, this isn’t a new space. Not only has the location stack been well formed, but there are also a number of bloggers who have been evangelising location feeds for years. They already use WordPress, which automatically pumps out RSS. And many of them also geotag their posts today using one of the many useful WordPress mapping plugins.
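
And on the consuming side it takes very little code to put those feeds to work. Here is a hedged sketch, again standard-library Python with a hypothetical feed URL, of a client that keeps only the items near a reader:

    # A sketch of the client side: filter a geoRSS feed down to items near the
    # reader. The feed URL is hypothetical; georss:point is the common way such
    # feeds carry coordinates.
    import math
    import urllib.request
    import xml.etree.ElementTree as ET

    GEORSS_POINT = "{http://www.georss.org/georss}point"

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points, in kilometres.
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
        return 6371 * 2 * math.asin(math.sqrt(a))

    def nearby_items(feed_url, lat, lon, radius_km=10):
        tree = ET.parse(urllib.request.urlopen(feed_url))
        for item in tree.iter("item"):
            point = item.find(GEORSS_POINT)
            if point is None or not point.text:
                continue  # not geotagged, so invisible to location-aware clients
            item_lat, item_lon = map(float, point.text.split())
            if haversine_km(lat, lon, item_lat, item_lon) <= radius_km:
                yield item.findtext("title"), item.findtext("link")

    # e.g. everything within 10km of central London, from a hypothetical feed:
    # for title, link in nearby_items("http://example.com/feed", 51.5074, -0.1278):
    #     print(title, link)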

It would take very little to reinvigorate a movement around open location-based feeds. I wouldn’t be surprised to see Google prioritising geotagged posts in search results, for example. That would probably make Google’s search on mobile devices much more compelling, anyhow.

Many publishers and app developers, large and small, have complained that the social platforms are breaking their promises and closing down access, becoming enemies of the open internet and being difficult to work with. The federated messaging network is being killed off, they say. Maybe it’s just now being born.

Media organisations need to look again at RSS, open APIs, geotagging, open licensing, and better ways of collaborating. You may have abandoned it in the past, but RSS would have you back in a heartbeat. And if RSS is insufficient then any location-aware API standard could be the meeting place where we rebuild the open internet together.

It won’t solve all your problems, but it could certainly solve a few, including new revenue streams. And it’s conceivable that critical mass around open location-based feeds would mean that the internet becomes a stronger force for us all, protected from nascent platforms whose future selves may not share the vision that got them off the ground in the first place.


guardian.co.uk © Guardian News & Media Limited 2010

Published via the Guardian News Feed plugin for WordPress.

Positioning real-time web platforms

Like many people, I’ve been thinking more and more about the live nature of the web recently.

The startup world has gone mad for it. And though I think Microsoft’s Chief Software Architect Ray Ozzie played down the depth of Microsoft’s commitment to it in his recent interview with Steve Gillmor, it’s apparent that it’s at the very least a top-of-mind subject for the people at the highest levels of the biggest companies in the Internet world. As it should be.

The live web started to feel more tangible in shape and clearer for me to see because of Google Wave. Two of the Guardian developers here, Lisa van Gelder and Martyn Inglis, recently shared the results of a DevLab they did on Wave.

My brain has been spinning on the idea ever since.

(A DevLab is an internal research project where an individual or team pull out of the development cycle for a week and study an idea or a technology. There’s a grant associated with the study. They then share their findings with the entire team, and they share the grant with the individual who writes the most insightful peer review of the research.)

Many before me have noted the ambition and tremendous scale of the Wave effort. But I also find it fascinating how Google is approaching the development of the platform as a service.

The tendency when designing a platform is to create the rules and restrictions that prevent worst-case scenario behavior from ruining everything for you and your key partners. You release capability gradually as you understand its impact.

You then have to manage the constant demand from customers to release more and more capability.

Google turned this upside down and enabled a wide breadth of capability with no apologies for the unknowns. Developers won’t complain about a lack of functionality. Instead, the breadth will probably have the opposite effect, inviting developers to tell Google how to close down the risks so their work won’t get damaged by the lawlessness of the ecosystem.

That’s a very exciting proposition, as if new land has been found where gold might be discovered.

But on the other hand, is it also a bit lazy or even irresponsible to put the task of creating the rules of the world that your service defines on the customers of your service? And do those partners then get a false sense of security because of that, as if they could influence the evolution of the platform in their favor when really it’s all about Google?

Google takes no responsibility for the bad things that may happen in the world they’ve created, yet they have retained sole authority over decisions about the service.

They’ve mitigated much of their risk by releasing the code as “open source” and allowing Wave to run in your own hosted environment as you choose. It’s a good PR move, but it may not have the effect they want it to have if they aren’t also sharing the way contributions to the code are managed and sharing in the governance.

They list the principles for the project on the site:

  • Wave is an open network: anyone should be able to become a wave provider and interoperate with the public network
  • Wave is a distributed network model: traffic is routed peer-to-peer, not through a central server
  • Make rapid progress, together: a shared commitment to contribute to the evolution and timely deployment of protocol improvements
  • Community contributions are fundamental: everyone is invited to participate in the public development process
  • Decisions are made in public: all protocol specification discussions are recorded in a public archive

Those are definitions, not principles. Interestingly, there’s no commitment to opening decision-making itself, only to sharing the results of decisions. Contrast that with Apache Foundation projects, which have different layers of engagement and specific responsibilities for the different roles in a project. For example:

“a Project Management Committee member is a developer or a committer that was elected due to merit for the evolution of the project and demonstration of commitment. They have write access to the code repository, an apache.org mail address, the right to vote for the community-related decisions and the right to propose an active user for committership.”

That model may be too open for Google, but it would help a lot to have a team of self-interested supporters when things go wrong, particularly as there are so many security risks with Wave. If they are still the sole sponsor of the platform when the first damage appears then they will have to take responsibility for the problem. They can only use the “we don’t control the apps, only the platform” excuse for so long before it starts to look like a cop out.

Maybe they should’ve chosen a market they thought would run with it and offer it in preview exclusively for key partners in that market until Google understood how to position it. With a team of launch partners they would have seemed less autocratic and more trustworthy.

Shared ownership of the launch might also have resulted in a better first use-case app than the Wave client they invented for the platform. The Google Wave client may take a long time to catch on, if ever.

As Ray Ozzie noted,

“When you create something that people don’t know what it is, when they can’t describe it exactly, and you have to teach them, it’s hard…all of the systems, as long as I’ve been working in this area, the picture that I’ve always had in my mind is kind of three overlapping circles of technology, social dynamics, and organizational dynamics. And any two of those is relatively straightforward and understandable.”

I might even argue that perhaps Google made a very bad decision to offer a client at all. This was likely the result of failing to have a home for OpenSocial when it launched. Plus, it’s never a good idea to launch a platform without a principal customer app that can drive the initial requirements.

In my opinion, open conference-style IM and email or live collaborative editing within docs is just not groundbreaking enough as an end-user offering.

But the live web is only fractionally about the client app.

The live web that matters, in my mind, harnesses real-time message interplay via multiple open networks between people and machines.

There’s not one app that runs on top of it. I can imagine there could be millions of client apps.
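
As a toy illustration of that fabric idea, here is an in-process Python sketch, not Wave or any real protocol, in which one stream of messages fans out to many independent clients, human-facing and machine-facing alike:

    # A toy, in-process sketch of a message fabric. Not Wave, not RabbitMQ; just
    # the shape of the idea: many clients subscribing to one live stream.
    from collections import defaultdict

    class Fabric:
        def __init__(self):
            self.subscribers = defaultdict(list)  # topic -> list of callbacks

        def subscribe(self, topic, callback):
            self.subscribers[topic].append(callback)

        def publish(self, topic, message):
            # Every subscriber gets the message; the fabric doesn't care whether
            # the consumer is a reader app, an archiver, or another machine.
            for callback in self.subscribers[topic]:
                callback(message)

    fabric = Fabric()
    fabric.subscribe("news", lambda m: print("reader app:", m))
    fabric.subscribe("news", lambda m: print("geo indexer:", m))
    fabric.publish("news", "Roadworks on the high street")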

The Wave idea, whether its most potent incarnation is Wave itself or some combination of a Twitter/RabbitMQ mesh or an open XML P2P server or some other new approach to sharing data, is going to blow open the Internet for people once again.

I remember trying very hard several years ago to convince people that RSS was going to change the Internet and how publishing works. But the killer RSS app never happened.

It’s obvious why it feels like RSS didn’t take off. RSS is fabric. Most people won’t get that, nor should they have to.

In hindsight, I think I overvalued RSS but undervalued the importance of the idea…lubricating the path for data to get wherever it is needed.

I suspect Wave will suffer from many of the same issues.

Wave is fabric, too.

When people and things create data on a network that machines can do stuff with, the world gets really interesting. It gets particularly interesting when those machines unlock connections between people.

And while the race is on to come up with the next Twitter-like service, I just hope that the frantic Silicon Valley Internet platform architects don’t forget that it’s about people in the end.

One of the things many technology innovators forget to do is to talk to people. More developers should ask people about their day and watch them work. You may be able to break through by solving real problems that real people have.

That’s a much better place to start than by inventing strategic points of leverage in order to challenge your real and perceived competitors.

Building markets out of data

I’m intrigued by the various ways people view ‘value’. There seem to be 2 camps: 1) people who view the world in terms of competition for finite resources and 2) people who see ways to create new forms of value and to grow the entire pie.

Umair Haque talks about choices companies make that push them into one of those 2 camps. He often argues that the market needs more builders than winners. He clarifies his position in his post The Economics of Evil:

“When you’re evil, your ability to co-create value implodes: because you make moves which are focused on shifting costs and extracting value, rather than creating it. …when you’re evil, the only game you want to – or can play – is domination.”

I really like the idea that the future of the media business is in the way we build value for all constituencies rather than the way we extract value from various parts of a system. It’s not about how you secure market share, control distribution, mitigate risk or reduce costs. It’s about how you enable the creation of value for all.

He goes on to explain how media companies often make the mistake of focusing on data ownership:

“Data isn’t the value. In fact, data’s a commodity…What is valuable are the things that create data: markets, networks, and communities.

Google isn’t revolutionizing media because it “owns the data”. Rather, it’s because Google uses markets and networks to massively amplify the flow of data relative to competitors.”

I would add that it’s not just the creation of valuable data that matters but also the way people interface with existing data. Scott Karp’s excellent post on the guidelines for transforming media companies shares a similar view:

“The most successful media companies will be those that learn to how build networks and harness network effects. This requires a mindset that completely contradicts traditional media business practices. Remember, Google doesn’t own the web. It doesn’t control the web. Google harnesses the power of the web by analyzing how websites link to each other.”

Ad networks vs ad exchanges

I spent yesterday at the Right Media Open event in Half Moon Bay at the Ritz Carlton Hotel.


Right Media assembled an impressive list of executives and innovators including John Battelle of Federated Media, David Rosenblatt of DoubleClick, Scott Howe of Microsoft, entrepreneur Steve Jenkins, Jonathan Shapiro of MediaWhiz, Ellen Siminoff of Efficient Frontier, and Yahoo!’s own Bill Wise and the Right Media team including Pat McCarthy, to name a few.

It was an intimate gathering of maybe 120 people.

Much of the dialog at the event revolved around ad exchange market dynamics and how ad networks differ from exchanges. DoubleClick’s Rosenblatt described the two as analogous to stock exchanges and hedge funds…there are a few large exchanges where everyone can participate and then there are many specialized networks that serve a particular market or customer segment. That seemed to resonate with people.

The day opened with a very candid dialog between Jerry Yang and IAB President Randall Rothenberg where Jerry talked about his approach to refocusing the company and his experiences at Yahoo! to date.

Battelle’s panel later in the afternoon was very engaging, as well. The respective leaders of the ad technology divisions at Yahoo! (Mike Walrath of Right Media), Microsoft (Scott Howe of Drivepm and Atlas) and Google (David Rosenblatt of DoubleClick) shared the stage and took questions from John who, as usual, didn’t hold back.

The panelists seemed to have similar approaches to the exchange market, though it seems clear that Right Media has a more mature approach, ironically due in large part to the company’s youth. Microsoft was touting its technology “arsenal”. And DoubleClick wasn’t afraid to admit that they were still testing the waters.

I also learned about an interesting market of middlemen that I didn’t know existed. For example, I spoke with a guy from a company called exeLate that serves as a user behavior data provider between a publisher and an exchange.

There were also ad services providers like Text Link Ads and publishers like Jim Mansfield’s PhoneZoo all discussing the tricky aspects of managing the mixture of inventory, rates and yield, relationships with ad networks, and the advantages of using exchanges.

I’ve been mostly out of touch with the ad technology world for too long.

Our advanced advertising technology experiments at InfoWorld such as behavioral targeting with Tacoda, O & O contextual targeting services like CheckM8, our own RSS advertising, lead generation and rich media experiences were under development about 3 years ago now.

This event was a great way to reacquaint myself with what’s going on out in the market starting at the top from the strategic business perspective. I knew ad exchanges were going to be hot when I learned about Right Media a year ago, but I’m even more bullish on the concept now.

Gatekeepers need to stop calling themselves gatekeepers

Time business columnist Justin Fox questioned the success of the new media methods in a recent post, “The reign of the enthusiasts”.

He suggests the algorithms that proudly surface the deep dark corners of the Internet are actually just self-referential popularity contests. When searching for his name, Justin found that the articles he’s written that are likely most influential in the real world rank lower than the ones that attracted the most link love from media-obsessed blogger types like myself.

“There are web2topians out there–Battelle and my friend Matt McAlister immediately spring to mind–who are convinced that the Googles (and Diggs and del.icio.uses and Amazons and Last.fms) of the future will do a vastly better job of steering people to what they want, such a good job that most of the gatekeepers of the current media universe will prove wholly extraneous.”

This isn’t the first time someone has accused me of being a Web 2.0 blogger. Coincidentally, the same day Justin posted this, I was mocked by a local construction worker waiting for the bus with his buddies as I passed on my way to the office. He shouted to nobody in particular,

“Man, you know what I hate? Dotcommers.” He watched me walk by stone-faced and waited for a response. The guys standing around him turned to look. Unsure still, he blurted out, “Architects, too. Hate all of them.” He got the laugh he was looking for.

Jeez, am I that boring? Or that obvious and annoying? (Please don’t say anything. I think I know the answer.)

Anyhow, Justin’s question is top-of-mind for a lot of people in the media business. Where I disagree with him and the wisdom of the media industry crowd is on the notion of “gatekeepers” or rather the need for them at all.

Perhaps the most important part of being successful in media is distribution, and the reason we’re asking what the role of the gatekeeper is today is because the Internet has disintermediated the media distribution models that helped them become gatekeepers in the first place.

Online search changed the way people access relevant information, and those who once thought of themselves as gatekeepers suddenly found themselves at the mercy of the link police, the new gatekeepers, the search engines.

Yet the weakness in Google’s algorithm that Justin describes is exactly what many of the people who get mocked for their trendy glasses, old man sport coats, carefully orchestrated facial hair events, designer shoes and man purses (I don’t have a man purse) see improving with the introduction of explicit and implicit human data into the media distribution model. The act of hyperlinking to a web page is not a strong enough currency to hold together a market of information as big as the Internet has become in recent years. It’s a false economy.

But the link currency opened the door to the idea of using behavior to help people find things. I love Last.fm not just for the music it recommends to me but because it proves this to be true. The Internet is made of people, people with a wide range of knowledge, tastes, and interests.

Now, there will always be a role for experts, and there are many cases where being an expert is not just subjective. Experts are hugely influential on the Internet as they are in other media. But I don’t see that a gatekeeper is an expert by definition.

There will also always be a role for enablers. Good enablers are often community builders who understand the rhythms of human psychology and emotion. Henry Luce was such a man, and I think he might have been a very successful web2topian today.

If those who call themselves “gatekeepers” want to share their expertise in valuable ways, then they will need to understand how the role of human data helps with distribution of that expertise. If those who aim to be enablers of communities want to be relevant, they will find ways to do that in many of the social technologies that have proven successful in this new world.

Similarly, if the people Justin affectionately refers to as web2topians appear smug, glib or arrogant when talking about media, then they are only doing themselves and everyone in the business a disservice. Gatekeepers know better than anyone that expertise does not by definition make you important. That’s a lesson the Internet generation will learn the hard way when someday they become irrelevant, too, I’m sure.

Copycat ad networks threaten Google’s stability

Any successful business model is going to have imitators. Google knows this as well as anybody. But now Google’s stranglehold on the distributed ad model is feeling weaker than ever, with new competitors every day.

The magic formula = isolate revenue collection system into a platform + make it available to other web sites – share earnings back to transaction/click source.
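
Restated as toy arithmetic, with every figure invented for illustration (real networks keep their actual splits to themselves):

    # The formula as toy arithmetic. The 65% publisher share and the $0.50 CPC
    # are assumptions for illustration, not any network's real rates.
    def settle_click(advertiser_cpc, publisher_share=0.65):
        # Split one click's revenue between the platform and the click source.
        publisher_cut = advertiser_cpc * publisher_share
        platform_cut = advertiser_cpc - publisher_cut
        return publisher_cut, platform_cut

    pub_cut, platform_cut = settle_click(advertiser_cpc=0.50)
    print(f"publisher earns ${pub_cut:.3f}, platform keeps ${platform_cut:.3f} per click")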

Yahoo! rolled out a similar offering about a year ago with YPN. eBay launched their own version recently. Amazon has had their affiliate program for years. Kanoodle, IndustryBrains, Feedburner and a host of others all offer their own twist on the same solution. Media networks such as IDG smartened up to the opportunity, as well.

The magic formula is showing cracks, though. Click fraud is not being measured effectively by independent audits, nor is payment being adjusted to compensate for it. And Google has no short term incentive to solve the problem, just as Microsoft once had no incentive to fix Windows security threats.

Linux gave Microsoft reason to change.  I wonder who will push Google into panic mode.  They may just sleepwalk into the death trap as long as their search market share remains strong.

Have no doubt, though, that Google can change. At some point Schmidt’s insistence that Google is a technology company may actually trickle down and create some revenue opportunities that are more service based. If they can scale their office products for mass adoption and perhaps create a browser optimized for those products, then they will finally have a potential revenue model to match the rhetoric.

The question is whether the market share losses surely coming in AdSense’s near future will fracture Wall Street’s love affair with the company before it can not only diversify but also stabilize on a mix of technology service revenue streams.

I can’t even imagine the complexity of the culture war that will rage internally when/if the “technology” part of the business actually becomes a real slice of Google’s revenue pie. Manufacturing consent will probably work while Google continues to grow. I’d still hate to be on a “technology” product team at a company where 99% of the revenue comes from media products…wait…from one media product.

The Google PhDs are probably predicting the copycats, the corporate positioning conflicts and the internal competitive challenges as I write this, but are they smart enough to get their product managers and biz dev guys to help them actually figure out how to solve the problems, or do they just write papers and send long emails with subject lines in all caps?

CORPORATE STRATEGY RESEARCH STUDY: IMPACT OF ‘TECHNOLOGY’ MARKET POSITION IN THE FACE OF MULTI-FRONT WAR ON ONLY REVENUE STREAM MAY CAUSE INTERNAL STRIFE

Maybe Microsoft’s MSN team has some advice for Google’s technology product teams about operating in the shadow of the cash cow.

Why (and how) the online ad model needs to change

Somehow I keep expecting some company to break through and solve the problems with the Google AdSense model. As advertisers, buyers and media vehicles get smarter about efficiency, the holes in the system get bigger and bigger.

AdSense revenues help a lot of mid to large-sized web sites, but really more as incremental revenue. By the time you’re big enough for AdSense to support your business, there are several other revenue opportunities with larger payouts available to you.

And there’s no doubt that AdSense (and most Internet advertising) is failing to help people find and buy the things that matter to them. How can it be that we have an ad model that is considered wildly successful when a campaign or ad unit gets a click-through rate of 1%? And the reality is that it’s much much worse than that on average.



Why are click through rates so low? Because the ads don’t matter to people. They aren’t relevant. They don’t help people identify products or brands that matter to them. They don’t help people locate the right deal at the right time.

Yes, some people get lucky if they’re paying attention. There wouldn’t have been $5B in search ad revenue in the market in 2005 if nobody was clicking on the ads. But the click performance and subsequent conversion rates suggest this kind of ad network is just a spray hose of wasteful bits showering the Internet with clutter.
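
Some rough funnel arithmetic, using assumed rates rather than measured campaign data, shows just how little of that spray actually lands:

    # Illustrative funnel math; every rate here is an assumption, not measured data.
    impressions = 1_000_000
    ctr = 0.002        # 0.2% click-through, closer to typical than the celebrated 1%
    conversion = 0.02  # assume 2% of clicks turn into a sale

    clicks = impressions * ctr
    sales = clicks * conversion
    print(f"{impressions:,} impressions -> {clicks:,.0f} clicks -> {sales:,.0f} sales")
    # 1,000,000 impressions -> 2,000 clicks -> 40 sales: 0.004% of the audience buys.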

It doesn’t work for advertisers, either. Advertisers want more control over their ads, where they appear and to whom they are shown. Blanketing text links blindly across the Internet does not necessarily result in paying customers. They know they’re wasting money, but they can’t afford not to be present in the network.

The AdSense model does much more to help Google and the Google shareholders than it does to help any of the customers it is supposed to serve.

I think the Amazon affiliate program is much closer to a sustainable ad model for the future. When you can track clicks all the way to a sale, then everybody wins. The weakest link in the Amazon affiliate chain is the media vehicle, which has to work a lot harder to drive clicks that convert to sales. But the buyer and the seller are both happy, and that’s ultimately what matters most.

I’d love to see an ad network that lets media vehicles optimize the ad content and the display rules for the ads. The look and feel of an ad alone is not going to crank up conversion rates. Media vehicles need to help the right ad get to the right person.

For example, when I post on my blog, I should be able to flag a stream of ad content and define the type of algorithm that makes the most sense for that post and the users who are most likely to read it. This post should probably link to lead generation service providers even though I haven’t explicitly used the term “lead generation” anywhere in the post…uh, well, you get the idea.

Likewise, users should be able to self-identify as buyers. I haven’t yet set up a wifi network in my home, so I’d love for every tech-related web site I visit to show me the latest deals and setup guides and retailers for wifi gear. I’d actually like the content on all those sites to adjust, as well. I want to see what’s new and interesting at these sites, but they should be able to surface content from deep in their archives that is relevant to the things I’m actively pursuing. My intent should edit the home page for me.

I guess I’m saying that somebody needs to build a service that on one side connects directly into an advertiser’s sales conversion or transaction systems and on the other side distributes marketing links and images for media vehicles to take and optimize. The system should track performance across the chain and offer optimization options at all points along that chain.
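
Here is a minimal sketch of one link in that chain, with invented names and a toy in-memory store standing in for real ad-server and checkout systems: the media vehicle places a tagged link, the click id rides through to the advertiser’s sale, and the sale is credited back to its source.

    # Everything here is invented for illustration; a real version would span an
    # ad server, an advertiser's checkout and a reporting system, not one dict.
    import uuid

    clicks = {}  # click_id -> which media vehicle sent the visitor

    def tag_link(destination, vehicle):
        # Ad-platform side: issue a tracked link for a media vehicle to place.
        click_id = uuid.uuid4().hex
        clicks[click_id] = vehicle
        return f"{destination}?click_id={click_id}"

    def record_sale(click_id, amount, vehicle_share=0.10):
        # Advertiser side: on conversion, credit the vehicle that drove the click.
        vehicle = clicks.get(click_id)
        if vehicle is None:
            return None  # the sale didn't come through the network
        return vehicle, amount * vehicle_share

    url = tag_link("http://advertiser.example/wifi-router", vehicle="myblog")
    click_id = url.split("click_id=")[1]
    print(record_sale(click_id, amount=120.00))  # ('myblog', 12.0)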

Pieces of this exist and some of it is very complicated, I know, but I don’t see why efficiencies can’t be improved. And if enough advertisers are able to offer affiliate programs that track impression-to-click-to-sale, then they may even start competing with each other and offering better incentives to media vehicles that find customers for them.

Users would see ads for things they want to buy. Advertisers would sell more product. And media vehicles would earn more from the revenue share. Where’s the downside?