People Inside & Web 2.0: An Interview with Tim O’Reilly

OpenBusiness spoke with Tim O’Reilly about the evolution of the Web and its most current trends, commonly labeled Web 2.0. In September 2005, Tim wrote a seminal piece that presented many aspects of Web 2.0 and that now anchors much of the buzz around a new generation of internet applications. In the interview, he re-emphasizes the most important points of this development, talks about the evolutionary relationship between open & free, and shares his vision of bionic systems that combine human and computational intelligence.

OB: At OpenBusiness, we’re especially interested in the rise of open content and open services and how they deal with the concept of “free”. How do you define that relationship? When are open and free the same and in what ways are they different?

For the last couple of years, I’ve been preaching an idea that Clayton Christensen first wrote about and called the “Law of Conservation of Attractive Profits.” We talked about it in response to my talk, The Open Source Paradigm Shift, in which I focused a lot on lessons from the IBM PC.

What I saw was that IBM – through genius or accident or both – introduced this new, open architecture for a personal computer: anyone could build one, and that was open hardware. It was not Open Source as we know it today, but it was pretty close. IBM said, “Everything has to be built with off-the-shelf parts from at least two suppliers, here is the specification, now go out, be fruitful and multiply.” The unintended consequence of that decision was that it took all the profits out of assembling computer systems, which had been the source of great profits in the past. IBM was a completely dominant company, and now we have low-margin players like Dell. But we also ended up with high-margin players like Intel and Microsoft, neither of which IBM foresaw. They signed a deal with Microsoft to do the operating system, Intel got control of a key component and ended up with near-monopoly profits, all while IBM struggled for many years. They have come back now, but they had destroyed the computer industry as they knew it, replaced it with a new one, and there was a period where – at least from IBM’s point of view – all the profits were disappearing from the system.

So when I started seeing comments by Ballmer saying Open Source is an intellectual property destroyer and is taking all the profits out of the system, I thought: this is just what had happened before. We’re seeing the commoditization of software, where the value is going out of many classes of software that people used to pay for. But the value is being rediscovered as it moves up the stack and down the stack. That led me to a couple of the new ideas that we now call Web 2.0: the Internet as a platform, information businesses using software as a service, harnessing collective intelligence – that’s moving up the stack. Down the stack is what I call “Data as the Intel Inside.” This stack model repeats itself as the economic model repeats itself, and so I think that each time you see something becoming free, something else is becoming expensive, which goes back to the Law of Conservation of Attractive Profits.

Software became free, content even became largely free, but now Google and Yahoo are collecting enormous sums of money by directing attention to that free content using a platform built on top of free software. Similarly, we looked at Napster and thought that all music would be free, and now Apple has a billion-dollar business selling songs. We’re also just at the early stages where Skype and Asterisk are making telephone calls free – relatively speaking – and I believe that there will be new sources of revenue overlaid on top of that market.

I also think that it’s really easy, early in a market with disruptive innovation, to see everything becoming cheap or free or commoditized and not to see the areas where there are new sources of control and new sources of revenue.

OB: Especially in the context of Web 2.0 business models, there has been a lot of emphasis on the ad-based model, which now supports everything from Wi-Fi to your mail account. What other layers do you see on top of that, and what alternate models are emerging?

Oh, absolutely – it actually goes back to this idea of “Data as the Intel Inside”. Look at all these mapping applications, for example, in which Navteq and TeleAtlas license data to Google, Yahoo and MSN: those companies monetize it through advertising, but the data suppliers monetize it through licensing. The economic ecosystem is often much more complex than people realize, because I don’t think that it’s just an ad-supported market.

Ads are one way of collecting money, but they’re far from the only way, and if you look at the complexity of the web ecosystem, there are all kinds of people who are participating. All of those free bloggers are actually paying their blogging service provider or their ISP for hosting – an example of how different models start to work together to build any complex ecosystem.

OB: As you mentioned before, much of Web 2.0 is about user-generated content and harnessing collective intelligence. What were some of the catalysts that drove the web in this direction recently and what has sparked these recent shifts?

I wouldn’t say that anything really sparked it. Instead, we talk of network effects, by which a network’s value grows with the number of connections it makes. The internet always had this characteristic – its value was driven by the number of nodes – and the emergence of user-generated content and the harnessing of collective intelligence is just an expression of that fundamental dynamic.

What really happened was that the original Web had all of these characteristics: it was from the edges, it was bottom-up, it was long-tail. But then we had this detour where traditional content companies, and people imitating traditional content companies, decided that it was all about publishing – “content is king” – and that this would get all the eyeballs, which would be monetized by advertising. That was the dot-com boom and bust. But when the dust cleared, you saw that some companies had managed to survive. Pets.com was gone, but here was Yahoo, here was Google, here was eBay, here was Amazon. All these companies survived, and we asked ourselves, back when we first coined the term Web 2.0, “What distinguishes them?” In one way or another, they had rediscovered the logic of what makes Internet applications work – they had understood network effects.

Overall, there are certainly defining moments. For Google, it was Overture coming up with the advertising model, which put Google’s user demand engine together with a financial model. There was also the insight that you don’t just study the contents of documents but what people do with them, as evidenced by the links they make.
If you look at eBay, it’s pretty clear that they had leveraged network effects in a fairly fundamental way too. Pierre [Omidyar] has this idealistic vision of a system he’s building in which buyers and sellers learn to trust each other.

Amazon is also a great example I keep bringing up, because their system didn’t have a built-in architecture of participation – but they still worked it! On every page, they invite their users to participate, to annotate their data and to add value. They effectively overlaid an architecture of participation on a system that doesn’t intrinsically have one. In many ways, I think they’re the best company to study, because they worked it, whereas the other companies mostly locked into a sweet spot.

So as far as turning points go, the real one came when Tim Berners-Lee introduced the World Wide Web, and everything else has just been a voyage of discovery.

OB: Since those earliest days, the Web has been an open platform, but over the years, and especially more recently, companies like Google and Yahoo have started to centralize more and more data, attention, and now also user-generated content like photos and videos. Is there an increasing trend towards more centralization on the Web today?

Yes and No. On the one hand, the Web is extraordinarily good at decentralizing data: everyone has their own website with their own location and storage. Some sites have managed to become large aggregators for a certain class of data, such as the various photo sharing sites or music sharing sites for example.

But when you really think about centralization vs. decentralization, the biggest aspect of centralization actually comes via large-scale aggregators like Google – because it doesn’t matter whether you put your data on Google or on your own site: you’re still putting it on Google in the end as they’re indexing everything.

The real lesson is that the power may not actually be in the data itself but rather in the control of access to that data. Google doesn’t have any raw data that the Web itself doesn’t have, but they have added intelligence to that data which makes it easier to find things.

To me, one of the seminal applications that made me think seriously about the Internet as Platform was Napster, in contrast to MP3.com. I had visited MP3.com not long before Napster appeared, and they were proudly showing me their servers with “all this music” on them. But then the kid who grew up in the age of the Internet came out with Napster and asked, “Why do you need to have all this music in one place? My friends already have it, and all we need is our set of pointers.” It’s that evolution from data to metadata that’s really interesting to me – and the question of where people are going to get access to it.
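That shift can be sketched in a few lines – a toy in-memory index, with all names illustrative rather than anything Napster actually ran: the service stores only pointers (metadata about which peer claims which song), never the songs themselves.

```python
# A toy peer index in the spirit of Napster: the service holds only
# metadata (which peer claims which song), never the data itself.
peer_index = {}

def announce(peer, songs):
    """A peer registers the songs it is sharing."""
    for song in songs:
        peer_index.setdefault(song, set()).add(peer)

def locate(song):
    """Return the set of peers claiming to have the song."""
    return peer_index.get(song, set())

announce("alice", ["song_a", "song_b"])
announce("bob", ["song_b"])
print(locate("song_b"))  # both peers hold it; the index holds only pointers
```

The point of the sketch is that `peer_index` grows no matter how much music exists, because it never stores a byte of audio – only who to ask.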

There are some cases where a certain type of data is hard to generate, as in DigitalGlobe launching a satellite to supplement the US satellite data, or Navteq driving the streets at a cost of 500 million dollars plus to build a unique database – that’s one source of control. But the aggregators – the Yahoos, the Googles, the Amazons – are the other type of control, with data that they don’t actually own but which they control through the namespace or the search space or some higher-level metadata.

I think we’ll find that, in some ways, this is the real secret of the relationship between free and non-free content. There will be so much free content that it’s going to be hard to find, and those who can help you find what you want will be able to charge for it in one way or another – whether through advertising, through subscription, or something else. It’s about managing to find “the best”, and “the best” is a kind of metadata.

OB: What developments potentially worry you in this space?

First off, I think there will always be negative developments. All new technology goes from this wonderful phase when all things seem possible to the point where, [Tim laughs] we get the blue screen of death – that’s a natural alternation. When bad things happen, they’re just part of the evolution and of the ongoing cycle.

What worries me the most are governments getting involved and backing their existing companies. The patent system is a great example where the government is clueless and is disrupting the real activity of the market. We see it in the way that the Digital Millennium Copyright Act is trying to protect the interests of existing players while stifling the future. All of this is going to drive innovation to markets in countries that are more forward-looking because the internet is of course a global phenomenon and if you outlaw something, it will simply crop up somewhere else. So our challenge as an industry and as an economy is to discover the rules by which we can create value and ultimately create wealth in this new environment. It’s not about protecting the old ways of creating wealth but rather that creative destruction has to take place. Although companies may suffer from it, I think we’ll all be better for it.

OB: What upcoming developments excite you most and what do you see missing currently which you’d like to see grow?

I have been thinking a lot about “bionic software”, a concept that was introduced by You Mon Tsang with his start-up Boxxet, by which people become components in software. I’ve talked about this for a number of years, and I believe that Amazon’s Mechanical Turk might have been indirectly inspired by a talk I gave there in May of 2003. I talked about the Turk and asked, “What are the differences between web applications and PC applications?” Web applications have people inside of them. You take the people out of Amazon and it stops working. It’s not a one-time software artifact; instead, it’s an ongoing process where people have to do things every day for the software to keep working. So I referred to the Mechanical Turk, the chess-playing hoax which had a man inside, as a metaphor for the difference between internet applications and PC applications.

Amazon has given it a new twist and so have many other applications by harnessing the users to perform tasks that you couldn’t do with just the computer. And there is a really interesting thread there because for a long time, many people thought that we were going to arrive at some kind of artificial intelligence where we get the computers to be smart enough and match people. And what we’re doing instead is building a hybrid system, in which the computers make us smarter and we make them smarter – that’s bionic software.
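A minimal sketch of such a hybrid system – with `model` and `ask_human` as hypothetical stand-ins, not any real API: the computer answers what it is confident about and routes the rest to a person inside the software, Mechanical Turk-style.

```python
def classify(item, model, ask_human, threshold=0.8):
    """A 'bionic' pipeline: the machine handles what it is sure about,
    and a person inside the software handles the rest."""
    label, confidence = model(item)
    if confidence >= threshold:
        return label
    return ask_human(item)  # e.g. a task posted to a human work queue

# Illustrative stand-ins: a model that is only confident on "easy" items,
# and a human worker who always answers "dog".
model = lambda item: ("cat", 0.95) if item == "easy" else ("cat", 0.4)
ask_human = lambda item: "dog"

print(classify("easy", model, ask_human))  # machine answers: cat
print(classify("hard", model, ask_human))  # routed to the human: dog
```

The design choice is the threshold: lower it and the computer does more of the work; raise it and more of the “last mile” lands with people.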

When Google gives you 10 results and says, “One of these might be what you’re looking for”, it leaves us with the last mile. When a website uses a little CAPTCHA block, it’s asking us to do something that’s easy for humans but hard for computers when it comes to authentication.

The tag cloud also, which has spread from Flickr to all kinds of other websites, is a user-interface element that is basically built by the users of the system as the system is being used. So we are the software component that generates the tag cloud – we’re the input – and the tag cloud is a metaphor for this new kind of software.
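The tag cloud itself is simple enough to sketch. Assuming only a stream of raw tagging events from users (the function name and pixel sizes here are illustrative, not any site’s actual code), each tag’s display weight scales with how often people applied it – the users’ activity literally is the input that builds the interface.

```python
from collections import Counter

def tag_cloud(tag_events, min_px=12, max_px=32):
    """Map raw user tagging events to font sizes for a tag cloud.

    Each tagging action contributes one tag; a tag's display size is
    scaled linearly between min_px and max_px by its frequency.
    """
    counts = Counter(tag_events)
    lo, hi = min(counts.values()), max(counts.values())
    span = (hi - lo) or 1  # avoid division by zero when all counts match
    return {
        tag: round(min_px + (n - lo) / span * (max_px - min_px))
        for tag, n in counts.items()
    }

# The users "build" the interface simply by tagging:
events = ["sunset", "beach", "sunset", "dog", "sunset", "beach"]
print(tag_cloud(events))  # → {'sunset': 32, 'beach': 22, 'dog': 12}
```

Nothing in the system designs the cloud up front; the sizes emerge from aggregate behavior, which is exactly the point of the metaphor.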

OB: And to close what’s been a fascinating interview, I’m curious what you saw in the last month or two that stood out to you and sparked your curiosity.

There’s a site that’s essentially a “Hot-Or-Not” for avatars in virtual worlds [http://RateMyAv.com/], where you can put up your character from Second Life or World of Warcraft and get it rated by users, just like the Hot-Or-Not site [http://www.HotOrNot.com/]. That was really interesting to me because it showed that the real and virtual are interpenetrating further. We’re going to see many of the things that took place on the web increasingly recapitulate themselves in some of these virtual worlds. There’s a real opportunity, because many economic models out on the web could obviously be reproduced. It’s a cool little signal of a future to come…


This work is licensed under a Creative Commons License.