The recent Web 2.0 conference predictably accelerated some prognostication on Web 3.0. I don’t think these labels are very interesting in themselves, but I do admit that the conversations about what they might be, if they had a meaningful existence, expose some interesting ideas. Unfortunately, they (both the labels and the conversations) also tend to generate a lot of over-excitement and unrealistic expectations, both in terms of financial investment and doomed IT strategies. Dan Farber does his usual great job of collecting some of the thoughts on the recent discussion in “Web 2.0 isn’t dead, but Web 3.0 is bubbling up”.
One of the articles Dan links to is a New York Times article by John Markoff, in which John basically equates Web 3.0 with the Semantic Web. Maybe that’s his way of saying very subtly that there will never be a Web 3.0? No, he is more optimistic. Dan also links to Nick Carr’s post welcoming Web 3.0, but even Carr is gentler than he should be.
But here’s the basic problem with the Semantic Web – it involves semantics. Semantics are not static, language is not static, science is not static. Even rules are not static, though syntax and logical systems have, at least in some cases, longer shelf lives.
Now, you can force a set of semantics to be static and enforce their use – you can invent little worlds and knowledge domains where you control everything, but there will always be competition. That’s how humans work, and that is how science works as far as we can tell. Humans will break both rules and meanings. And although the Semantic Web is about computers as much as (or more than) about humans, the more human-like we make computers, the more they will break rules, change meanings, and invent their own little worlds.
This is not to say that the goal of a Semantic Web hasn’t generated and won’t generate some good ideas and useful applications and technologies – RDF itself is pretty neat. Vision is a good thing, but vision and near-term reality require different behavior and belief systems.
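For readers who haven’t run into RDF: its core idea is that every statement is a subject–predicate–object triple, and a graph of such triples can be queried by pattern matching. Here’s a minimal sketch of that idea in plain Python – the example URIs and predicate names are made up for illustration, and real RDF work would use a proper library (rdflib, for instance) rather than bare tuples:

```python
# RDF's core model: every statement is a (subject, predicate, object) triple.
# These URIs and predicates are hypothetical, purely for illustration.
triples = [
    ("http://example.org/markoff-article", "discusses", "SemanticWeb"),
    ("SemanticWeb", "equatedWith", "Web 3.0"),
    ("SemanticWeb", "reliesOn", "RDF"),
]

def objects_of(graph, subject, predicate):
    """Return all objects matching a (subject, predicate, ?) pattern."""
    return [o for s, p, o in graph if s == subject and p == predicate]

print(objects_of(triples, "SemanticWeb", "equatedWith"))  # ['Web 3.0']
```

The trouble the post describes lives exactly here: the triples are trivial to store and query, but agreeing on what “discusses” or “equatedWith” *means* – and keeping that meaning stable across communities and over time – is the hard part.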