17 Jun 2005
I'm here at the Berkman Thursday meeting listening to Urs Gasser speak about what he calls "information quality". He started out talking about a few cases where "quality" implicitly meant accuracy. While this made sense for some fairly straightforward issues (e.g. fake herbal medicines or lies in the news), it struck me as a pretty simple and obvious definition.
Then he talked about information as being relational between communicating entities. That makes more sense, because different individuals will have different goals for a piece of information. And now he's pulling in transnational legal issues that arise when stuff published on the internet gets delivered in a place that has different libel or hate speech laws.
At this point, the definition of information quality seems so broad that we couldn't possibly conclude anything about it in general. And Urs pauses for questions; I ask this (bluntly). First answer: yes, there are many aspects. Second answer: there are some internet-specific issues in information quality, like Wikipedia edit wars (I don't quite believe this). Thirdly, these issues are not just theoretical but also relevant in practice.
I am still not convinced, but I am listening.
Urs thinks there are three approaches worth considering:
- Laissez-faire. We don't want to even think about internet info quality regulation; no one has the power to regulate quality. The problem with this argument, in my opinion, is that information quality is fundamentally dependent on social exchanges and there is a power balance in those which will always serve to regulate it in the context of that social structure. Good and bad fight it out over time and the good lives (J.S. Mill).
I asked why this is even possible. Mal asks a similar question: info quality is transactional, so just producing more information without adding more informed transactions implies a vacuous definition of information. As if you could produce information just by adding speakers. Bah.
- Information order model. An authority, like China but possibly also a democratic society, makes choices about information quality regulation. Also consider the BBC and internet forums with a moderator.
- A decentralized model. Starts with the assumption "information quality is important" but admits it's hard to define and that there are fundamental reasons we cannot produce a valuable shared normative definition of information quality.
Now we go off on a tangent and talk about markets for a while. Google seems like a sort of information market in the sense that it values each website. There are different approaches (reputation systems, etc.) and they have different advantages and disadvantages. I think this is what Urs' paper is about, so I'm hoping that means this will get more specific.
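The "Google values each website" idea is roughly the PageRank intuition: a page's score comes from the scores of the pages linking to it. A minimal sketch of that kind of link-based reputation scoring, using a toy graph and an illustrative damping factor (this is the textbook power-iteration idea, not Google's actual implementation):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        # every page gets a small baseline, plus shares from its in-links
        new_rank = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new_rank[q] += share
            else:
                # dangling page: spread its rank evenly over all pages
                for q in pages:
                    new_rank[q] += damping * rank[p] / n
        rank = new_rank
    return rank

# toy graph: "c" is linked to by both "a" and "b", so it ends up ranked highest
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
```

The point of the sketch is that no one decides quality centrally; the score emerges from everyone's linking behavior, which is what makes it feel market-like.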
I want to ask: what problems are you trying to solve?
I don't really think there is such a thing as information quality. There are such things as trust, lies, information availability, listening, speaking, publishing, believing, regulation, communication, relationships, and censorship. But is there some meaningful common ground?
Urs notes that on blogs we don't have laws to establish social contracts for information-related transactions. Some bloggers explicitly state policies. On top of this, I think that all bloggers take part in implicit social contracts, unwritten assumptions that underlie communities and cultures and help people communicate.
Urs has to leave… which I think is kinda lame, but quite forgivable. I'm still not sure he's talked about anything specific enough to be meaningful.