On May 11, 2005, I gave one of the Alfred Deakin Innovation Lectures in the Town Hall in Melbourne. The subject of the evening was Reporting Change: the media and innovation. Jay Rosen from NYU and PressThink was the other speaker.
Here’s what I had to say:
Nullius in verba: navigating through the new media democracy
When the Berlin wall fell in November 1989, a new era of democracy was inaugurated.
In the 1940s, there were fewer than ten democratic countries (Australia among them). As late as the 1980s, still less than half the world’s nations were democratic. But now the clear majority of countries are democratic, to some degree.
So it’s not surprising that we’re at the early stages of understanding how a world where most countries are democracies works.
We know that an effective democracy requires both effective institutions and the culture to use those institutions well. Some social scientists use the terms hardware and software. One could equally talk about having the right tools and knowing how to use them.
You need, for example, an independent judiciary and an independent central bank (hardware), but you also need judges and central bankers who are comfortable with their independence (software).
You need laws and institutions allowing for universal suffrage (hardware), but you also need a populace that doesn’t feel intimidated into voting one way or another (software).
You need the machinery of a legislature to debate and create new laws (hardware), without the corruption and cronyism that can taint its workings (software).
The legal scholar Larry Lessig, best known for his work on intellectual property and a Deakin lecturer the other day, has written: “90% of the challenge is to build a culture that respects the rule of law, and that practices it. A document doesn’t build that culture.”
What he’s pointing out, in my terms, is that the hardware of a document, whether it’s a single law or something as grand as a constitution, isn’t enough. You need the culture – the software – too.
When you have both the hardware and the software, transformation can be rapid and successful. Look at Poland, the Czech Republic, Hungary or even Slovakia.
Their transition certainly hasn’t been easy, but it took them less than 15 years to move from being undemocratic dictatorships with failing economies, in thrall to the Soviet Union, to being full members of the European Union. Slovakia, for its part, didn’t really arrive at what might be termed normal democracy until it got rid of the appalling Vladimir Meciar as prime minister in 1998.
You can see what happens in the absence of these two constituent parts – hardware and software – in Russia.
Andrew Jack, in his recent book Inside Putin’s Russia, called the system there a “parody” of a political system. True, Putin was overwhelmingly elected by the people, but Jack writes that there are:
“Parties without ideas, debates without the most important participants, media without criticism… [and] no institutional framework, no broader political culture to help foster diversity of opinion for the future.”
In Russia, some call this “managed democracy”. Yegor Gaidar, who was Yeltsin’s first prime minister, calls it “closed democracy”. It is a system where the opposition is legal in principle, there is no mass repression, reasonably well-conducted elections take place, but “the results… are always a foregone conclusion”.
As we can see as well, when you have a system where the hardware is not accompanied by sufficiently developed software, where there may be the tools, but no one knows how to use them, you risk a steady degradation of the hardware. When Russia one day moves into a post-Putin era, it may need to reconstruct the hardware it is now neglecting.
What is the point of this digression about political hardware and software? To my mind, media is undergoing a similar transformation to the one we have witnessed in the political realm.
It has only recently acquired the trappings of democracy. I want to look at what the tools – the hardware – of that democracy are. I also want to explore how we might learn to use these tools effectively – to develop the software.
Although I picked on the fall of the Berlin wall as the key moment of political change, the story of democratisation should also be seen in a longer historical perspective, stretching over centuries. So, too, with media. I make dramatic claims about the potential of the new era we are entering. But there have been tidal movements in media democracy since media existed.
Before the invention of the printing press, dissemination of information was very tightly held. Partly, this was because of the low level of literacy. But it was also because media production – to use a crude term – was a highly skilled craft. The medieval monks created some great artworks with their illuminated manuscripts, but they were also slow, limited producers whose work was circulated only narrowly.
It would be nice to think that Gutenberg stimulated a rapid breakdown of this tight control. It’s unsurprising, however, to see that the authorities in most places had no interest in letting a free media flourish.
In 1557 the Stationers’ Company of London was formed. It had the exclusive right of printing and publishing in the English dominions. Only the 97 London stationers and their successors through apprenticeship could legally print. So publishing was centralised in London, under the immediate supervision of the government. As part of its exclusive privilege, the Stationers’ Company had the power to search and seize publications which infringed their right.
When Elizabeth came to the throne in 1558, it was soon decided that a different, more rigorous system was required. In 1559 the 51st of the Injunctions Concerning Religion stated that no book in any language should be printed without license from either the Queen, six members of the Privy Council, the Chancellor of Oxford, the Chancellor of Cambridge, the Archbishop of Canterbury, the Archbishop of York, the Bishop of London or the Bishop and Archdeacon of the place of publication. In the 1580s the power of licensing was simplified to two people – the Archbishop of Canterbury and the Bishop of London. The narrow group of permitted printing presses belonging to the London Stationers were supplemented by two new presses in Oxford and Cambridge.
To a large degree, this system persisted until the English civil war overturned so many institutions. Now the licensing was in the hands of parliament itself. In 1643, it issued an order returning the licensing to the Stationers’ Company. The reason for these powers was:
“The great late abuses and frequent disorders in printing many false, forged, scandalous, seditious, libellous, and unlicensed Papers, Pamphlets, and Books, to the great defamation of Religion and Government.”
I’ve focused on this chronology because it provoked one of the great statements of press freedom, Milton’s Areopagitica. Milton’s powerful advocacy of freedom was unsuccessful: the order stood until 1695. But his arguments – couched in the intricate, poetic language we would expect from the author of Paradise Lost – remain valid today.
The Order is hostile to truth: First, as tending to efface knowledge already gained. The waters of truth have been likened to a fountain; but they will stagnate now into a muddy pool of conformity and tradition. The man of business, chiefly anxious to keep up appearances, and the man of pleasure, anxious to be saved trouble, will give up the attempt to think in religion and will become the merest formulists… Men with a good conscience and a real love of truth ought to wish for open discussion. Secondly – the Order is hostile to truth as preventing any addition to knowledge. Truth was once incarnate on earth; but it has been hewn in pieces by Falsehood, and the pieces have been cast to the four winds; and as Isis sought for the limbs of Osiris, slain and mangled by Typhon, so the friends of truth are even now looking for the scattered members. Do not be hinderers of the search.
The impulse to conceal, to limit media did not vanish in the seventeenth century. Raymond Williams, in his book Television, recalls that when the industrial revolution required the reorganisation of education, “the ruling class decided to teach working people to read but not to write. If they could read they could understand new kinds of instructions and, moreover, they could read the Bible for moral improvement. They did not need writing, however, since they would have no orders or instructions or lessons to communicate.”
Something unexpected happened, however. As Williams points out, “There was no way to teach a man to read the Bible which did not also enable him to read the radical press. A controlled intention became an uncontrolled effect.”
Here Williams reveals the thrilling thing about software – about knowing how to use tools. The effects are inevitably uncontrolled. That is precisely why plotting the impact of coming technology is so difficult.
Alan Kay is the computer scientist who developed the idea of the laptop computer and was instrumental in the creation of the graphical user interface we all now use. Kay has said that it is easy to predict what technology we will have in 20 years’ time. You can see it in its early stages in the research laboratories. What is unpredictable, however, is how society will choose to use the technology.
Who would have thought, for example, that the video cassette recorder would turn out to be such an effective babysitting device?
With that caution in mind about the unpredictable course of technology use, I want to examine four tools – four pieces of hardware in the political sense I’ve developed – that are helping create this new democratisation.
The most obvious, the most ballyhooed, is the weblog, or blog. I don’t know whether weblogs have received the attention here in Australia that they have in the US and Britain, although I do read some very good Australian blogs. But with all the trumpeting of blogs, it’s extraordinary to realise how recent an innovation they are.
My own, little-read weblog, Davos Newbies, was started in January 2000. I’m reasonably certain that makes me one of the world’s first 1,000 bloggers. Some people in the know reckon it might put me in the first 500. That’s only five-and-a-half years ago. Technorati, which, among other things, tracks weblogs, reckons there are now nearly 10 million of them.
I believe weblogs are at the centre of media democratisation for a simple reason. They enable what weblog pioneer Dave Winer terms the “two-way Web”. Most of the media we come into contact with demands that we be consumers or observers. Thanks to weblogs, there is a simple way for all of us to be creators.
The second tool central to the media transformation is the wiki. Some of you may not know what a wiki is. A wiki is a website that allows users to add content, and also allows anyone to edit the content.
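For the technically inclined, the wiki mechanism is simple enough to sketch in a few lines. This is a toy model – not how any real wiki engine is implemented – but it captures the two essential properties: anyone can edit any page, and every revision is kept, so changes can be reviewed or rolled back.

```python
# A minimal sketch of the wiki idea (toy model, not a real wiki engine):
# any editor can change any page, and the full revision history survives.
from collections import defaultdict

history = defaultdict(list)  # page title -> list of (editor, text) revisions

def edit(page, editor, text):
    """Record a new revision of a page; nothing is ever overwritten."""
    history[page].append((editor, text))

def current(page):
    """The page as readers see it: the latest revision, or empty if none."""
    return history[page][-1][1] if history[page] else ""

edit("Melbourne", "alice", "Melbourne is a city.")
edit("Melbourne", "bob", "Melbourne is the capital of Victoria, Australia.")

print(current("Melbourne"))       # the latest revision wins
print(len(history["Melbourne"]))  # but both revisions survive
```

The point of keeping the history is that openness becomes safe: a bad edit is never destructive, because any earlier revision can be restored.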
The most dramatic use of the wiki so far is the Wikipedia, an encyclopedia created entirely by users contributing and editing articles. The English version now includes over 500,000 articles (the Bahasa Indonesia Wikipedia, incidentally, contains over 8,000).
None of the contributors – writers or editors – are paid. People participate out of interest in the project and for the kudos that participation earns in the community.
The people who spawned the creation of the Wikipedia have now also started Wikinews, which is user-created and user-edited news.
The Wikipedia is already a primary reference source for me. As Simon Waldman, The Guardian’s director of digital publishing, has pointed out, the Wikipedia is extraordinarily good for capturing an event as a live, reference publication. I have no idea when the Encyclopedia Britannica will do an article on the Indian Ocean tsunami. The Wikipedia provides a thorough, fascinating log. I don’t yet make any great claims for the success of Wikinews – I suspect that a wiki is not very efficient as a news outlet.
The advantage of an open, community-supported site like Wikipedia is best seen by contrast with a closed system. The technology commentator David Weinberger recently looked at The New York Times’s project to set up topic pages, drawing on its 150 years of archives. So if you want to find out about the Brooklyn Bridge or the Brooklyn Dodgers, the Times topic site will neatly collect all the archive material. But, Weinberger points out, “We’re going to find the Wikipedia page more useful, more current, more neutral and more linked into the Web. If we don’t, we’ll edit the Wikipedia page until it’s better. And then we’ll link it to the NYTimes.com topic page.” He concludes, “You’ll hear the groan of the hawser as the ship of trust changes berths.”
My third tool is Google.
One of the key problems we face in the new media age is finding a path through the deluge of information. I’ll discuss this further when I talk about how we use the tools. Before Google, there were certainly decent search engines, organised either through clever language searches – hunting for a web page that had your search terms prominent or used in a way that indicated relevance – or through designed hierarchies, as the original Yahoo was.
These methods still have their uses.
But Google has been built on a different insight. It produces search results based on PageRank (named after co-founder Larry Page, not the notion of a web page). PageRank in turn is based on the collective wisdom of the Web. If a page attracts more links, and if the pages linking to it have higher PageRank themselves, it moves up the search results.
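The idea can be sketched in a few lines of code. This is a toy version – a hypothetical four-page web, iterated until the ranks settle – and not, of course, Google’s actual system, which is vastly more elaborate:

```python
# A minimal PageRank sketch (toy model, hypothetical link graph):
# a page's rank is divided among the pages it links to, and the
# calculation is repeated until the numbers stabilise.
links = {  # page -> pages it links to (invented for illustration)
    "wef.org": ["davos.ch"],
    "davos.ch": ["wef.org"],
    "blog-a": ["wef.org", "davos.ch"],
    "blog-b": ["wef.org"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}
damping = 0.85  # the damping factor used in the original PageRank paper

for _ in range(50):  # power iteration
    new = {p: (1 - damping) / len(pages) for p in pages}
    for p, outs in links.items():
        share = rank[p] / len(outs)  # rank is split among outgoing links
        for q in outs:
            new[q] += damping * share
    rank = new

for p in sorted(rank, key=rank.get, reverse=True):
    print(f"{p:10s} {rank[p]:.3f}")
```

In this invented graph, wef.org ends up on top simply because more pages – including the reasonably ranked davos.ch – link to it. That is the collective wisdom at work: no editor decided the order.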
So if I Google the word “Davos”, the World Economic Forum’s homepage comes out as the top listing, the village of Davos itself comes second and my weblog, Davos Newbies, comes third. Given that my site is no longer about Davos, that may not be very useful. But it happens because lots of people – some of them with very high PageRanks – have linked to me over the years.
My fourth tool is probably unfamiliar to many of you. RSS stands for Really Simple Syndication. RSS allows news feeds to be constructed from the content on a website. These feeds can then be read by something called an aggregator.
So if I want to keep up with what The Age is writing about national news, I can subscribe to its national feed. Every time a new article is added to that feed, it appears in my aggregator. I don’t need to visit the website on the off chance that something is new. The use of RSS is spreading rapidly: the BBC expects 10% of its web traffic to be driven by RSS by the end of this year.
Why do I think this is a crucial new tool? If you’re content with a small handful of sources of information, it’s not. But if you want to draw on the amazing multiplicity of sources that are now available, you can either laboriously visit each website regularly, or you can far more efficiently subscribe to feeds and let the flow come into your aggregator. There’s one place to look for anything new.
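Under the hood there is nothing mysterious about an aggregator. Here is a toy sketch: the feed contents are invented (a real aggregator would fetch the XML over HTTP and poll it periodically), but the shape of an RSS 2.0 feed and the “show me only what’s new” logic are genuine:

```python
# A toy aggregator: parse an RSS 2.0 feed (inlined here as a string,
# with hypothetical items) and surface only the items not yet seen.
import xml.etree.ElementTree as ET

FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>The Age - National</title>
  <item><title>Story one</title><link>http://example.com/1</link></item>
  <item><title>Story two</title><link>http://example.com/2</link></item>
</channel></rss>"""

seen = {"http://example.com/1"}  # links the aggregator has already shown

channel = ET.fromstring(FEED).find("channel")
new_items = [
    (item.findtext("title"), item.findtext("link"))
    for item in channel.iter("item")
    if item.findtext("link") not in seen
]
for title, link in new_items:
    print(f"NEW: {title} -> {link}")
```

Multiply that loop across a hundred subscribed feeds and you have the one place to look for anything new.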
In thinking about this subject, I realised I could have chosen other tools as well. There is an extraordinary flowering of new tools – new hardware – for media democratisation. I’ll briefly mention one.
Skype provides free, peer-to-peer voice over IP telephony. That means if you are connected to the Internet and have Skype installed on your computer, you can call other Skype users for free. You can call anyone, even if they don’t have Skype, for a few pennies a minute.
Phil Shapiro and Taran Rampersad have recently suggested using Skype as a community media production tool. Shapiro writes, “99 per cent of the interesting people in this world are not celebrities… So who’s going to interview all those people? Answer: the people will interview the people. What tool will they use to do this? Skype. How will these interviews be shared? Over the Internet, via public access television stations, via podcasting, and via various computer media.”
So we have a multiplicity of tools, of hardware. What about the software that enables us to plot a course through the democratic landscape and make sense of it? Here, we’re at a far earlier stage of development.
Part of what we need from our software is a way to deal with all the information that assaults us. The latest research that I’ve seen estimates that on average, each of the 6.3 billion people on Earth created 800 megabytes of new information last year. That’s about a 10-metre shelf of books.
I’ve already suggested some of the ways we can navigate through this thicket. One of the requirements is for trusted intermediaries. The Nobel prize-winning physicist Murray Gell-Mann identified this need in the mid-90s, when he puzzled over who would help people distinguish worthwhile science from spurious pseudoscience.
Who is to decide which people are worthy processors of material that will in many cases be rather technical? The easiest approach, of course, is to leave the judgment to a marketplace composed largely of lay people in search of entertainment, but in that way superficially attractive nonsense may frequently emerge triumphant. We can avoid this phenomenon, which we witness every day on our television screens, only if we make better use of people who make a practice of thinking, knowing and understanding.
What’s interesting is that in many ways, Google has become the automated intermediary in this sense. It takes advantage of the hive intelligence of the Internet. But it’s not infallible.
We also need “people who make a practice of thinking, knowing and understanding”. Above all, that’s what the blogosphere represents to me. Of course there’s an enormous amount of unthinking, unknowing and misunderstanding out there, too.
But the interconnected nature of the Web means that we can find and develop our own list of trusted intermediaries. If we read someone’s weblog we come over time to respect and trust their judgments. And that leads us to follow their lead when they recommend we look at someone else.
The complex mesh of interconnections means we can realise the ideal of, in the motto of the Royal Society, nullius in verba – take no one’s word for it. We don’t have to accept one source. We can follow a trail to other voices and other documentation to develop our own perspective, to make up our own mind.
The writer Jon Udell has described this process:
I can delegate the job of tracking primary sources to people whose interests and inclinations qualify them to do so… The blog network is made of people. We are the nodes, actively filtering and retransmitting knowledge. Clearly this architecture can help manage the glut of information.
There’s another aspect to this that goes beyond the intermediation I think we need. One of the glories of the new media democracy is that there is widespread disintermediation.
One of the areas that interests me is global economic news. I can read Brad DeLong at Berkeley, John Quiggin here in Melbourne, Nouriel Roubini at NYU or even Nobel prizewinner Gary Becker at Chicago directly if I want to understand some development in economics. I don’t have to wait for the Financial Times or Wall Street Journal to give me their version of how experts interpret events.
The spread of weblogs means that for just about any subject, you could draw up a similar list. And the glory of syndicated feeds – that RSS tool I mentioned earlier – is that new information from these sources flows directly into your computer. Once you identify a source you trust and want to keep up with, there is no need to go out hunting for them.
The variety and profusion of sources has led many people to misapprehend this new media landscape. Some commentators have seized on a mathematical power curve – the Pareto distribution – to argue that the only weblogs that matter are those of the so-called A-list bloggers.
In a traditional media reckoning, this is right. What matters is reach and influence. The New York Times or the Financial Times have it. CNN, Fox News and the BBC have it. The Podunk Daily doesn’t.
That’s not the case, however, in the blogosphere. Some webloggers are journalists, and some do aim to reach the largest possible audience and have the greatest possible impact. But most are amateurs writing because they enjoy it.
You’d think what I’ve said could be offered as proof of the power curve. If the great mass of bloggers are not aiming for large audiences – and certainly not reaching them – how can they matter?
Chris Anderson, editor of Wired Magazine, last autumn wrote an essay on what he called the “long tail”. He was focusing on the long tail of products that have previously been thought of little value. Most media industries are built on hits: the best-selling books, the blockbuster films, the number one CDs.
And much of our world has necessitated this way of thinking. Even the biggest bookstores can only stock a fraction of the books in print; an electrical shop can’t possibly have every kind of light bulb or fuse; a video rental chain is unable to hold every DVD in issue.
Most of us are familiar with the 80-20 rule, sometimes known as the Pareto Principle after the same man who derived the eponymous distribution. It’s often formulated as: 80 per cent of sales will come from 20 per cent of products.
But Anderson found that for online retailers, Pareto’s distribution hides an important new reality. When he asked what percentage of the top 10,000 books, DVDs or tracks sold at least once in a month at Amazon, Netflix or iTunes, he found the answer was generally 99 per cent. Number 10,000 may not sell many, but because there are so many more misses than hits, they add up to a large market in aggregate.
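A back-of-envelope calculation shows how both things can be true at once – the hits dominate, and yet the tail is enormous. The numbers below are purely illustrative, assuming sales follow a Zipf-like curve (sales of the title at rank r proportional to 1/r) across 100,000 titles:

```python
# Illustrative long-tail arithmetic, assuming Zipf-like sales
# (sales of rank r proportional to 1/r) across 100,000 titles.
N = 100_000
sales = [1_000_000 / rank for rank in range(1, N + 1)]
total = sum(sales)

top_20pct = sum(sales[: N // 5]) / total  # the Pareto "hits"
tail = sum(sales[100:]) / total           # everything below the top 100

print(f"top 20% of titles take {top_20pct:.0%} of sales")
print(f"titles ranked below 100 still take {tail:.0%} of sales")
```

Under this assumption the top fifth of titles takes well over 80 per cent of sales – Pareto is satisfied – yet the titles outside the top 100 still account for more than half the market. No individual store shelf could hold them; an online catalogue can.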
So it is, too, for weblogs. Most blogs are in the long tail. Individually they may not have many readers, but when you multiply not many readers by several million weblogs, there is a huge audience out there. And for some specialist areas, we have already passed the point where a lone blogger has similar authority and reach in her niche as a specialist reporter in traditional media may have.
All this leads to the development of what The Guardian’s Simon Waldman calls “personalised information hypermarkets”.
These might be people you know and love. They might be complete strangers. But, they become important gateways for you: you start to rely heavily on them to sort out the internet’s wheat from the chaff. And every time they find you something good, you use them more and more.
There is another side to such personalised media.
Some technologists, notably a team at MIT’s Media Lab, have developed a seductive picture of a Daily Me. The Daily Me is a personalised newspaper that contains just what you want. So if you’re passionate about Aussie Rules but couldn’t care less about rugby union, your Daily Me would have pages of footie, and no rugger.
So, too, it could contain the local news you want, and perhaps updates on political developments in east Asia (because you think that’s important) but exclude the news from the Middle East (because you’re bored by those conflicts).
To me, and I suspect to many of you, the Daily Me is a terrifying prospect. The Roman writers had a concept of dulce et utile, to delight and instruct. That’s a worthy ideal. Media that are unrelentingly delightful lack the virtue of instruction. Similarly all instruction and no play makes for dull media.
But the Daily Me threatens all dulce, no utile. It’s already here in some forms.
You probably haven’t heard of Las Ultimas Noticias. It’s Chile’s most widely read newspaper. Las Ultimas Noticias uses a system where all clicks onto its website are recorded and displayed in the newsroom. The clicks – and the popular vote they represent – determine the print content of the newspaper. If a story gets lots of clicks, it will be followed up and similar stories will be written. If a story doesn’t get enough clicks, it and its follow-ups are spiked.
When I was writing this, I checked what the top economic stories on LUN were. Now my Spanish isn’t very good, and I haven’t kept up with financial and business developments in Chile. But I suspect the Concurso Miss Reef, a beauty pageant, and the success of the Passapoga nightclub – both with photos of scantily clad women – wouldn’t ordinarily be the top business stories.
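The click-driven selection Las Ultimas Noticias uses is easy to caricature in code. The story names and the follow-up threshold below are invented, but the mechanism – tally the clicks, follow up what’s popular, spike the rest – is the one described above:

```python
# A sketch of a click-counting newsroom: tally website clicks per story
# (hypothetical data) and let the ranking decide the paper's content.
from collections import Counter

clicks = Counter()
for story in ["pageant", "pageant", "nightclub", "pension-reform",
              "pageant", "nightclub"]:
    clicks[story] += 1  # each website click is recorded in the newsroom

THRESHOLD = 2  # invented cut-off for this illustration
follow_up = [s for s, n in clicks.most_common() if n >= THRESHOLD]
spiked = [s for s, n in clicks.items() if n < THRESHOLD]

print("follow up:", follow_up)  # popular stories get sequels
print("spiked:", spiked)        # unpopular ones are dropped
```

Run the loop and the pension-reform story is spiked while the pageant gets a follow-up: the popular vote of the mouse, with no editor in sight.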
You don’t even need Las Ultimas Noticias to build yourself a Daily Me. It might be called My Yahoo, for example, or it could be something you construct using RSS feeds in your aggregator.
So the question of software becomes: how can we circumvent, or at least ameliorate, the effect of the echo chamber of the Daily Me?
Part of the answer lies, I believe, in the strengths of traditional media. For all my enthusiasm for weblogs and the Wikipedia, I am a newspaper addict. I buy two a day without fail, check a couple reasonably thoroughly on their websites, and occasionally buy a third. What I enjoy in good newspapers is the serendipity their editors can orchestrate. I may seek out a columnist I know and like, but if there’s something intriguing on the facing page I’m hooked.
I prefaced my comments on software for media democracy by suggesting we’re at the early stages of development. One of the necessary requirements that does not yet exist, I believe, is an effective way to encourage – and perhaps enforce – serendipitous discoveries outside the comforts of the Daily Me. Good editors know how to do this, and we need to find ways to transfer their skills into new forms of media.
That they will need to do this seems clear. Particularly among young people, newspaper readership is dropping, and network television news is losing its audience.
David Mindich, author of Tuned Out: Why Americans Under 40 Don’t Follow the News, recently said that today’s young “are still just as thoughtful, intelligent — and I would argue, literate — as ever before. What has changed is that young people no longer see a need to keep up with the news.”
The media finds itself in a period of rapid, dramatic change. Is the pace of change unprecedented? From within the media world, it might well seem so. But there have been enormous shifts in media before, within the last century: the rise of radio and then television as mass media, for example, continues to have enormous impact on our world.
But consider again the rapidity with which the world I describe has arrived. Tim Berners-Lee only invented the World Wide Web in 1989, the same year the Berlin wall came down. The first weblog probably appeared in 1997. The Wikipedia debuted as Nupedia in 2000.
As Scott Rosenberg, until recently editor of online magazine Salon, points out, “If you run a newspaper or a TV news operation you have spent your whole professional life in a stable structure, one whose supporting beams of business and technology have never fundamentally shaken or broken under you.”
In many other businesses, however, wholesale, sweeping changes are regular occurrences. It’s not surprising that it was a technology executive, Andy Grove of Intel, who coined the phrase “only the paranoid survive”. In his business, there were shifts every few years where the company, if it had missed the turn, could have vanished. Bill Gates has noted that not a single product Microsoft sells today will be sold in ten years’ time.
Orville Schell, dean of the journalism school at Berkeley, was recently quoted in Business Week: “The Roman Empire that was mass media is breaking up, and we are entering an almost-feudal period where there will be many more centers of power and influence. It’s a kind of disaggregation of the molecular structure of the medium.”
The exciting thing about such a world is that, to succeed, you need constantly to find new ways of doing things. That’s where we are with media today. For many, new ways are threatening and worrisome. For a few, new ways can be liberating and creative.
I’d like to make a final digression. I happen to be a collector of slide rules. If you’re under 40 years old, you may not know what a slide rule is. It’s a device, developed in the seventeenth century, that uses logarithms to facilitate the calculation of many different mathematical functions.
For nearly 350 years, the slide rule was the most usable, most accurate technology available for scientific and engineering calculations. Then, almost overnight, it vanished, replaced by digital calculators – more accurate, easier to use, cheaper to manufacture and buy.
On the whole, we’re probably better off with calculators and, now, computers. But we lost something when we lost the slide rule. Using a slide rule demanded an ability to make approximations of results and to understand the likely margin of error developing in a calculation. Now, when we can have a calculation produced to hundreds of significant digits in an instant, most people don’t see any need for that knowledge.
It’s why you sometimes get completely ridiculous totals in a shop or restaurant, because people rely on the computer rather than their own addition. It’s also why, indirectly, a recent planetary probe failed because the engineers mixed up metric and English measurements.
What has happened, I believe, is that we have moved from being tool users to tool managers. We direct processes rather than shape them. To be a tool manager, you merely push a button or use a scanner and the tool does what it is programmed to do. To be a tool user requires judgment, skill and experience.
When it comes to the media, most of us today are tool managers. We switch on the television, open the newspaper or read Google News.
What is so exciting about today’s explosion of diversity in the media is that it holds out the prospect that we can move in a more positive direction. The new tools we have enable more of us to become users rather than mere managers. And it is certainly better to be a tool user than a tool manager.