
Law, the internet and society

Introduction

Changes in law and technology, largely invisible to the general public and widely misunderstood by policymakers in the public and private sectors, are having a fundamental impact on our society. The aim of this unit is to provide an appreciation of how the internet paved the way for an explosion of innovation. You will explore some of the changes in the law and internet technology that have resulted from the reaction to that innovation. You will also consider the implications of these changes for society.

Learning outcomes

By the end of this unit you should be able to:

  • understand the value of ‘commons’ or ‘free’ resources in facilitating innovation and creativity;

  • understand some of the key influences in policymaking on intellectual property and technology;

  • understand Lawrence Lessig's model of constraints on behaviour;

  • understand Yochai Benkler's ‘three layers’ model of the internet;

  • be able to critically analyse the material you read.

1 Unit overview

This unit explains how the internet has enabled massive innovation. By analysing the (sometimes controversial) case law, we examine how this internet innovation has affected certain kinds of businesses, the response of those businesses, and the social and economic significance of this.

This course will not make you an expert in internet law. It would be impossible to cover all the relevant laws and cases in a short course such as this. We have, therefore, had to be selective about what we have included and we have sometimes simplified complex concepts. Nevertheless, you should gain a good appreciation of the main ideas, and the necessary skills and knowledge to investigate further areas of interest to you.

One of the most important topics in this unit is the notion of the ‘commons’ put forward by Lawrence Lessig, the author of The Future of Ideas.

To make things we need resources, including intangible resources like information and ideas. Authors, inventors, blues musicians – creators of all kinds – use language, stories, professional skills, musical notes and chords, facts and ideas, all building on the work of earlier creators. Many of these resources are free. A public highway, a public park, Fermat's last theorem or an 1890 edition of a Shakespeare play are all free to use or copy. These free resources are all part of what Lessig's book refers to as the ‘commons’.

We will look at this idea of a commons in more detail later. For the moment, just think of it as the raw materials for generating ideas or creative work, such as inventions, music or art.

"Nobody can be so amusingly arrogant as a young man who has discovered an old idea and thinks it is his own."

(Sydney J. Harris (1917–1986), American journalist and author)

As you read through this unit and Lessig's book, you might find that the writing is quite formal. Actually, both are much less formal than some of the websites you will encounter. Some of the language in this area is legalese, or complicated legal jargon, a way of communicating that many find long-winded and convoluted. It may take you some time to read some of the material, so to help we have often summarised the main points for you.

Lawyers are required to use words with great care and precision to put forward a case and to counter arguments. Advocates with an agenda (for example those who want copyright law restricted or those who want it expanded) use words as a tool to win us over to their side. It is important to learn to identify opinion and propaganda, and distinguish them from the facts of a case. Later in the unit we will look at some legal cases in detail and explore the tactics used by advocates to persuade us of the merits of their case.

Groucho Marx, on being warned that the name ‘Casablanca’ belonged to Warner Brothers: ‘You claim you own Casablanca and that no one else can use that name without your permission. What about Warner Brothers – do you own that, too? You probably have the right to use the name Warner, but what about Brothers? Professionally, we were brothers long before you were.’

Note: Much of this unit is based on the Open University course T182 The law, the internet and society, which is no longer offered for credit, and on The Future of Ideas by Lawrence Lessig, linked below. Though we have made a small number of minor amendments, the material was last substantively updated in the early part of 2005, so bear in mind that some of the cases, laws and technologies discussed will have moved on since then. Lessig's ideas, however, and therefore the bulk of this unit, remain every bit as important today as when his book was first published in 2001.

You can visit Lawrence Lessig's The Future of Ideas website at the-future-of-ideas.com.

The study calendar below is provided for anyone who would like to take a rigorous, systematic approach to working through the material, and for teachers considering using the material in whole or in part. The unit is structured so that study of the material is carefully paced over 10 weeks. The study calendar and list below should give you a general idea of the kind of expectations we had of students who took the original T182 course for credit.

Study calendar

Study week | Unit section | The Future of Ideas chapter
Week 1 | 1 Unit overview; 2 The explosion of the internet | Chapter 1
Week 2 | 3 Commons, layers and learning from history | Chapters 2 and 3
Week 3 | 3 Commons, layers and learning from history | Chapters 2 and 3
Week 4 | 4 The revolution | Chapters 7, 8 and 3
Week 5 | 4 The revolution | Chapters 7, 8 and 3
Week 6 | 5 Copyright wars | Chapter 9
Week 7 | 6 Constraints on behaviour
Week 8 | 7 The counter-revolution | Chapter 11
Week 9 | 7 The counter-revolution | Chapter 11
Week 10 | 8 The architecture of the internet; 9 Unit summary

As you work through the unit we suggest using your Learning Journal to record your thoughts and views on the questions and issues raised in the unit, including links to websites of interest.

In any one week of studying you might expect to:

  • read and make notes on material from the unit;

  • read material from The Future of Ideas;

  • do related activities or self-assessment questions (SAQs);

  • find further information on the web on some aspect of internet law or policy;

  • put these links with your own thoughts into your Learning Journal.

2 The explosion of the internet

2.1 Why is this subject important?

No one knows how many people use the internet. By September 2002, NUA estimated that it was about 605 million. It took radio nearly 40 years and television about 15 years to reach an audience of 50 million. The ‘World Wide Web’ part of the internet took just over three years to reach 50 million.

Note that the terms ‘World Wide Web’ and ‘internet’ are often taken to mean the same thing, but these should be distinguished. The internet is the global network of computer networks. The World Wide Web (WWW) is that bit of the internet you can access using a web browser, such as Mozilla Firefox or Microsoft Internet Explorer.

The internet, purely on the basis of its growth, constitutes a revolution in communications. The technology has changed the way many of us work, do business and socialise. It is a good idea, therefore, to understand something about it and the policies evolving to govern it (and consequently us).

The really interesting thing about the internet, though, is that it is a system that links the people who use these networks, and it links a great many of them. Box 1, below, tells the story of how 5,000 people cooperated to play a simple computer game at a Las Vegas conference in 1991.

Box 1: Playing ‘pong’

At a computer graphics conference in Las Vegas in 1991, Loren Carpenter, one of the technical wizards behind the film Toy Story, conducted an experiment. The 5,000 delegates were given a cardboard wand, green on one side and red on the other. They faced a massive video screen on which was a simple video game called ‘pong’. The audience on the left side of the hall controlled the left paddle of the game and those on the right controlled the right one. Flashing the red side of the wand instructed the paddle to go up; green made it go down. Video cameras scanned the sea of wands and relayed the ‘average’ instructions via computers to the paddles on the screen – so each move of the paddle was the average of 2,500 players' intentions. When the convener shouted ‘go’, in no time the 5,000 were having a pretty reasonable game of pong. It was quite a sensation for the people who were there, according to Kevin Kelly, who wrote about it in his 1994 book Out of Control: The New Biology of Machines, Social Systems, and the Economic World.

If you would like to know more, you can read his captivating description of the event.
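
The mechanism Kelly describes is simple enough to sketch in code. The Python fragment below is purely illustrative (the function name, the random simulation and the tie-handling rule are assumptions, not details of the original experiment): each wand is reduced to a vote, and the paddle follows the sign of the average.

    import random

    def paddle_move(wand_colours):
        """Average many binary wand signals into a single paddle instruction."""
        # Red votes to move the paddle up (+1), green to move it down (-1).
        votes = [+1 if colour == "red" else -1 for colour in wand_colours]
        average = sum(votes) / len(votes)
        if average > 0:
            return "up"
        if average < 0:
            return "down"
        return "hold"

    # Simulate one side of the hall: 2,500 delegates flashing wands at random.
    one_side_of_hall = [random.choice(["red", "green"]) for _ in range(2500)]
    print(paddle_move(one_side_of_hall))

Each delegate contributes only a 1/2,500 share of the final movement, yet the paddle responds almost immediately to the crowd's overall intention, which is what made the demonstration so striking.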

Five thousand people playing a computer game? What's that got to do with the impact of the internet? Well, to begin with, those 5,000 people no longer need to be in the same place. They can be linked via the internet.

Suppose a proportion of the ever-increasing number of independent internet users could be focused on tackling some bigger challenge than a game – say the search for extraterrestrial intelligence, medical research on the human genome, or the preservation of ‘fundamental values’ such as freedom. US District Judge Stewart Dalzell, for example, in an important US case about free speech on the internet, called the internet ‘the most participatory form of mass speech yet developed’. Alternatively, perhaps this large group of internet users could trigger a series of events leading to a global catastrophe such as nuclear war.

That's the thing about the Net – it has the capacity to be used for ‘good’ and ‘bad’ deeds. It gives me a terrific opportunity to do research and communicate with like-minded people all over the world; but equally it provides criminals with a system that enables them to communicate and plot securely.

We are only starting to realise the potential of the internet. Lawrence Lessig argues in The Future of Ideas that we should be careful about locking the Net up too tightly with regulation or technology intended to control it. It is natural to want to control something that might facilitate bad deeds, but care is required for two reasons.

  1. In trying to alleviate our fear of what criminals might do with the internet, we could easily kill its potential for explosive innovation and social good. This could be described as killing the goose that laid the golden egg.

  2. The law of unintended consequences – the internet is a complex system and complex systems behave in unexpected ways. This is sometimes called ‘emergence’. Attempts to regulate complex systems also have unanticipated or emergent outcomes. Take the recent case of a Turkish hacker who took it upon himself to help the FBI catch two child abusers by spreading a virus via a child pornography newsgroup. This allowed him to collect evidence which he passed on to law enforcement authorities in the USA. However, the Homeland Security Act in the US says that the hacker in this case would be liable to a life sentence in prison. It does not allow for the possibility that a hacker might do good.

We can think of the internet as having parallels with the environment. Both are complex systems with complex ecologies. The technical experts and ecologists understand, to some degree, the effect that changes to these systems will have. Most of the rest of us don't. That is not a criticism. It is impossible even for the experts to completely understand the internet or the environment in their entirety. That's very important to remember – sometimes even the experts don't understand. Alexander Graham Bell had no idea how the telephone would be used when he invented it. Initially he thought it could be used for alert calls to let people know that a telegram was coming. Then he thought it would be progress if there was a phone in every town. He also thought it might be used as a broadcasting device. He never imagined that there might be a phone in every house and that it would be used for personal communications.

In an information society access to, and control of, information is crucial. Who is to ensure that information technologies and the regulations governing them evolve to facilitate more good than bad? What political philosophies will underpin this evolution? Where, how and when will such decisions be made?

Sometimes these issues are left to groups of experts who draft legislation, on intellectual property for example, which potentially has a global effect. Yet an intellectual property expert thinks nothing of sending a letter to a pop music journalist called Bill Wyman to insist he ‘cease and desist’ using his own name because it infringes on the intellectual property rights of his client, Bill Wyman, who used to be a member of the Rolling Stones. This is not a criticism of the lawyer. Within the confines of the complex world of intellectual property law, this lawyer – this expert – could be considered to be acting reasonably in the interests of his client. It seems strange, however, that someone could be threatened with a lawsuit for using their own name.

Jessica Litman, in her book Digital Copyright, describes the process of creating intellectual property legislation in the US as the product of interparty deals. Representatives of the relevant industries get together, hammer out a deal and present the draft legislation to Congress for approval. It is perhaps not surprising that, with this industrial-scale focus, the resultant legislation can lead to odd results, like the Wyman example, when applied at a personal level.

For every human problem there is a solution which is simple, neat and wrong.

(H.L. Mencken)

It might be argued that we have a responsibility to try to understand the internet because it may have a significant effect on our lives. Some people also argue that we have a responsibility as citizens to call our democratic representatives and their experts to account when they are making laws on our behalf. We should understand the intentions guiding those laws, and be able to see when such laws have more wide-reaching effects than originally intended.

The experts, for their part, have a responsibility to translate the complicated issues and explain the systems in ways that non-technical people can understand. Maybe, as James Boyle suggests, the birdwatcher and the duck hunter don't like each other, but they have a shared interest in knowing that a local factory is dumping toxic heavy metals into the birds' habitat and poisoning them. Similarly, we might ask whether both a computer scientist and an artist might have an interest in a shared but endangered (if Lessig is to be believed) information ecology, the internet.

Why is this subject important then? It is important because we need to be streetwise. If the technology is going to have a major impact on us, we should try to understand that technology and the ways it gets regulated.

All progress is initiated by challenging current conceptions, and executed by supplanting existing institutions.

(George Bernard Shaw)

2.2 Our digital future … and why it matters

The internet is a relatively old technology – the network as we know it today dates from 1983, and its precursor, the Arpanet, dates from 1969. But media awareness of the internet as a social phenomenon dates only from the mid-1990s and the explosive growth of the World Wide Web. Since then, public discussion about the network has taken place between two extremes which we might call the Utopian and Dystopian views.

2.2.1 Two extreme views

Utopian: This sees the internet as a positive and liberating development, indeed the most important transformation of mankind's communications environment since the invention of printing. And just as the invention of print in the 15th century transformed Western society – leading to the Reformation, the rise of modern science, the Romantic movement and even (according to some writers) the redefinition of ‘childhood’ as a protected space in the lives of young people – so Utopians see the internet as having the same revolutionary potential. Proponents of this view emphasise the benefits of the technology: the huge volume of information available on the Web, the way email enables communication and cooperation, the economic potential of online trading, etc.

Dystopian: This view sees the Net as a dangerous and subversive development which facilitates anarchic and criminal tendencies, leading to an ungovernable world swamped with pornography, lies, propaganda and worse, and to societies populated by socially crippled, isolated individuals who are vulnerable to the marketing imperatives of huge transnational media corporations.

These are caricatures of extreme views, but representative samples of them can be found in public debates about the Net. Neither, however, offers an insightful way of thinking about future possibilities because they are both based on the assumption that the Net is a ‘given’ – i.e. something that is immutable or unchangeable (like a mountain), rather than an object that is dynamic and capable of change.

2.2.2 The malleable Net … and its implications

One of the central ideas underpinning this course is that the internet is not immutable. It is not hewn out of stone but constructed out of computer software and software can be changed – rewritten, upgraded, modified, patched. So a more productive way of thinking about the future is to ask questions like these:

  • In what ways might the internet change?

  • What are the forces that might lead to changes?

  • What would be the implications of changes in the nature of the Net?

Let's spend a few minutes thinking about these.

2.2.3 In what ways might the internet change?

The benefits and dangers perceived by (respectively) Utopians and dystopians are a consequence of the existing architecture of the internet. The network was designed on the principles that control would be completely decentralised (it was owned by nobody and you didn't need anyone's permission to join) and that the system was not interested in the identity of its users (only in the addresses of their computers). This meant that the system facilitated anonymity – as summed up in the famous 1993 New Yorker cartoon showing two dogs in front of a computer. One dog is saying ‘On the internet, nobody knows you're a dog.’ The anonymity facilitated by the Net's original design is a source of both freedom (e.g. to publish without fear of retribution) and irresponsibility (it enables criminals and others to communicate, paedophiles to distribute illegal material and music lovers to exchange copyrighted materials).

This talk of anonymity may puzzle you. After all, isn't everything you do on the Net logged by the servers through which your Web and email traffic flows? Answer: yes, but at present activity is logged only at the level of the computer you happen to be using. There is nothing that authenticates users – i.e. that links you as a named individual to what the computer you are using is doing on the Net.
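
A small sketch may help make this concrete. The toy web server below uses only Python's standard library and is written purely for illustration (the port number and the handler name are arbitrary). It can log the address of the connecting machine, because that information arrives with every request, but nothing in the exchange tells it who is actually using that machine.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class LoggingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The server sees the client machine's address with every request...
            ip, port = self.client_address
            print(f"Request for {self.path} from {ip}:{port}")
            # ...but nothing in the request identifies the person using it.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"Hello, whoever you are.\n")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), LoggingHandler).serve_forever()

Authenticating users, by contrast, would mean requiring some credential (a password, a certificate, a signed token) before serving the request, which is exactly the kind of additional layer discussed next.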

But this could change. One of the points that Lessig makes in The Future of Ideas is that an authentication layer could easily be added to the Net. This would make it mandatory for some users to provide evidence of their identity and reverse the tolerance of anonymity that currently characterises the Net, thereby turning it into a different kind of space.

2.2.4 What are the forces that might lead to changes?

Broadly speaking, there are two kinds of forces with strong vested interests in authentication of internet users.

  1. State agencies (police, security services, social services, tax, VAT and other public authorities).

  2. Commerce and industry. A space in which ‘nobody knows you're a dog’ is not an ideal space in which to do business. As online commerce becomes more important, there is increasing pressure for authentication processes so that vendors and customers can be sure of the identities of those with whom they deal, online ‘signatures’ can be validated, fraud reduced and online-only contracts made legally enforceable.

2.2.5 What would be the implications of changes in the nature of the Net?

If we just take the loss of anonymity as an example, the implications could be far reaching. And there would be gains and losses. On the one hand, authentication might reduce online fraud and assist in the detection and prosecution of criminals; on the other hand it could enable repressive regimes to censor dissidents, limit access to the Net for certain classes of person (e.g. young people, those who do not have credit cards) and significantly encroach on people's personal privacy (e.g. by enabling publishers – and others – to detect exactly what each internet user is reading online). It might also increase the incidence of the crime known as ‘identity theft’.

2.3 The internet … too important to be left to engineers

Why do these possible changes to the internet matter? As the internet becomes more central to our lives, alterations in its nature or functioning could have consequences for us all. Implicit in the technology, for example, is the capacity to monitor the online activity of every single user: to track every website visited, every email sent (and to whom), and so on. So debates about the future of the Net, and about the legal framework within which it operates, are increasingly going to be debates about the future of society – just as, say, arguments about press freedom are inevitably arguments about liberty and democracy.

To illustrate this point let's take two cases. Both involve the right to privacy, which is defined in the European Convention on Human Rights as a fundamental human right. Despite this, there are strong pressures to erode privacy coming from both commerce and governments.

  • On the commercial side, you may have noticed that many e-commerce sites encourage you to sign up for e-mail updates, notification of special offers and so on. In order to do this you are asked to complete an online form giving some or all of the following details: name, age, gender, occupation, mail address, email address, postcode. The reputable sites have a link to a statement of the proprietor's privacy policy. Disreputable sites do not even have that. And the ‘policy’ may range from a solemn undertaking never to reveal your details to anyone outside the operator's organisation to an opaque statement which is difficult to understand but which in essence reserves the right to make your details available to other companies without consulting you. You may not be too concerned about this because you live in a country with strong ‘data protection’ legislation. But the site to which you have confided your data may be located in a country with much less stringent legal protection of privacy. The point is that, over time, your privacy may be eroded in ways and to an extent that you never envisaged.

  • On the government side, the privacy of your personal communications may be compromised by new laws governing the interception and monitoring of email. For example, the UK Regulation of Investigatory Powers Act of 2000 (RIPA) gives the Home Secretary and a range of other public authorities sweeping powers to intercept and read email, and to monitor the ‘clickstream’ – i.e. the record of websites visited – of any internet user. At the time of the passage of the Act, the government maintained that its purpose was simply to bring powers for online surveillance into line with existing powers to monitor telephone conversations or intercept written communications. Critics of the Act maintain that the new powers are more intrusive and exercised with less judicial oversight. And that, for example, an individual's clickstream (which can be accessed under RIPA with a warrant) can be much more revealing than a list of telephone numbers dialled by him or her.

What these examples show is that developments on and around the internet directly affect something (privacy) which is normally regarded as a subject for political concern and debate. Yet debates about internet regulation tend to be treated as technical arguments about a technological subject, rather than as matters that should concern every citizen. The viewpoint implicit in this unit – and one of the reasons we created it – is that this is misguided. The internet has become too important to be left to the engineers.

2.4 The Future of Ideas

The environmental movement regularly reminds us of the need to live sustainably – within our means – as far as natural resources are concerned. Otherwise, they say, we or our children will face environmental catastrophe, as our resources and the ability of this planet to support life will run out. As the World Commission on Environment and Development (the Brundtland Commission) put it in 1987, ‘Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs.’

The Future of Ideas applies the environmental movement's sustainability rallying call to the resources we use to create and innovate. Its author, Lawrence Lessig, presents the case for the sustainability of ideas. He argues that unless we nurture the resources used to create and innovate (scientific discoveries, language, skills, stories, facts and so on), society will run out of ideas. Lessig thinks we are fencing off these resources with law and technology, handing control of them to private owners. You need to make up your own mind about whether he is right, and whether this is positive or negative.

Part of the inhumanity of the computer is that, once it is competently programmed and working smoothly, it is completely honest.

(Isaac Asimov)

2.4.1 Why this book?

The internet, and the law relating to it, are changing rapidly, and the danger in selecting a set book is that it will be out of date as soon as it is printed. When creating the T182 course we looked at a large number of books. These ranged from serious legal texts to reasonably readable but narrowly focused stories of particular cases. We wanted a book about internet policy that shows the effect it can have on ordinary people. The Future of Ideas provides a robust intellectual framework that you can rely on to interpret these issues. Few academics have influenced the way in which people think about the internet as much as Lessig.

The Future of Ideas is a very analytical book, and this is where its real value lies. Its analytical nature means that some of the material requires the reader to pause and think before continuing. It is, however, aimed at a general audience and is very readable. Absolute beginners may have some difficulty with the more complex ideas, but the book tells a relatively simple story that is worth remembering as you work through it and this unit:

  • the internet led to an information revolution – an explosion of innovation;

  • many of the resources used in this innovation were free, and this is not widely recognised;

  • some large commercial interests felt threatened by the innovation and decided to mount a counter-revolution;

  • commerce used financial and political clout to get laws and technologies changed to protect their markets;

  • these changes have a wider effect than intended – so did the internet change commerce or did commerce change the internet?

The book is persuasively argued, but it is important that you learn to read it critically. Try to question, for example, whether Lessig or his protagonists have oversimplified matters or have been selective in their use of evidence.

"A thing is not proved just because no one has ever questioned it. What has never been gone into impartially has never been properly gone into. Hence scepticism is the first step toward truth. It must be applied generally, because it is the touchstone."

(Denis Diderot)

2.5 Reading the book

We suggest that you read The Future of Ideas as indicated throughout the unit. Usually you will be directed to read a chapter, or part of a chapter, at the start of a section.

Chapter 1

Now, in order to introduce some basic concepts, to familiarise you with the tone of the book and to raise some questions, please read Chapter 1, linked below. Don't worry too much about understanding everything in depth. You should be prepared to read parts of the book more than once, but we provide reading guides that outline the important ideas.

Click ‘view document’ to open Chapter 1 of The Future of Ideas.

Note: Experience has shown that many students have already had a go at reading the set book before getting to this point in the unit. If you have done so and found the book difficult, then you might like to listen to a simplified explanation of Lessig's ideas before going any further.

Audio clip: t182_1_001s.mp3 (available in the standard online view of the unit).
Discussion

Chapter 1 summarises what The Future of Ideas is about. It is also quite dense with philosophies about markets and property. We're not concerned too much with the philosophy; the main points you need to note are:

  • copyright law is full of rules that ordinary people would think of as silly, e.g. stopping a film that included a scene containing a chair that resembled an artist's sketch;

  • the internet led to an information revolution – an explosion of innovation and creativity;

  • creators/innovators need resources to create things;

  • many of the resources used in internet innovation were free – but a lot of people do not realise this;

  • many people confuse the concepts of ‘property’ and ‘intellectual property’;

  • some large commercial interests felt threatened by the innovation and decided to mount a counter-revolution;

  • the counter-revolution is succeeding;

  • the important question about resources used to innovate is whether they should be free, not who should own them.

You might find this mindmap of Chapter 1 helpful:

Figure 1: mindmap of Chapter 1

Mindmaps and notes occur throughout the unit and relate to the individual chapters of The Future of Ideas. They are keyword and key-phrase pictures summarising each chapter.

You can use them in a number of ways:

  • to skim through to get an overview or preview of the chapter;

  • to review once you have read the chapter and made your own notes;

  • to get another person's perspective on the key issues.

If the automobile had followed the same development cycle as the computer, a Rolls-Royce would today cost $100, get a million miles per gallon, and explode once a year, killing everyone inside.

(Robert X. Cringely, InfoWorld magazine)

2.6 Two kinds of innovation

Innovation can be defined as ‘the introduction of new things or methods’. It is one of those concepts – like freedom – of which we all approve, in the sense that we are all ‘for’ it. Politicians, industrialists, managers all loudly proclaim themselves to be in favour of innovation. This unanimity ought to make us suspicious. And we would be right to be sceptical because it turns out that the apparently universal approval of ‘freedom’ is in fact confined only to certain kinds of freedom. People should be free to roam, for example, but not on private land. Freedom of speech should not extend to the freedom to shout ‘Fire!’ in a crowded theatre. And so on.

So it is also with innovation. For it turns out that there are two kinds of innovation – incremental and disruptive. And while there is near-universal agreement that the former is a thoroughly good thing, opinions diverge sharply on the desirability of the latter.

2.6.1 Incremental innovation

Incremental innovation is a familiar and reassuring process. It has been defined as ‘a continual process of small improvements in efficiency and performance within the fixed parameters of one product, business model, institution or social practice’ (Wilson and Jones, 2002, p. 34). We see incremental innovation, for example, in the way industrial products ‘improve’ from one year to another. A car that you buy today will be significantly different from one purchased 20 years ago – it may have power steering, anti-lock brakes, airbags and air conditioning – but at the same time it is recognisably the same kind of entity as its 20-year-old predecessor. The concept has been refined and improved in steady, incremental stages.

2.6.2 Disruptive innovation

Disruptive innovation, as its name implies, is an altogether less cosy process. It enables us to do old things in new ways; or enables us to do things that were previously impossible or economically infeasible. Disruptive innovation challenges the established order and threatens to undermine established ways of doing business. It is the kind of innovation which the Italian political philosopher Machiavelli had in mind when he wrote that:

Innovation makes enemies of all those who prospered under the old regime, and only lukewarm support is forthcoming from those who would prosper under the new.

The problem for admirers of incremental change is that it is disruptive innovation that fuels economic development and social change. Historians of technology will tell you, for example, that over the last two centuries five separate waves of disruptive technology transformed the societies in which they were conceived: water-powered mechanisation, steam-powered mechanisation, electrification, the automobile and the digital computer. So we have an interesting paradox: incremental change is the kind of innovation with which we feel most comfortable, but disruptive innovation is what drives major economic and social change.

2.6.3 The end-to-end principle

What has all this to do with the internet? Well, simply that the network was designed in such a way as to maximise the possibility of disruptive change – at least in the area of ‘information goods’ and services. The architects of the network were conscious that they could not predict the future; they could not foresee what kinds of applications people would one day conceive for the system they were designing. So they designed the network to be as simple as possible, allowing all the ingenuity to be concentrated at the ‘ends’: in the applications that programmers and entrepreneurs would one day invent.

This design principle later became known as the ‘end-to-end’ principle, and what it boiled down to was that the network should be as generic and simple as possible: all it did was to take data packets in at one end and do its best to deliver those packets to their destination at another end. This principle, plus the fact that the network was not ‘owned’ by anyone who could deny access to it for some applications while welcoming others, provided an unprecedented foundation for disruptive innovation. It meant, basically, that if your application could do something with data packets, then you could use the internet for it. The network didn't care whether those packets contained fragments of messages or images or music or video or digitised voice signals. As long as they came in packets, the internet would deliver them. It had become what Lessig called an ‘innovation commons’.
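
A minimal sketch may help to fix the idea (standard-library Python; the port number and the payload are arbitrary choices made for illustration). The network layer simply carries addressed packets of bytes from one end to the other; all the meaning is supplied by the programs at the ends.

    import socket

    # One 'end': bind to a local port and wait for whatever bytes arrive.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 9999))

    # The other 'end': wrap some application data in a UDP packet and hand it
    # to the network. The network only moves addressed bytes; whether they are
    # a message, a song fragment or a voice sample is decided by the endpoints.
    payload = b"any application data at all"
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(payload, ("127.0.0.1", 9999))

    data, addr = receiver.recvfrom(4096)
    print(f"Received {len(data)} bytes from {addr}: {data.decode()}")

Nothing in the middle of this exchange inspects or discriminates on the content of the packets, so a new application needs only new code at the ends, not anyone's permission to change the network.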

One consequence of this architecture was a wave of ‘explosive innovation’ in information goods and services, driven by people who had thought of novel ways of using the Net. Much of this innovation was of the disruptive variety. For example, internet telephony (using the Net for digitised conversation) poses a challenge to established telephone networks. The Napster file-sharing system posed a challenge to record companies accustomed to making music available only via plastic discs. Online banking posed a challenge to traditional ‘bricks and mortar’ banks. And so on. We are still working our way through these challenges to the established order – and the old order's response to them.

But more of that later.

"There's a way to do it better … find it."

(Thomas A. Edison)

2.7 Activity 1

The aim of this activity is for you to:

  • consolidate some of the ideas you have been reading about so far

  • test your powers of observation and compare your perception of an interview or a lecture with that of someone else.

Activity 1

Watch the ‘Architecting Innovation’ lecture at http://realmedia.oit.duke.edu/ramgen/law/frey/lessig.rm; it provides a good summary of the arguments in The Future of Ideas. This lecture was delivered by Lessig as the Inaugural Meredith and Kip Frey Lecture in Intellectual Property at Duke University on 23 March 2001, and is just under an hour long.

Please note that you will need to use RealPlayer to watch this video. If you do not already have it, you can download and install the free RealPlayer software from real.com.

You might prefer to read the transcript of the ‘Architecting innovation’ lecture. You can also use this transcript to revisit the activity without having to watch the whole lecture again.

As you watch or read, make a list of the arguments that Lessig is making in the lecture. Rank these points in order of the importance you think Lessig gives them.

Discussion

My list of key points is:

  • Free vs controlled resources

  • Time marked by ‘taken for granted ideas’ like ‘property is good’

  • Neglect to ask – should some resources be controlled at all

  • Benkler's communications system layers – physical, code/logical, content – each can be free and/or controlled

  • The internet was a commons – free speech not free beer

  • Physical controlled, logical end-to-end free, content mix of free and controlled

  • Network stupid, applications smart, network owners can't discriminate against innovators

  • Law and technology changing the Net as a commons because of ideology about property

  • Broadband cable networks not end to end – allow control through architecture

  • Content layer control through changes in copyright law cf acorn and oak tree

  • Groucho and Warner Bros cf code protected by DMCA

  • Lessig bets on a future of control

One way to rank and integrate these points is:

  1. Ideology about property rights can lead us to misunderstand why the internet was a successful medium for innovation and fail to ask fundamental questions about whether a resource should be free or controlled.

  2. A commons is a resource that is free (free speech not free beer), and if it does have an access cost it is neutrally or equally applied, e.g. Central Park, Fermat's last theorem, open-source software. Resources protected by a liability rule not a property rule.

  3. Benkler models communications systems using three layers: physical, code/logical and content.

  4. The Net was an ‘innovation commons’ because of the end-to-end architecture at the code layer.

  5. The network is stupid and the applications smart, hence the network owners can't discriminate against innovators.

  6. The physical layer was controlled, the code layer free (as in free speech), and the content layer a mix of free and controlled.

  7. In broadband cable networks the code layer is controlled so network owners can discriminate.

  8. As technology changes the balance of control at the code layer, copyright law changes expand control at the content layer (acorn to oak tree); and e.g. the DMCA protects the digital fences surrounding the content.

  9. These changes are also a function of an ideology about property.

  10. Lessig – the future will be control.

Did your list contain the same points?

Do you agree with the ranking given in my sample answer?

If you came up with a different answer it doesn't mean you got it wrong. Nor does it mean that I got it wrong. But the differences can tell us something about the process of observation. They arise because we have different perspectives on the interview or the lecture.

Can one person's account of a complex situation ever provide a truly accurate record?

Consider a child who doesn't do very well in an exam. The teacher might see the ‘problem’ as the child being too lazy to study or not capable enough or having a bad attitude and not caring. The child, on the other hand, might argue that she had severe hay fever that day or that the exam was unfair or that the teacher dislikes her personally.

Different people interpret what they see or hear differently. It is important to remember this, since observation and perception are the basis of understanding. If we do not observe clearly, then it doesn't really matter how strong our reasoning might be; our thinking about the subject under consideration will be flawed.

"Chance favours the prepared mind."

(Louis Pasteur)

2.8 Summary and SAQs

In this section we have described, in basic terms, the social and economic significance of changes in law and technology triggered by the internet. We have outlined how, until very recently, the thinking in this area was very piecemeal and explained why we have based the unit on a book by a US constitutional lawyer, Lawrence Lessig.

We have identified two distinctive types of innovation: incremental, which society absorbs; and disruptive, which society is often not sure how to deal with.

Finally, we have briefly identified some of the key ideas in the unit:

  • the internet is an ‘innovation commons’ – free resources that encourage innovation and creativity;

  • two of the main influences or constraints on behaviour are law and architecture;

  • the importance of critical thinking.

2.8.1 Self-assessment Questions (SAQs)

These questions should help you test your understanding of what you have learnt so far. If you have difficulty with the questions it would be a good idea to go back over Section 2.

1. What do we mean by ‘commons’?

Answer

Many will think of ‘the commons’ as meaning the House of Commons in the UK Parliament, or possibly a village green or common land. Lessig talks about ‘the commons’, in Chapter 1, as being related to ‘free’ resources. For the purposes of this unit, a commons is essentially a resource which is open for anyone to use. There are no gatekeepers who can control access to that resource. The original design of the internet is an example. The network had no intelligence: it could not decide which kinds of innovations would be permitted and which would not. The right to innovate on the internet was open to everyone equally, and network owners (originally the phone companies), for example, could not control that innovation. ‘The commons’ will be defined more fully in Section 3.

2. Why is it important to think critically about the material we read in the set book?

Answer

Lessig has produced a very powerful, cogently argued book. It is important to remember, though, that he also brings a particular perspective to this important subject area. His values are those of a committed American constitutional lawyer and our personal values have a big influence on how we see the world. He may have done the most joined-up thinking about law, the Net and society, but some of that thinking may be controversial. The very title of the set book, The Future of Ideas, suggests that he is at least partly in the business of predicting the future, so he may even be wrong in places. Hence it is worth critically analysing the ideas put forward.

‘Sleep faster; we need the pillows.’

(Old Polish saying)

The saying above suggests that some activities just cannot be rushed, so don't worry if it is taking you some time to get to grips with the concepts. Just remember the important point that free resources are valuable, and the rest will eventually fall into place.

Make sure you can answer this question relating to Section 2: Why is this subject important

  • for me?

  • for society?

and record your thoughts on this in your Learning Journal.

2.9 Study guide

On completing this part of the unit, you should have:

  • read Sections 1 and 2;

  • read Chapter 1 of The Future of Ideas;

  • completed Activity 1;

  • answered the self-assessment questions;

  • recorded your notes in your Learning Journal.

Well done on getting to this point! There was a lot to get to grips with. At this stage, many students find it a bit overwhelming. However, they tend to find that Section 3, scheduled for study over two weeks, begins to break down Lessig's ideas into more manageable chunks.

A well-known economics professor tells students in his introductory classes that they are going to learn a new language. It sounds like English, but it's economish, and they have to learn to translate the terms because ordinary people don't speak the language even though they use the same words. This unit is a bit like that. Once you get used to the peculiar way some words are used, the rest is relatively straightforward and you'll be amazed at how fluently you'll be throwing around terms like ‘commons’ by the end of the unit.

3 Commons, layers and learning from history

3.1 Commons and layers

In this section we introduce the two main concepts in The Future of Ideas: commons and layers. We also cover some relevant history of the internet and discover how the right to tinker is important for innovation. The three-layers model of the internet will be used as the framework for viewing the history.

George Santayana said, ‘Those who fail to learn the lessons of history are doomed to repeat it.’ Lessig's corollary to this is that those who fail to learn the lessons of history are doomed not to repeat its successes. In other words, since we do not understand how the internet bred innovation (for example email and the World Wide Web) on such a huge scale, we will not be able to do it again or even build on what has been done.

We learn from history that we do not learn from history.

(Georg Wilhelm Friedrich Hegel (1770–1831))

Chapters 2 and 3

Read Chapters 2 and 3 of The Future of Ideas, linked below.

Click 'View document' to open Chapters 2 and 3 of The Future of Ideas.

There is some network theory in Chapter 3. This is relatively straightforward to follow if you are already familiar with the basic operation of the internet. It may seem difficult if you are not. It is important that you understand the basics of the technology. A wonderful animated movie, Warriors of the Net, produced by Gunilla Elam, Tomas Stephanson and Niklas Hanberger originally at Ericsson Medialab, tells you most of what you need to know at this stage about the inner workings of the internet. This 12-minute animation, linked below, is worth watching even if you are familiar with how the internet works.

You can visit the Warriors of the Net website at www.warriorsofthe.net.

Discussion

The book plays out the story outlined in Chapter 1 in the context of cases like Napster, following through with three recurrent ideas:

  1. ‘commons’, which are essentially ‘free’ resources available to a community;

  2. the ‘layers’ model of the internet;

  3. the four constraints on behaviour, which will be described in Section 6.1.

The first two of these ideas are introduced in Chapter 2.
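
As a preview of the layers idea, the sketch below encodes the three-layer view of the original internet summarised in the Activity 1 discussion: a controlled physical layer, a free code layer and a mixed content layer. The field names and status labels are my own shorthand for the purposes of illustration, not Lessig's or Benkler's terminology.

    from dataclasses import dataclass

    @dataclass
    class Layer:
        name: str        # which of Benkler's three layers this is
        examples: str    # what lives at this layer
        status: str      # 'free', 'controlled' or 'mixed'

    # The original internet, on Lessig's reading of Benkler's model.
    original_internet = [
        Layer("physical", "wires, cables, computers", "controlled"),
        Layer("code / logical", "TCP/IP and the end-to-end protocols", "free"),
        Layer("content", "the text, music and software carried over the network", "mixed"),
    ]

    for layer in original_internet:
        print(f"{layer.name:15} {layer.status:11} ({layer.examples})")

Much of the rest of the unit is, in effect, about how the status of the code and content layers is being shifted from ‘free’ towards ‘controlled’.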

Figure 2: mindmap of Chapter 2

3.2 Commons

As ‘commons’ is the core concept in The Future of Ideas, it is worth taking a bit of time to understand exactly what the author means by the term. The commons is a concept that many, in affluent western society, have trouble understanding. If something is free it can't be worth anything, can it?

Read pages 19 and 20 of The Future of Ideas again (linked below). Think of Lessig's examples of commons:

  • public streets

  • public parks

  • Einstein's theory of relativity

  • writings in the public domain, such as an 1890 edition of Shakespeare, whose copyright has expired.

Click 'view document' to open Chapters 2 and 3 of The Future of Ideas.

Think of some other examples which might be considered commons. The village green or the local common are examples that you might be familiar with if you live in the UK – patches of grass that local people can walk on or children can play freely on. What about derelict waste ground in an inner city? Or the air, water or local ecosystem? Shared peer-reviewed scientific or other academic research? Cultural traditions, such as arranged marriages or the tendency to speak quietly or not at all when we enter a church?

Might any of the examples in the preceding paragraph not be a commons? The inner-city waste ground is one such case, illustrated by the story of New York's community gardens. Small groups of volunteers slowly turned forgotten wasteland and abandoned properties into hundreds of community gardens, which helped to bring about the economic and social revival of the surrounding communities. Technically the city had become the legal owner of the sites, as the previous owners had stopped paying taxes on them. As the economic value of the sites increased as a direct result of the work of the volunteers, they became visible again to the City as a potential source of revenue. When the then Mayor of New York, Rudolph Giuliani, proposed selling off over one hundred of the gardens in the late 1990s, the volunteers and local communities were angry and the New York newspapers widely publicised the resulting controversy.

Whether Mayor Giuliani was right to sell the gardens is a question of perspective. They were legally owned by the City and he was entitled to sell. They were not a commons. But they had come to be perceived as such because of the efforts of the volunteers and local communities. Their efforts had revitalised the localities concerned and increased their economic value so that the City came to notice them again. But their value to the local communities transcended their economic worth.

Let's now consider a very different example. The Disney character Mickey Mouse first appeared in the cartoon Steamboat Willie in 1928. Steamboat Willie was based on the Buster Keaton film Steamboat Bill, Jr., which had been released earlier the same year. The script notes for the cartoon say: ‘Orchestra starts playing opening verses of Steamboat Bill.’ The cartoon was thus clearly and openly based on the earlier film. Does that mean the original film was a commons or part of a commons? Steamboat Bill, Jr. was protected by copyright laws, but copyright laws are designed to be leaky, so the film might be considered to partly reside in a commons. In 1928 copyright laws leaked to the extent that the production of parodies such as Steamboat Willie was allowed. Walt Disney did not need Buster Keaton's permission to produce his cartoon.

Today, however, if someone were to produce a parody of Mickey Mouse or any other Disney film, or copy the opening music, there is a good chance that Disney's lawyers and the courts might shut it down, as happened in the 1970s with the Air Pirates case. Ironically, the Open University Rights Department, quite correctly given the nature of UK copyright law, advised against the inclusion on the T182 course CD of an Electronic Frontier Foundation animated parody that included two four-second clips of Mickey Mouse.

The Air Pirates case involved some cartoonists who had produced a series of comics in which Disney characters were involved in storylines that included sex and drugs. Disney had the comics impounded and destroyed, but the whole saga dragged on right through the 1970s. Disney finally reached an out-of-court settlement with the main protagonist, Dan O'Neill, in 1978.

Does that mean that copyrighted works today receive more protection than their earlier counterparts? Lessig would argue that this is exactly the case. On the other hand, copyright laws in the US do still permit the making of parodies of original copyrighted works. A poster of Leslie Nielsen advertising the film Naked Gun 33 1/3: The Final Insult, for example, was found by the courts to be an acceptable parody. It was an obvious spoof of the photograph of a well-known actress, clearly pregnant, on the cover of Vanity Fair magazine in 1991. (Note: Parodies are not explicitly permitted in the UK.)

Parody is perhaps not the best example of how copyright has become less leaky since the early part of the 20th century, since parody, theoretically, still constitutes ‘fair use’ of copyrighted materials in the US. Disney, however, has been one of the prime lobbyists encouraging Congress to extend the term of copyright – eleven times between the early 1960s and 1998. As such they have been criticised by advocates like Lessig for refusing to allow future creators to do to Disney what Disney did to others. The Disney company has based many of its creations, such as Steamboat Willie, Snow White and The Jungle Book, on the work of earlier authors, but apparently wants to restrict the production of creative works based on its own output.

‘Common’ is defined in the Oxford English Dictionary as ‘belonging equally to more than one’ and ‘free to be used by every one’. ‘The commons’ is defined as ‘a common land or estate; the undivided land held in joint-occupation by a community.’

For the purposes of this unit, a commons is a resource freely (or neutrally) available to the community. There may be a nominal cost or access fee that needs to be paid, but once that fee is paid individuals have access to the resource. Most importantly, there are no gatekeepers who can refuse access because they don't like the look of someone or because they may fear how someone might use the resource.

The New York community gardens had most of the features of a commons. They were a resource apparently enjoyed by the local community without restrictions. They did, however, have a gatekeeper, a legal owner: the City. The gatekeeper was dormant for many years because the sites had been forgotten or perceived to have no economic value. The City, in the guise of Mayor Giuliani, was able to step in and sell the sites, though, and could have done so at any time.

Deciding whether and to what extent works protected by copyright, such as Steamboat Bill, Jr., are part of a commons can be more complicated.

It is very important to understand this notion of a commons, to be able to get a real handle on the arguments in The Future of Ideas. When Lessig talks about the commons, he is not claiming that every resource should be free. He does say, however, that before we decide who gets to own a valuable resource, we should ask whether the resource would be better preserved as a commons.

Lessig sees the internet as an ‘innovation commons’. What is that? It means that the internet is an accessible source of ideas, information, software and hardware that anybody can use to create and innovate.

So when businesses try to control the innovation commons that is the internet, should they be criticised? Well, you could say that this is rather like criticising a fox whose metabolism needs a chicken diet. The objective of a business is to make money. However, according to Lessig, putting businesses in charge of deciding how a resource like the internet gets divided is not always the best option, any more than putting the fox in charge of the henhouse. This is a controversial viewpoint, particularly in the US.

The main thing to take from this discussion, then, is that freely accessible resources can be valuable.

Commons or not?

You were asked above to consider which of a series of examples could be considered to be a commons.

Discussion

Here are my suggested answers:

  • Inner city waste ground is not a commons, as the New York City gardens case demonstrated. The air is a commons.

  • In the UK, the water companies and the Environment Agency are effectively gatekeepers, so water is not a commons. Rainwater might still be a commons.

  • The local ecosystem may or may not be a commons. At its simplest, if it is on private land it is not a commons; if it is on common land it is a commons.

  • Scientific or other academic research may or may not be a commons. Research shared openly and freely is a commons. That which comes with intellectual property or contractual restrictions may not be a commons.

  • I would argue that cultural traditions are a commons. Like the ecosystem and research examples above, however, this is debatable. It is difficult to use or participate in certain cultural traditions if we are not aware of them or don't understand them. Likewise, if certain elements of culture come with intellectual property or other restrictions then there are gatekeepers who control access. The more complex the system, the more complex the question of whether or not that system might constitute a commons.

3.3 Rivalrous and non-rivalrous

The ‘commons’ is dealt with in Chapter 2, along with a crash course in economics – ‘rivalrous’ and ‘non-rivalrous’ resources and incentives to produce. The following diagram should help you to separate out the various issues.

Figure 3: rivalrous and non-rivalrous resources and incentives to produce

Economists see resources as ‘rivalrous’, meaning that one person's use leaves less for others, so the resource can run out, or ‘non-rivalrous’, meaning that it cannot run out however much it is used. If someone eats an apple, the apple is not available for someone else to eat. That makes it rivalrous. A poem cannot be worn out or used up, even if someone reads (consumes) it a thousand times. That makes the poem non-rivalrous.

Economists are also concerned with who is going to produce useful resources like apples or oil or poetry books. They believe that people must have economic incentives before they will produce anything. The economists' traditional solution to the problem is to grant property (ownership) rights. If people can own the (rivalrous) oil that they pump out of the ground, they have an incentive to produce that oil. Likewise, if people can have intellectual property rights over (non-rivalrous) poetry, they will have an incentive to write poetry. ‘Copyright’ is the relevant intellectual property right in the case of a poetry book. Other types of intellectual property rights include patents, trademarks and designs.
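
To make the distinction more concrete, here is a minimal sketch in Python (the class names and values are invented purely for illustration) contrasting a rivalrous resource, which is depleted by use, with a non-rivalrous one, which is not.

```python
# A minimal sketch of the rivalrous / non-rivalrous distinction.
# Class names and values are invented for illustration only.

class Orchard:
    """A rivalrous resource: each apple eaten leaves one fewer for others."""
    def __init__(self, apples):
        self.apples = apples

    def eat_apple(self):
        if self.apples == 0:
            raise RuntimeError("No apples left - the resource has run out.")
        self.apples -= 1


class Poem:
    """A non-rivalrous resource: reading it does not use it up."""
    def __init__(self, text):
        self.text = text

    def read(self):
        return self.text  # unchanged, however many times it is read


orchard = Orchard(apples=2)
orchard.eat_apple()
orchard.eat_apple()          # the orchard is now empty; a third eater goes without

poem = Poem("An illustrative line of verse")
for _ in range(1000):
    poem.read()              # a thousand readings leave the poem intact
```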

It takes thirty leaves to make the apple.

(Thich Nhat Hanh (Vietnamese monk))

3.4 An introduction to intellectual property

Lessig refers to intellectual property throughout The Future of Ideas. What follows is a brief introduction; you should not treat it as a comprehensive guide to the rules of intellectual property, which is a vast subject.

What do we mean by the term ‘intellectual property’? It is a label given to a range of legal constructs, including:

  • copyrights

  • patents

  • trademarks

  • designs

  • trade secrets.

Lessig emphasises that intellectual property is not ‘property’ in the conventional sense of a car or a bicycle. For example, a person or an organisation can hold the copyright on a book for a limited period of time. During that time they have the limited right to prevent others from copying, printing or selling that book. Copyright effectively gives the copyright holder exclusive control of the market for the book, which is why copyrights are sometimes described as monopolies. A copyright is a limited monopoly with a limited life.

If my ‘limited monopoly’ description seems a bit complicated, you can try another way of thinking about intellectual property – that it is like a tax. It is a tax that society pays to people who create, innovate and generate ideas that they are prepared to share with the rest of us. Although there are complicated rules about who gets paid, for how long and under what conditions, this tax provides an incentive for creators to create and inventors to invent.

3.5 Copyright

Copyright exists as an incentive for authors to create works, which can then be used by the general public. Theoretically its main purpose is to promote learning. Authors can be compensated for their work and society is enhanced by having access to this work. Copyright law theoretically balances the rights of the author with those of the public.

Copyright is really a bundle of rights, the basic one being the right to prevent others from copying one's creative work, for example, a book or painting.

In the UK, according to the Intellectual Property Office, ‘Copyright gives rights to the creators of certain kinds of material to control the various ways in which their material may be exploited. The rights broadly cover: copying; adapting; issuing; renting and lending copies to the public; performing in public; and broadcasting. In many cases, the author will also have the right to be identified on his or her work and to object to distortions and mutilations of his work.’ These last two rights are sometimes called ‘moral rights’.

In the United States, according to Terry Carroll's Copyright FAQ, copyright covers:

  1. the reproductive right: the right to reproduce the work in copies;

  2. the adaptative right: the right to produce derivative works based on the copyrighted work;

  3. the distribution right: the right to distribute copies of the work;

  4. the performance right: the right to perform the copyrighted work publicly;

  5. the display right: the right to display the copyrighted work publicly;

  6. the attribution right (sometimes called the paternity right): the right of the author to claim authorship of the work and to prevent the use of his or her name as the author of a work he or she did not create;

  7. the integrity right: the right of an author to prevent the use of his or her name as the author of a distorted version of the work, to prevent intentional distortion of the work and to prevent destruction of the work.

Source: 17 U.S.C. 106, 106A

We can see that the UK rights broadly coincide with those in the US, although in the UK or European context the moral rights are slightly broader than the attribution and integrity rights (6 and 7 above) in the US. ‘Moral rights’ are historically concerned with protecting the reputation of authors. In the US, ‘moral rights’ receive little protection in practice.

The rights exist for a limited time. The simple question ‘How long does a copyright last?’ turns out to have a surprisingly complicated answer. The length of time depends on the type of copyrighted work, the type of copyright holder, the jurisdiction and the date the work was created or published. In the case of a literary or artistic work, copyright in the UK and the US generally lasts for the life of the author plus 70 years. In the US, where the copyright is in a ‘work made for hire’ and held by a company, it now lasts 95 years from the date the work was first published.

Sound recordings are protected for 50 years in the UK and a number of other EU countries, though not all. There are currently moves to harmonise this to the longer US term. There have been eleven extensions to the term of copyright in the US since 1962 and many changes to legislation in other jurisdictions in relation to copyright terms, so the creation or publication date is also a complicating factor.

The good news is that you don't need to remember all this. If you're looking for a rule of thumb, think of ‘life plus seventy’, but if you really want to know for sure about a specific case, the best bet is to check with an authoritative source, such as the intellectual property office in the relevant jurisdiction.
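
Purely as an illustration of the rule of thumb (and emphatically not as legal advice), the two simplified terms mentioned above can be written out as a small sketch. The function below encodes only ‘life plus seventy’ and the US ‘95 years from first publication’ rule for works made for hire, and ignores the many real complications of jurisdiction, work type and creation date.

```python
# A deliberately simplified sketch of the two 'rule of thumb' copyright terms
# discussed above. Real terms depend on jurisdiction, work type and dates,
# so always check an authoritative source for any specific case.

def approximate_expiry_year(year_author_died=None, year_first_published=None,
                            us_work_made_for_hire=False):
    """Return an approximate expiry year using the two rules of thumb only."""
    if us_work_made_for_hire:
        if year_first_published is None:
            raise ValueError("A work made for hire needs a publication year.")
        return year_first_published + 95     # 95 years from first publication
    if year_author_died is None:
        raise ValueError("Need the year of the author's death.")
    return year_author_died + 70             # 'life plus seventy'


# Illustrative examples only:
print(approximate_expiry_year(year_author_died=1950))                    # 2020
print(approximate_expiry_year(year_first_published=1930,
                              us_work_made_for_hire=True))               # 2025
```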

What exactly does copyright cover? The UK Intellectual Property Office again:

The type of works that copyright protects are:

  • original literary works, e.g. novels, instruction manuals, computer programs, lyrics for songs, articles in newspapers, some types of databases, but not names or titles (see Trade Marks pages);

  • original dramatic works, including works of dance or mime; original musical works;

  • original artistic works, e.g. paintings, engravings, photographs, sculptures, collages, works of architecture, technical drawings, diagrams, maps, logos;

  • published editions of works, i.e. the typographical arrangement of a publication;

  • sound recordings, which may be recordings on any medium, e.g. tape or compact disc, and may be recordings of other copyright works, e.g. musical or literary;

  • films, including videos; and broadcasts and cable programmes.

You do not have to remember all these rights or this list of things that copyright covers in the UK.

Copyright does apply to the internet, despite the commonly held belief to the contrary. Copyright does not protect ideas – it protects the way the idea is expressed, but not the idea itself. For example, suppose someone wrote an article about painting a wall black, and this had not been thought of before. The article would be copyrighted, but nobody would be prevented from using the idea about painting walls black. Another example which may be more familiar is a recipe in a cookbook. It is permitted to follow the recipe to cook the food, but photocopying the recipe is likely to be an infringement of copyright. The US Copyright Office, for example, says: ‘A mere listing of ingredients is not protected under copyright law. However, where a recipe or formula is accompanied by substantial literary expression in the form of an explanation or directions, or when there is a collection of recipes as in a cookbook, there may be a basis for copyright protection.’

3.6 Patents

The UK Intellectual Property Office says:

A patent is an exclusive right granted by government to an inventor, for a limited period, to stop others from making, using or selling the invention without the permission of the inventor. When a patent is granted, the invention becomes the property of the inventor, which – like any other form of property or business asset – can be bought, sold, rented or hired. Patents are territorial rights; a UK patent will only give the holder rights within the United Kingdom and rights to stop others from importing the patented products into the United Kingdom.

Effectively, then, a patent is a limited monopoly with a limited life.

Patents are concerned with inventions and processes – how they work, what they do, what they are made of or how they are made. To be patentable your invention must:

  • be new

  • involve an inventive step (i.e. not obvious)

  • be capable of industrial application (i.e. be useful) and

  • not be amongst a list of things that the law says cannot be patented. This list of things that are excluded differs depending on jurisdiction. In the US, pure software patents are allowed, whereas, theoretically, they are not allowed in the UK.

An invention is theoretically not patentable in the UK if it is:

  • a discovery (but in the US, R. Schlafly (1994) obtained US Patent 5,373,560 on two large prime numbers);

  • a scientific theory or mathematical method (but in an attempt to test the system in the US someone did manage to win a patent on ‘Kirchhoff's law’, a long-established scientific theory about the flow of electric current);

  • an aesthetic creation such as a literary, dramatic or artistic work, because this is covered by copyright;

  • a scheme or method for performing a mental act, playing a game or doing business;

  • the presentation of information;

  • a computer program.

Note: Business process patents have been granted in the US and Europe; the most widely known, however, is Amazon's ‘one-click’ patent (US Patent No. 5,960,411, ‘Method and system for placing a purchase order via a communications network’). In 1998 the US Patent and Trademark Office issued 125 business process patents for ways of doing business on the internet. In 1999 there were 2,600 applications for ‘computer-related business method’ patents, and the numbers have been increasing year by year since then. Online ordering with a credit card can only be done in a limited number of practical ways, so granting someone a monopoly right over such a process could create a problem for commerce.

Note: Computer program patents are also the subject of much controversy. Software patents are allowed in the US (despite a 1972 Supreme Court decision which stated that a computer program was not patentable, Gottschalk, Acting Commissioner of Patents v Benson et al., 409 U.S. 63 (1972)) and have been granted in the EU. Article 10 of the 1994 international TRIPS (Trade-Related Aspects of Intellectual Property Rights) agreement declares that computer programs ‘shall be protected as literary works under the Berne Convention’. This suggests that software is protected by copyright, not patents.

Here's another quote from the UK Intellectual Property Office:

In addition, it is not possible, in the UK, to get a patent for an invention if it is a new animal or plant variety; a method of treatment of the human or animal body by surgery or therapy; or a method of diagnosis.

In the US, patents on medical treatments are allowed. In the mid-1990s a doctor sued a fellow eye surgeon over a medical procedure he claimed to own (Pallin v Singer, March 1996). This led to a limited change in the law after the public outrage it created. Doctors can still patent surgical procedures but other doctors are not liable for infringing the patent if they use the procedure in the course of their practice.

The UK statute, however, also includes a get-out clause regarding the list of things that cannot be patented. It reads: ‘… the foregoing provision shall prevent anything from being treated as an invention for the purposes of this Act only to the extent that a patent application relates to that thing as such.’

The result of this is that about 15 per cent of patent applications in the UK are for software-based applications, and the proportion of these which are granted is the same as the proportion of all patent applications granted. The situation with regard to software patents is quite fluid in Europe at the moment. The European Commission is currently considering a proposal to allow the patenting of software or, as it is called in the proposal, ‘computer implemented inventions’.

As with copyright, a patent does not mean that someone has a monopoly on an idea. What the patent holder theoretically owns is the practical application of the idea, not the idea itself. (Remember that with copyright it is the expression of the idea that the copyright holder owns.) A criticism of the current patent system is that it goes beyond the notion of a patent as a practical application of an idea and allows ownership of broad concepts, such as the composition of the CCR5 gene (a gene that is involved in HIV infection), which might be useful in some future medical treatment.

The distinction is so subtle that even the experts cannot agree. Just think of it as the difference between a patent on a new fishing rod and a patent on the idea of catching fish.

Patenting has created a lot of controversy in the areas of biotechnology, communications technologies (e.g. computers and the internet, hardware and software) and business processes. Biotechnology, chemical and drugs companies (some long established and some startups) have been filing patents on tens of thousands of genes and gene sequences without necessarily fully understanding what they do or how they might be used to treat disease (the most publicised incentive for doing the gene research). Computer, communications and retailing companies have successfully filed patents on computer programs and ways of doing business that have been criticised as not involving any inventive step. Tim O'Reilly, a prominent critic of such patents, believes ‘It defies common sense, that just because we can do it on the internet, it's a new invention’. He runs O'Reilly & Associates, a publishing and information company dealing in leading-edge computing technologies.

You do not have to remember all these details on patents.

3.7 Trademarks and licences

3.7.1 Trademarks

The main area of litigation in relation to trademarks and the internet has been domain names, where typically the holders of a web address are accused of being ‘cybersquatters’, deliberately exploiting the good name associated with a well-established trademark. One of the longest-running disputes in this area is over the ownership of the ‘sex.com’ domain name. The area is important: one young UK computer science graduate, for example, found himself in prison as a result of a dispute over domain names related to well-known companies such as Sainsbury's, Virgin, Marks and Spencer, and Boots. Nevertheless, we will not be looking at that or any other trademark cases in detail.

3.7.2 Licences

A legal licence is simply authority to do or use something which, without that licence, would be wrong or illegal or would infringe someone's rights. Think of it as a right to use someone's property, granted by the owner. The licence sets out the conditions of use, almost like a contract. An ordinary licence can be revoked but, just to confuse things, we may or may not be able to revoke a ‘contractual licence’. So we can license the use of software or copyrighted content (e.g. a play) or patented technology or a trademark or a design. Licences are not just applicable to intellectual property, though. We can license someone to occupy a flat or house or land, for example. A licence granted by the government, or its delegated regulatory authority, can also give permission to own something (e.g. a dog) or sell something (e.g. alcohol).

3.8 So what is intellectual property?

Intellectual property rights are limited monopolies, granted for a limited period, in order to ‘promote the progress of Science and useful Arts’, thereby acting as a benefit to the public. The quoted phrase comes from Article 1, Section 8 of the American Constitution. In other words, if we understand we can make money from intellectual property, we are encouraged to create intellectual property and exclusively make it available to the public, at a price, for a limited time. When the time limit on the monopoly expires, the intellectual property reverts to the public domain, i.e. public ownership. The author or creator or inventor can thus make a living and the wider public is enriched by having access to an ever-increasing vat of ideas. It's a win-win situation, in theory. Of course, in ‘real space’ (and cyberspace), things are a bit more complicated than that.

Note: There is a complex series of developments on intellectual property currently taking place at European level. These developments may be referred to in the media from time to time during your study of this unit. It is worth looking out for stories on three areas:

  1. European-level decisions about software patents.

  2. Cases or developments arising from the UK government's implementation (in the autumn of 2003) of the 2001 EU directive on the Harmonisation of Certain Aspects of Copyright and Related Rights in the Information Society.

  3. Implementation at member state level of the European Commission's 2004 directive on the enforcement of intellectual property rights and any cases arising from this.

Only one thing is impossible for God: to find any sense in any copyright law on the planet.

(Mark Twain)

3.9 The three-layers model of the internet

The three-layers model is nicely described in The Future of Ideas. Re-read the section on layers on pages 23–25 (linked below) and think about the examples given.

Click 'view document' to open Chapters 2 and 3 of The Future of Ideas.

3.9.1 The Concept of a Model

Much of the argument in Lessig's book revolves around Yochai Benkler's idea that a communications network can be understood in terms of three ‘layers’ – physical, code/logical and content. Lessig uses this insight as a tool for communicating some of his key ideas and as a tool for analysis. It helps him communicate because it gives his readers a framework for thinking about something – the internet – that seems like an amorphous mass to most people: this is why on page 25, for example, he seeks to relate the three layers to more familiar everyday concepts like Speakers' Corner, Madison Square Garden and cable TV. And the Benkler notion provides him with a way of analysing the issue of which aspects of the Net ought to be treated as a commons and which ought not.
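
To make the three layers a little more concrete before we examine the model's limits, here is a minimal sketch that labels some familiar parts of the internet with Benkler's layers. The particular examples in the dictionary are mine, chosen for illustration; the model itself does not prescribe them.

```python
# A minimal sketch of Benkler's three-layers model applied to the internet.
# The example entries are illustrative choices, not part of the model itself.

benkler_layers = {
    "physical": ["telephone wires and fibre-optic cables",
                 "routers and switches", "your own computer"],
    "code":     ["TCP/IP", "HTTP", "the end-to-end design principle"],
    "content":  ["web pages", "music files", "the text of this unit"],
}

for layer, examples in benkler_layers.items():
    print(f"{layer} layer: {', '.join(examples)}")
```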

But the Benkler insight, though powerful, is not all-powerful. It doesn't map exactly onto all cases of real-world communication systems. This is because it is only a model. I want to explore here what that means.

A model is a simplified representation of some aspect of reality constructed for a particular purpose. A model can exist in a person's head only, invisible to everyone else. Or it can also exist external to the modeller, for example, as a diagram on paper or a piece of text or mathematics.

The key features of a model are (i) that it is a simplification, and (ii) that it has been constructed or articulated for a particular purpose.

To illustrate this, let's look at the famous map of the London Underground, which can be downloaded from www.tfl.gov.uk.

This is a model because it's a simplified representation of an aspect of reality (London and its tube network) constructed for a particular purpose. It's simplified in various ways – for example, it represents the lines as mainly straight segments, whereas in reality they curve and wind their way underneath the capital's streets. And it exists for a particular purpose: to help travellers find their way from one tube station to another; it's of very limited use to anyone trying to navigate through the actual streets of the city. The simplification is helpful because it clears away the complexity of the city's topography. And it's brilliantly useful for its designated purpose, but not much use for other purposes.

Benkler's layer model is similar. It provides a comprehensible simplification of a very complex reality. There are hundreds of other ways of representing the internet. For example, some scientists at University College London constructed and maintained maps of the network. You can find them at www.cybergeography.org/atlas/atlas.html.

These maps are fascinating and useful models for some purposes. But they are of little use if your purpose is to understand issues relating to intellectual property and regulation on the Net. For that we need something like the Benkler model. And if you find that it's not perfect or it's difficult to relate to other aspects of communication, then that's simply a reflection of the fact that it is a model and therefore, by definition, simplified and specific.

Let's look at another example. Suppose I want to give one of my neighbours a box of books because my children have outgrown them. This involves another simple communications system. Is it possible to decipher the three layers in this transaction?

The content layer would be the print and pictures in the books.

The physical layer would be the pages and covers of the books, as well as the box.

What about the code layer? This would be governed by the social protocols (unwritten rules of behaviour) involved in the relationship I have with my neighbour. How well do I know my neighbour? How often do we see each other? How do we greet each other (with a ‘hello’ or a handshake or both)? How will my neighbour react to the offer? And so on. This is a complex series of norms worked out over a period of time and in accordance with locally accepted social behaviour. These dictate the nature of the transaction or negotiation. It might be possible for me to do something as simple as running into the neighbour and saying ‘Would that grandson of yours like to have a look through a box of books my gang have outgrown to see if he'd like any of them?’ Or it might be something much more involved, being careful not to upset the sensibilities of someone who may perceive the offer as an insulting handout or an attempt to fleece them for some cash. Either way, it is the code layer which is the most difficult of the three to see and understand, even when giving away a bunch of Dr Seuss books.

Since the middle code layer is the most complex to understand, primarily because it is beyond most people's experience, it should also be the layer that we are most careful about tampering with. Arguably, it is easier to see whether changes at the content or physical layers are effective or operating according to the stated intentions for putting those changes in place. When controls are introduced at the code layer, it is more difficult for us to determine whether they are effective or whether they are generating unintended consequences which need to be addressed.

Take the box of old books as an example. Suppose my neighbour phones me at the office to say that her grandson is coming round at the weekend. I've asked my secretary to hold all calls for a couple of hours because I'm working on this unit and need to do it without interruption. I've told my neighbour she can call at any time and now she feels insulted because I'm refusing to take her phone call without knowing, at that point, that she is trying to contact me. I've introduced a temporary control at the code (social protocol) layer, which has the unintended consequence of upsetting my neighbour. When I get the message that she called, I may or may not find out that I have upset her when I talk to her again. I may just be puzzled because she is slightly frosty towards me. It could get even more complicated if my secretary happens to be taken ill shortly after speaking to my neighbour. I don't get the message that the neighbour has called. Unaware that she has tried to contact me I then don't make an effort to get in touch promptly in return, adding insult to injury. The unintended consequences come from introducing an apparently simple control to the code layer.

In the case of the internet, the middle code layer and a large part of the original top content layer were free. This was sufficient to facilitate massive innovation. The ‘end-to-end’ rules of the code layer and the sharing of knowledge at the content layer are at the heart of the ‘innovation commons’ that Lessig repeatedly tells us about. It is not necessary for all the layers to be free, or part of a commons, in order to allow for innovation. It is the mixing of free and controlled resources that allows innovation to thrive.

When you are face to face with a difficulty, you are up against a discovery.

(Lord Kelvin)

3.10 Phone networks, monopolies and allowing innovation

As I mentioned earlier, there is some network theory in Chapter 3. This is relatively straightforward to follow if you are already familiar with the basic operation of the internet. It will seem somewhat opaque to those who are not. It is important that you do understand the basics of the technology. The animated movie Warriors of the Net tells you most of what you need to know. If the Chapter 3 material on networks still seems a bit out of reach it would be worth viewing the animation again at this point.

The internet is a network of networks that sometimes runs on the telephone wires. Warriors of the Net demonstrates the ‘end-to-end’ (e2e) nature of this network of networks. Intelligence is kept at the ends, an example being ‘Mr IP’. The networks' routers and switches just move packets around indiscriminately.
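
To illustrate the ‘intelligence at the ends’ idea, here is a minimal sketch (all names invented) in which a router forwards packets purely on their destination address, never inspecting or discriminating on the payload, while the endpoints decide what the data means.

```python
# A minimal sketch of the end-to-end (e2e) principle: the network is 'dumb'
# and simply moves packets to their destinations; the intelligence about what
# the data means lives in the applications at the ends. Names are invented.

class Packet:
    def __init__(self, destination, payload):
        self.destination = destination
        self.payload = payload


class DumbRouter:
    """Forwards packets by destination only; never looks at the payload."""
    def __init__(self, routing_table):
        self.routing_table = routing_table   # destination name -> endpoint

    def forward(self, packet):
        self.routing_table[packet.destination].receive(packet)


class Endpoint:
    """An 'intelligent' end: it decides how to interpret what arrives."""
    def __init__(self, name):
        self.name = name
        self.received = []

    def receive(self, packet):
        self.received.append(packet.payload)


alice, bob = Endpoint("alice"), Endpoint("bob")
router = DumbRouter({"alice": alice, "bob": bob})

# Any application - a web page, a song, a brand-new invention - travels the
# same way, because the router neither knows nor cares what it is carrying.
router.forward(Packet("bob", b"<html>a web page</html>"))
router.forward(Packet("bob", b"data from some entirely new application"))
print(bob.received)
```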

Chapter 3, ‘Commons on the Wires’, tells how the internet was built on top of a controlled physical network layer – primarily the telephone wires owned by AT&T.

AT&T, a powerful gatekeeper, controlled innovation by controlling access to the resources needed to innovate – the wires – the physical layer of the telephone network. AT&T's view of Paul Baran's packet-switching design was: ‘It can't possibly work, and if it did, damned if we are going to allow the creation of a competitor to ourselves.’

There was a conflict between the interests of the gatekeeper and the interests of innovation generally – in this case, the development of a more efficient network.

How did we get from the point where AT&T were refusing to allow an alternative network to the creation of the internet? There were three reasons:

  1. Network designers chose to use this e2e design at the code layer. This meant that the network owners could not discriminate against any data packets that happened to be travelling on their networks.

  2. The legal regulations governing the telephone networks changed in such a way that AT&T and other phone companies could no longer control what people could connect to their wires.

  3. Shared code and knowledge at the content layer provided people with the resources to innovate.

This e2e design and government regulation (law) together kept the wires of the telephone system open for innovation. It's the same with roads and with the electricity grid (imagine what it would be like if we had to get the permission of the electricity company every time we wanted to plug something new into a socket). In the previous section we noted that architecture and law would be the two most important regulators of behaviour in relation to the internet. We can now begin to see why. Architecture and law acting together meant that anyone could connect their invention to the network and the network would run it. The inventor just had to pay the neutral access fee to the telephone company. Recall that by ‘neutral’ we mean that the fee is affordable and applies equally to everyone.

The telephone network and the networks connected to it, which together made up the internet, became an ‘innovation commons’, a ‘free’ resource available to anyone with a modem, a neutral access fee and a bright idea.

"We are moving into a world in which the information is controlled increasingly by those who are not totally disinterested in the outcomes produced by the system."

(Stephen Carter)

Figure 4: mindmap of Chapter 3

3.11 Activity 2

In this activity we will consider the idea of a commons and non-rivalrous resources in more detail. First you will be asked to visit a few websites, via links provided, to get a taste of the kinds of non-rivalrous commons that are available on the internet. Then you will assess two of the sites under a series of headings to help you think through the idea of a commons.

Activity 2

Take a quick look at the websites below. They each operate in a manner which suggests that they are part of the internet's commons at the content layer. Select two of the sites to explore in more depth and make notes about the following aspects of the two sites that you have chosen:

  • Summarise what the site is about and assess whether it is indeed a non-rivalrous commons.

  • Note your view about the reliability of the material and whether the site has any kind of bias.

  • Decide whether the material would exist as a commons if the internet did not exist.

  • If you think it would, how would it affect the scale of the commons community?

  • RFC 1958

  • Center for the Public Domain

  • Internet Archive

  • Project Gutenberg

  • Eldritch Press

  • Creative Commons

  • H2O Project

  • An atlas of cyberspace

  • Freedom to Tinker

We have done a sample analysis of one of the sites to give you an idea of how to analyse the ones you have chosen.

Discussion

RFC 1958 is located in the Request for Comments archive of the Internet Engineering Task Force.

Summary

This is the Request for Comments (RFC) document mentioned by Lessig in The Future of Ideas (pp. 36–37). It aims to re-state the principles behind the end-to-end architecture of the internet.

RFCs were invented when a group of graduate students from different universities who were all working on the initial internet host sites met in 1968 to discuss how the finer details of the system would work. One of them, Steve Crocker, wrote up the gist of what they had discussed and sent it round with a label ‘Request for Comments’ (RFC). This became the style and format for the development of all the subsequent networking protocols, and the group began calling itself the Network Working Group (NWG). Vint Cerf and Jon Postel were members of this original group. Postel was later to play a pivotal role in the development of the Net, as the editor of its RFC (working papers) archive and the architect of the Domain Name System. The working methods established by the students of the NWG are significant because they laid down the governing ethos of the internet.

  • Firstly, the discussions were open in practice as well as in spirit – anyone within the ARPA (Advanced Research Projects Agency) community could comment on the NWG's working papers, and they were freely available online as soon as that became practicable. This tradition continues to the present day.

  • Secondly, the ideas outlined in RFCs were tested through the process of peer review, which is common in science but less so in technology. (Companies don't usually make their technologies freely and easily available to competitors for review.) This meant that errors were quickly discovered and equally quickly remedied.

  • Thirdly, protocols were arrived at by consensus, not by decree or the directive of some central controller. Dave Clark of MIT, one of the pioneers of the Net, once said: ‘We reject: kings, presidents and voting. We believe in: rough consensus and running code.’ The ultimate arbiter of whether ideas were accepted by the NWG was not who proposed them but whether they worked.

Reliability of the material and potential bias

The RFC was written by Brian E. Carpenter, Group Leader, Communications Systems, Computing and Networks Division, CERN, European Laboratory for Particle Physics.

CERN is a well reputed international research organisation and the home of the key developer of the World Wide Web, Tim Berners-Lee, so the information would appear to have considerable authority.

The RFC archive is maintained by the IETF (The Internet Engineering Task Force). As explained on the IETF site:

The Requests for Comments (RFCs) form a series of notes, started in 1969, about the internet (originally the ARPANET). The notes discuss many aspects of computer communication, focusing on networking protocols, procedures, programs, and concepts but also including meeting notes, opinion, and sometimes humor. For more information on the history of the RFC series, see 30 Years of RFCs.

The early RFCs include a trove of history about the early development of computer communication protocols, from which modern internet technology was derived.

As publisher of RFCs, the RFC Editor is responsible for the final editorial review of the documents and attempts to maintain the standards of the series. The RFC Editor function is funded by the Internet Society.

The RFC editor maintains a master repository of all RFCs, which can be retrieved in segments. The RFC editor also maintains a master RFC index that can be searched online.

From the IETF site again:

The Internet Engineering Task Force (IETF) is a large open international community of network designers, operators, vendors, and researchers concerned with the evolution of the internet architecture and the smooth operation of the internet. It is open to any interested individual.

So we would expect the information from both of these sources to be neutral and unbiased.

Does RFC1958 represent a non-rivalrous commons?

Yes – the RFCs were developed as a means of sharing information and ideas. Each document could be read and commented upon by anyone with access to the internet, so they form a commons of ideas, solutions to problems and a historical record. No matter how many times the documents are read they still remain there for others to read, so they are a non-rivalrous resource.

Would the material exist as a commons without the Net, and how would the potential community be affected?

It's possible that such documents could be retained in, and distributed widely amongst, libraries. In such a form they would not be as accessible, and the level of awareness about them and the potential community would be significantly smaller. I might suggest that as part of this unit you visit your local library, ask whether they have copies of the RFCs and, if not, ask them to order them for you; but the number of students who would actually make that trip to the library would be significantly smaller than the number who clicked on the RFC link.

Imagination is the beginning of creation. You imagine what you desire, you will what you imagine and at last you create what you will.

(George Bernard Shaw)

3.12 Summary and SAQs

In Section 2 we looked at the social and economic significance of the internet and outlined some of the important ideas in the unit.

In this section we have looked in more detail at two key ideas:

  1. the commons – a resource freely or neutrally available to a community;

  2. the three-layers model of the internet.

In addition, you have learned about the importance of the commons in the evolution of the internet. Shared resources at the content and code layers, as well as neutral laws, meant that there were no gatekeepers who could control innovation. The result was explosive innovation, leading to developments such as the World Wide Web.

We briefly considered the differences between rivalrous and non-rivalrous resources, and you read a short introduction to intellectual property (copyrights, patents and trademarks).

3.12.1. Self-assessment Questions (SAQs)

1. What do we mean by ‘commons’?

Answer

A commons is a resource where nobody gets to decide how it gets used or who gets to use it. It is open for anyone to use. There are no gatekeepers who can control access to that resource. The original design of the internet is an example. The network had no intelligence. The network could not decide which kinds of innovations would be permitted and which would not. The right to innovate on the internet was open to everyone equally and network owners (originally the phone companies), for example, could not control that innovation.

2. What are the three layers of the internet?

Answer

The content, code and physical layers.

3. What does Lessig mean by the term ‘architecture’?

Answer

It can mean lots of things depending on the context in which he uses the term. Generally he thinks about architecture as the structure or built environment of the internet (you can think of it as the laws of nature of the internet). Frequently he uses the term interchangeably with the term ‘code’.

4. How did we get to the point where the internet was created, when AT&T were refusing to allow an alternative network?

Answer

Three reasons:

  1. Network designers chose to use the e2e design at the code layer. This meant that the network owners could not discriminate against the data travelling on their networks.

  2. The legal regulations governing the telephone networks meant that phone companies could not control what people could connect to their wires.

  3. Shared code and knowledge at the content layer gave people the resources to innovate.

Oliver Wendell Holmes' definition of history: One damned thing after another.

Make sure you can answer these three questions relating to Section 3 before you continue:

  • What is a commons?

  • What is the three-layers model?

  • What does the history of the internet teach us about facilitating innovation?

Record your thoughts on this in your Learning Journal.

3.13 Study guide

On completing this part of the unit, you should have:

  • read Section 3;

  • read Chapters 2 and 3 of The Future of Ideas;

  • completed Activity 2;

  • answered the self-assessment questions;

  • recorded your notes in your Learning Journal.

In this section you have dealt with two of the most important ideas in the unit: the commons and the three layers. You have also been coping with information economics, the importance of the commons in the history of the internet, the neutral regulations of phone networks and a crash course in intellectual property!

‘Commons’ might still feel like a complicated or even somewhat awkward concept. However, the examples coming up in Section 4, of innovation arising out of this ‘innovation commons’, should help to reinforce your understanding. Right from the start of the unit we have been referring to the explosive innovation that arose out of the internet, but up to now we have not talked in terms of specific examples. Section 4 and Chapter 8 of The Future of Ideas are devoted to the examples, such as Napster. When I first read The Future of Ideas, I found Chapter 8 to be one of the most interesting in the book. I hope you find it equally enjoyable.

Well done again for making it this far – you've already got through a lot of work. The next section looks at innovations that have been enabled by the Net.

4 The revolution: disruptive innovations enabled by the Net

4.1 Creativity and innovation before and after the Net

The real world (‘real space’ as it is called in The Future of Ideas) is a world of physical things. These are governed by the laws of nature (or the laws of physics). When Lessig talks about ‘real space architecture’, he is referring to the laws of nature, not just buildings and bridges. The laws of nature limit what can be done in real space. We cannot travel faster than the speed of light, for example. Gravity prevents us from jumping from a standing start over the top of a tall tree, without some kind of artificial aid such as a jet-propelled backpack. The laws of nature – the ‘architecture’ or the code layer – of the internet are different and impose different constraints and freedoms. This internet architecture is entirely artificially created and can, therefore, be changed.

This section looks at the kinds of thing that restricted creativity before the internet existed and how some of those constraints were released when the Net came along. It also looks at some examples of the explosion of innovation facilitated by the Net. It is based on Chapters 7 and 8 of The Future of Ideas, plus a small part of Chapter 3.

Chapters 7 and 8

Chapter 7 looks at creativity in the arts and commerce during Lessig's ‘dark ages’ – before the internet came along – and whether copyright laws provided a balance between free and controlled resources.

Chapter 8 looks at some of the important innovations arising out of the Net, such as Napster and peer-to-peer technologies.

Read Chapters 7 and 8 of The Future of Ideas, linked below.

Click 'view document' to open Chapters 7 and 8 of The Future of Ideas.

Any sufficiently advanced technology is indistinguishable from magic.

Arthur C. Clarke

4.2 Creativity in the ‘dark ages’

Chapter 7 analyses creativity in the arts and commerce prior to the internet, imagining the real world divided up into content, code and physical layers. Along the way we learn of the author's concerns about the expansion of copyright, an expansion which he believes is undermining the traditional balance between the rights of the copyright holder and those of the general public (and hence a special subgroup of the general public – future creators).

At the content layer, some of the history of copyright's interaction with new technology is described. The stories of how copyright law coped with the advent of player pianos and cable TV are good examples of how balance was maintained in the face of new technology. In the case of the player pianos, for example, compulsory licensing meant that the rights of the sheet music publishers were balanced with those of the people who had created new markets with new technology. The sheet music publishers got paid for the use of their work in an innovative way but did not get to control the new market.

Despite acknowledging that balance was maintained in those two cases, Lessig still believes that the trend is towards giving more control to copyright holders. He is concerned about the increase in scope (the number of things covered), but especially vexed about the increase in the term (timespan) of copyright.

We should note that Lessig was the lead counsel in a case challenging a 1998 law extending the term of copyright by 20 years. The case, Eldred v Ashcroft (see below) was heard by the US Supreme Court in the Autumn of 2002. The decision of the Supreme Court was handed down in January 2003. Lessig's client lost.

Eldred v Ashcroft

This case is outlined on pages 196–199 of The Future of Ideas. There have been a number of developments since the book was published, the most significant being that the US Supreme Court heard the case in the Autumn of 2002 and made a decision in January 2003.

Lessig and Eldred claimed that the 1998 Copyright Term Extension Act was unconstitutional. The US constitution gives Congress the power to grant authors an exclusive right to their writings for ‘limited times’. Lessig argued before the Court that Congress had extended the term of copyright eleven times in forty years and that allowing Congress to repeatedly extend copyrights undermines the ‘limited times’ provision of the Constitution.

The Supreme Court rejected that argument by a 7:2 majority. Justice Ginsburg, who wrote the majority opinion, said that the Supreme Court was ‘not at liberty to second-guess congressional determinations and policy judgments of this order, however debatable or arguably unwise they may be’.

Eric Eldred has a short web page about the case, and Harvard University's Berkman Center OpenLaw facility, which has done the legal legwork supporting Eldred, maintains a web page with the details of the case, if you are interested in looking into it further. The Department of Justice also have the legal documents available at their website. Lessig's immediate reflections on the loss were recorded in his weblog at Stanford on 16 January 2003:

So I've got to go get onto a plane to go to my least favorite city (DC). My inbox is filling with kind emails from friends. Also with a few of a different flavor. It's my nature to identify most closely with those of the different flavor. David Gossett at the law firm of Mayer Brown wrote Declan, ‘Larry lost Eldred, 7–2’. Yes, no matter what is said, that is how I will always view this case. The constitutional question is not even close. To have failed to get the Court to see it is my failing.

The Wind Done Gone case, which is also mentioned in this section of The Future of Ideas, was settled out of court in May 2002, with the Mitchell Trust representatives agreeing to let Alice Randall's book go ahead, labelled as an unauthorised parody. The details of the settlement are confidential, but included Randall's publishers, Houghton Mifflin, making a financial contribution to Morehouse College. The publishers maintain a website on their perspective of the case.

The statistics quoted on page 117 of The Future of Ideas illustrate that control of the media has become concentrated in a few hands. This is important in relation to Lessig's arguments in a later part of the book.

Chapter 7 concludes that, owing to the laws of nature – that is, the ‘architecture’ or code layer of the real world – creativity was largely controlled before the Net existed. It was controlled at the code and physical layers because only a few have the resources to market and distribute books, papers and CDs, for example. Those with such resources act as or employ gatekeepers such as editors, and they decide what creative work gets published and distributed.

Figure 5: mindmap of Chapter 7

Click to view larger version of the mindmap

… before we sing ‘Happy Birthday’ in a large crowd we had better call a lawyer.

(Lawrence Lessig, referring to the fact that the Birthday song is still copyrighted and, according to the 21 February 2001 edition of the Orlando Sentinel, still earns nearly $1 million a year in royalties)

4.3 Innovation from the internet

Recall from Section 2 how the three layers stacked up in the controlled versus commons stakes: the physical layer (the wires) was controlled, while the code layer and a large part of the content layer were free.

Right from the start of the unit we have been referring to the explosive innovation that arose out of the internet, but up to now we have not talked in terms of specific examples. Chapter 8 is devoted to the examples.

It is suggested that specific innovations such as Napster even cross over into free or uncontrolled parts of the physical layer, through using the ‘dark matter’ of the internet. Clay Shirky has written a useful essay on this, ‘PCs are the dark matter of the internet’, in which he says:

PCs are the dark matter of the internet. Like the barely detectable stuff that makes up most of the mass of the universe, PCs are connected to the internet by the hundreds of millions but have very little discernable effect on the whole, because they are largely unused as anything other than dumb clients (and expensive dumb clients to boot). From the point of view of most of the internet industry, a PC is nothing more than a life-support system for a browser and a place to store cookies.

The internet essentially offered a platform for new technologies (MP3) and products (audio and video streaming), new means of distribution and a vast new marketplace. It opened new markets for things like poetry and allowed new entrants like My.MP3 and Napster into the music distribution business (just like the player piano innovators of an earlier era). It offered all these in the thick of a mix of free and controlled resources. Lessig feels that originally the balance between free and controlled resources was right for innovation, but that has now changed.

The My.MP3 and Napster stories are probably the most important in Chapter 8, and I would like to focus on Napster here.

4.3.1 Napster – pirates' tool or celestial jukebox?

We will look at the legal arguments in the Napster case later in the unit. Lessig argues that Napster has a number of features that make it a useful technology beyond what many people see as its core function – to share music files. It can be used to exchange preferences about music, helping to increase the overall demand for music. The increased demand could then be satisfied by Napster itself or by the usual retail outlets.

Lessig says:

But the extraordinary feature of Napster was not so much the ability to steal content as it is the range of content that Napster makes available … A significant portion of the content served by Napster is music that is no longer sold by the labels. This mode of distribution – whatever copyright problems it has – gives the world access to a range of music that has not existed in the history of music production … What Napster did more effectively than any other technology was to demonstrate what a fully enabled ‘celestial jukebox’ might be. Not just for the music that distributors want to push … but also for practically any recording with any fans using the service anywhere, the music was available.

I'm quoting directly from page 131 of the book here because it was one of the passages that made the biggest impact on me when I first read it. Napster seemed a pretty clear case where copyright holders had a legitimate complaint. There is no denying that Napster had significant copyright problems or that lots of people did use it to get Madonna's or Britney Spears' latest songs. However, this notion that it provided universal access to an unlimited range of music is a strongly reasoned one. Advocates, including Lessig, often use unfair tactics, some of which we will explore later, to get their point across. This, however, is a good example of using reason, through a story, to persuade someone to the advocate's point of view.

Let's look at another technology – guns. Gun ownership is protected by the US Constitution even though guns clearly have uses that infringe the law – e.g. shooting people. The employees of Smith & Wesson don't go to bed fearing that they might be jailed because somebody has used one of their guns to shoot a police officer. Crowbars, kitchen knives and a huge range of everyday objects have uses that infringe the law. Yet Napster was shut down by the courts because it is a tool that can be, and has been, used to infringe the music copyrights. It does, however, have positive features and functions that do not break any laws.

Whatever the detailed legal merits of the Napster case and the problems Napster created for the music industry, it is a technology that has ‘substantial non-infringing uses’.

Chapter 8 includes a similar analysis of P2P technologies. There is also a case study of P2P later in this section.

Figure 6: mindmap of Chapter 8

Click to view larger version of the mindmap

I would rather be exposed to the inconveniences attending too much liberty than those attending too small a degree of it.

Thomas Jefferson

4.4 Case study 1: the Web

Read the following extract from Chapter 3 of The Future of Ideas, describing the birth of the World Wide Web: from ‘The consequences of this to e2e are many. The birth of the World Wide Web is just one...’ on page 41 to the end of the second paragraph on page 44, ending ‘It would be a ‘good idea’ if people used it, and people were free to use it because the internet's design made it free.’

Click 'view document' to open extract from Chapter 3 of The Future of Ideas.

4.4.1 The World Wide Web as an example of internet-enabled disruptive innovation

Most great technological innovations are made over substantial periods of time by large groups of individuals. The striking aspects of the Web are that:

  • it was created in a very short time by a small group of individuals in which one person played a dominant role;

  • it was invented by taking earlier ideas about hypertext, which were freely available, and building on them; and

  • once invented, it was disseminated freely and with astonishing rapidity via the internet.

In that sense, the Web is an excellent example of Lessig's argument about how the internet-as-commons enables innovation.

4.4.2 Origins of the Web

The prime mover in the creation of the Web was an English physicist named Tim Berners-Lee, who was employed as a software engineer by CERN, the multinational particle physics research laboratory in Geneva. CERN is a vast organisation, doing research of great complexity. Much of its experimentation is done by teams of visiting physicists who come and go. Maintaining coherent documentation in such circumstances was an almost impossible task.

The task that Berners-Lee set himself was to design a radically new approach to the structuring of information that would meet the needs of such an idiosyncratic organisation. At the beginning of 1989 he put forward a proposal on how this might be achieved.

The problem was that CERN, by its very nature, had a very high turnover of people because physicists from participating countries were continually coming and going. With two years as the typical length of stay for researchers, information was constantly being lost. Introducing new people demanded a fair amount of their time, and of others' time, before they had any idea of what went on at CERN. The technical details of past projects were sometimes lost forever, or recovered only after a detective investigation in an emergency. Often the information had been recorded but simply could not be located.

If a CERN experiment were a static, once-and-for-all event, all the relevant information could conceivably be contained in one enormous reference book. But, Berners-Lee observed,

… as it is, CERN is constantly changing as new ideas are produced, as new technology becomes available, and in order to get around unforeseen technical problems. When a change is necessary, it normally affects only a small part of the organisation. A local reason arises for changing a part of the experiment or detector. At this point, one has to dig around to find out what other parts and people will be affected. Keeping a book up to date becomes impractical, and the structure of the book needs to be constantly revised.

Examples of the kinds of information needed were:

Where is this module used? Who wrote this code? Where does this person work? What documents exist about that concept? Which laboratories are included in that project? Which systems depend on this device? What documents refer to this one?

What was needed, he argued, was some kind of linked information system.

4.4.3 Principles of the design

But what sort of system? After criticising the types of linking which were then in vogue – for example, the hierarchical tree-structures exemplified by the Help systems of mainframe computers, or those which relied upon indexed keywords – Berners-Lee went on to propose a solution to CERN's problem. It was a relatively old idea called ‘hypertext’ – i.e. non-linear text in which one part of a document contained links that could take one to other parts of the same document or to other documents.

The special requirements of a hypertext system for CERN were, he believed, that it should: allow remote access across networks; be heterogeneous (i.e. allow access to the same information from different types of computer system); be non-centralised; allow access to existing data; enable users to add their own private links to and from public information, and to annotate links as well as nodes privately; and enable ‘live’ links to be made between dynamically changing data.
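
The essence of the hypertext system Berners-Lee was proposing can be sketched very simply: documents contain named links to other documents, and a reader can follow any link rather than reading linearly. The toy example below uses invented document names (not CERN's actual records) purely to show that non-linear, linked structure.

```python
# A minimal sketch of hypertext: documents hold links to other documents,
# and a reader can jump along any link instead of reading linearly.
# Document names and text are invented for illustration.

documents = {
    "experiment-overview": {
        "text": "The detector project depends on the readout module.",
        "links": {"readout module": "readout-module"},
    },
    "readout-module": {
        "text": "Maintained by A. Physicist. See the project contact list.",
        "links": {"project contact list": "contacts"},
    },
    "contacts": {
        "text": "A. Physicist works in Building 31.",
        "links": {},
    },
}

def follow(start, link_names):
    """Follow a chain of named links, printing each document visited."""
    current = start
    print(documents[current]["text"])
    for name in link_names:
        current = documents[current]["links"][name]
        print(documents[current]["text"])

follow("experiment-overview", ["readout module", "project contact list"])
```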

The proposal was accepted by CERN management. In November 1990, Berners-Lee wrote a program called a browser which provided a virtual window through which the user could see the resources held on various internet computers as a ‘web’ of linked information sources. It refracted, as it were, a world of disparate information sources in such a way that they appeared as a uniform whole.

He then wrote another program – a hypertext ‘server’ which would hold documents and dispense them as requested by browsers.

4.5 Case Study 1: release of the WWW software and protocols

By Christmas 1990 demo versions of both browsers and a prototype Web server were available, enabling users to access hypertext files, articles from internet news (i.e. discussion) groups and files from the help system of one of the CERN computers. By March of the following year, the line-mode browser was released to a limited audience to run on a variety of different minicomputers. On 17 May 1991 the WWW software was generally released on central CERN machines. In August, information about the project and the software were posted in relevant internet news groups. In October gateways were installed enabling the browsers to access Help files on the VAX VMS operating system and WAIS servers on the internet. Finally, in December, the CERN computer newsletter announced the Web to the world of high energy physics. It had taken just over a year from the moment Berners-Lee had typed the first line of code.

Berners-Lee's underlying model of the Web was what is known in computer-speak as a ‘client-server’ one. That is to say, he envisaged a system in which information would be held on networked computers called servers, and that these would be accessed by client programs (browsers) running on other networked computers. Servers, in this model, are essentially givers, while clients are always takers (though they give some information about themselves to servers at the moment of interaction). The central tasks in building such a system were to design and write programs which would enable computers to act as servers and clients, to create a common language in which both could converse, and to set up some conventions by which they could locate one another.
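
A minimal sketch of the client-server pattern, using nothing but Python's standard library, is shown below. The port number and the directory served are arbitrary choices made for illustration; they are not part of Berners-Lee's design.

    import threading
    from http.server import HTTPServer, SimpleHTTPRequestHandler
    from urllib.request import urlopen

    # Server: holds resources and dispenses them when clients ask for them.
    server = HTTPServer(("127.0.0.1", 8000), SimpleHTTPRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Client: connects to the server and requests a resource (here, a listing
    # of the directory the server was started in).
    with urlopen("http://127.0.0.1:8000/") as reply:
        print(reply.status)      # 200 if the request succeeded
        print(reply.read(120))   # the first few bytes of the response body

    server.shutdown()

Even in this toy version the asymmetry is visible: the server waits and gives, the client asks and takes.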

The client-server model was already well established in the computer business when Berners-Lee started work. There were innumerable computers on the Net that operated as servers, and there were several ways of extracting information from them. At the lowest level, you could use the primitive TELNET facility to log onto a remote computer and (if you had the necessary permissions) run programs on it. Or you could use the FTP program to log on remotely and download files. (This indeed was – and remains – the standard way to transfer programs and data across the Net.) And there were search facilities like GOPHER and WAIS for locating information.

In order to make use of these facilities, however, users needed to know what they were doing. Accessing the Net before Berners-Lee was akin to using the MS-DOS operating system – you could do almost anything provided you knew the lingo. The trouble was that the lingo was user-hostile. The pre-WWW Net was, wrote Robert Reid,

an almost militantly egalitarian and cooperative community. Nobody owned the network. Virtually nobody made money from it directly. Almost every piece of software that governed or accessed it was free (the people who wrote it generally did so from the goodness of their hearts, or to make names for themselves, or as parts of funded projects). But its egalitarianism aside, the internet's tight de facto admission requirements of technical acumen, access and pricey tools also made it a very elite realm.

One of the central tasks that Berners-Lee faced in creating the Web was the lowering of this threshold. He achieved it partly by inventing an interface – a program which stood between the user and the vast and disparate information resources of the Net.

In seeking a model for this interface he drew heavily on ideas which had been circulating since the mid-1960s in the academic hypertext community, where the problem of navigating through a virtual space of linked texts had been addressed through the notion of a ‘browser’, i.e. a virtual window which displayed the structure of the space.

Creating the Web was not just a matter of writing the code for browsers, however. Because the Net, with its incredible diversity, was central to the project, Berners-Lee had to invent a way of ensuring that publicly available information resources held on any networked computer anywhere in the world could be accessed through the browser.

The only way to do this was to create a set of protocols by which different computers could talk to one another and exchange information. One protocol (analogous to the IP convention for internet addressing) had to specify the location where information was held. For this Berners-Lee invented the ‘Uniform Resource Locator’ or URL.
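
Python's standard library can pull a URL apart into its components, which makes the structure easy to see. The address used here, that of an early CERN web page, is chosen purely as an example.

    from urllib.parse import urlparse

    # A URL names the scheme (how to talk to the server), the host (which
    # machine holds the resource) and the path (which resource is wanted).
    parts = urlparse("http://info.cern.ch/hypertext/WWW/TheProject.html")
    print(parts.scheme)   # 'http'
    print(parts.netloc)   # 'info.cern.ch'
    print(parts.path)     # '/hypertext/WWW/TheProject.html'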

Another protocol was needed to specify how information exchange between computers should be handled. For this he created the ‘Hypertext Transfer Protocol’, HTTP (analogous to FTP). And finally he had to invent a uniform way of structuring documents. For this he proposed ‘Hypertext Mark-up Language’, or HTML, as a subset of the Standard Generalised Mark-up Language (SGML) tagging system which was already established in the electronic publishing business.

URLs and HTML are thus pretty straightforward. HTTP, the protocol which controls how computers issue and respond to requests for information, is more complicated. Berners-Lee summarised it as ‘a generic stateless object-oriented protocol’. In non-technical language, what HTTP essentially does is to prescribe how the four stages of a Web transaction – connection, request, response and close – should be conducted.
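
The four stages can be seen directly by speaking HTTP ‘by hand’ over a raw network socket. The sketch below assumes any reachable public web server; example.com is used purely for illustration.

    import socket

    host = "example.com"                         # any public web server will do
    sock = socket.create_connection((host, 80))  # 1. connection
    request = ("GET / HTTP/1.1\r\n"
               f"Host: {host}\r\n"
               "Connection: close\r\n\r\n")
    sock.sendall(request.encode("ascii"))        # 2. request
    response = b""
    while chunk := sock.recv(4096):              # 3. response
        response += chunk
    sock.close()                                 # 4. close
    print(response.split(b"\r\n")[0].decode())   # e.g. 'HTTP/1.1 200 OK'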

Looking back, it is not so much the elegance of Berners-Lee's creation which is striking, but its comprehensiveness. In just over a year he took the Web all the way – from the original conception, through the hacking out of primitive browsers and servers, to the creation and elaboration of the protocols needed to make the whole thing work.

The Web went public on 15 January 1992 when the line-mode browser developed at CERN was made available by the process known as ‘anonymous FTP’. That is to say, anyone with a Net connection and a copy of the FTP program could call up the CERN site, log in without having to use a password, and download the browser code. Other researchers – many not based at CERN – were already busy developing graphics-based browsers. In April 1992 a Finnish client for UNIX called ‘Erwise’ was released, and this was followed in May by the release of Pei Wei's Viola graphical browser (also for UNIX computers). By July, CERN was distributing all the code (server and browsers, including Viola) from its servers.

In November 1992 there were 26 known WWW servers in existence, including one at the US National Center for Supercomputing Applications (NCSA) at the University of Illinois at Champaign-Urbana. Two months later, the number of servers had almost doubled, to 50. By internet standards this was encouraging growth, but nothing spectacular: the Web was still just one of the many applications running across the network. Compared with email, FTP and other traffic, the HTTP traffic was still pretty small beer. And then there was another spike of innovation – early in 1993 a student in Illinois launched a browser that triggered the explosive growth of the Web. It was called Mosaic.

4.6 Case study 1: the second wave of innovation – the graphical browser

Mosaic was created in just under three months by an undergraduate, Marc Andreessen, and a programmer, Eric Bina, working round the clock. Bina wrote most of the new code, in particular the graphics, modifying HTML to handle images, adding a GIF (graphics interchange format) decoder and colour management tools. Like all good programmers, he did it by adapting software tools that already existed – particularly the UNIX Motif toolkit – and were freely available. Andreessen's contribution was to take apart the library of communications code provided, again freely, by CERN and rewrite it so it would run more quickly and efficiently on the network. Between them the two wrote Mosaic's 9,000 lines of code (compare that with the 11 million lines of Windows 95), in the process producing the most rapidly propagated piece of software written up to that time.

On 23 January 1993, Andreessen released the first version of Mosaic onto the Net. In a message posted to internet discussion groups, he signalled to the internet community that the software was now available for downloading across the network. Having posted the message, he then sat back to monitor the log automatically kept by the NCSA (National Center for Supercomputing Applications) server as it responded to download requests. Within ten minutes of first posting the message, someone downloaded Mosaic. Within half an hour, a hundred people had it. In less than an hour Andreessen was getting excited email from users all over the world. It was the Net equivalent of that moment when Mission Control says ‘We have lift-off’.

Thereafter the UNIX version of Mosaic spread like a virus through the worldwide computing community. The Mac and PC versions followed shortly afterwards. Within a few months it was estimated (nobody at that stage was keeping precise records) that the downloads numbered hundreds of thousands.

Objective measures of the impact of Mosaic also began to emerge. For example, in March 1993 – just a month after the official release of the alpha version of the UNIX browser – HTTP traffic accounted for just 0.1 per cent of the traffic on the part of the Net known as the NSF backbone. By September, there were over 200 known servers and HTTP accounted for one per cent of backbone traffic – i.e. it had multiplied tenfold in a little over five months.

Mosaic was not the first browser, but it was the one that captured the market and shaped the future. This was partly because versions of it soon ran on simple desktop computers as well as on fancy UNIX workstations. It also had something to do with the fact that it was the first browser that looked like a piece of modern, personal computer software: it had things like buttons and scroll bars and pulldown menus.

But perhaps the most significant thing about Mosaic was that it was designed to interpret a new HTML element – <IMG>, the image tag. In doing so it allowed Web pages to include images for the first time, thereby making them potentially much more attractive to the legions of people who would be turned off by slabs of hypertext.

The addition of the <IMG> tag was an extension to Berners-Lee's HTML and a precursor of the various attempts that have been made since then to extend the standard in various ways. At the time, the decision to extend HTML to handle images alongside text was controversial in some quarters, mainly because image files tend to be much bigger than text files. A full-colour A4 picture, for example, runs to dozens of megabytes.
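
A rough back-of-the-envelope check of that claim, assuming an uncompressed scan at 300 dots per inch in 24-bit colour (higher resolutions push the figure up further):

    # Uncompressed size of a full-colour A4 image (assumptions: 300 dots per
    # inch, 24-bit colour, no compression).
    width_px = round(8.27 * 300)     # A4 is roughly 8.27 x 11.69 inches
    height_px = round(11.69 * 300)
    bytes_per_pixel = 3              # 24-bit colour
    size_mb = width_px * height_px * bytes_per_pixel / 1_000_000
    print(round(size_mb))            # about 26 MB before compression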

Interestingly, Berners-Lee was also very critical of the addition of the <IMG> tag. Andreessen recalls being ‘bawled out’ by him in the summer of 1993 for adding images to the thing. The frivolity that the visual Web offered worried its inventor because ‘this was supposed to be a serious medium – this is serious information’. What this exchange portended was the change in perspective that was to fuel the Web's phenomenal growth from that point onwards. Berners-Lee and his colleagues saw their creation as a tool for furthering serious research communications between scientists. The programmers at NCSA were more pragmatic, less judgmental. They knew that the facility for adding images to pages would make the Web into a mass medium.

After Mosaic appeared, the Web entered a phase of explosive growth. The program spread very rapidly across the world. As it did so, the numbers of people using the Net began to increase exponentially. As the number of users increased, so also did the numbers of servers. And as people discovered how simple it was to format documents in HTML, so the volume of information available to Web users began to increase exponentially. It was a classic positive feedback loop.

The fallout from this explosion is clearly visible in the statistical data collected by Matthew Gray at the Massachusetts Institute of Technology (MIT), which show the traffic over the National Science Foundation (NSF) internet backbone broken down by the various protocols.

NSFNET Backbone Usage Breakdown

Date % ftp % telnet % netnews % irc % gopher % email % web
Mar 93 42.9 5.6 9.3 1.1 1.6 6.4 0.5
Dec 93 40.9 5.3 9.7 1.3 3.0 6.0 2.2
Jun 94 35.2 4.8 10.9 1.3 3.7 6.4 6.1
Dec 94 31.7 3.9 10.9 1.4 3.6 5.6 16.0
Mar 95 24.2 2.9 8.3 1.3 2.5 4.9 23.9
Source: www.mit.edu

What this table shows is that in two years, the volume of internet traffic involving Web pages went from almost nothing to nearly a quarter of the total.

The spread of the Web was like the process by which previous communications technologies had spread – but with one vital difference. It's a variant on the chicken and egg story. In the early days of the telephone, for example, people were reluctant to make the investment in the new technology because there were so few other people with telephones that it was hardly worth the effort. The same was true for electronic mail. ‘Who would I send email to?’ was a common lament from non-academics in the early days of electronic mail. But once a critical mass of users in one's own intellectual, occupational or social group had gone online, suddenly email became almost de rigueur.

A great difference between the Web and the telephone was that whereas the spread of the telephone depended on massive investment in physical infrastructure – trunk lines, connections to homes, exchanges, operators, engineers and so forth – the Web simply built on an existing infrastructure (the internet, which itself was built originally on the physical layer of the telephone network). And because the internet was an ‘innovation commons’ in Lessig's sense, with an ‘end-to-end’ architecture, there was no agency which could have stopped the innovation – no gatekeeper who could have argued that sending web pages was not an ‘appropriate’ use of the network.

"Anyone who has lost track of time when using a computer knows the propensity to dream, the urge to make dreams come true and the tendency to miss lunch."

Tim Berners-Lee

4.7 Case Study 2: Peer-to-Peer (P2P) networking

Re-read the following extract from Chapter 8.


The history of disruptive technologies – think of the automobile – is often one of ongoing struggle between technical innovation and social control. New developments create new possibilities and, with them, new threats to the established order. There follows a period of chaos, innovation and change while the old order is thrown into apparent disarray; then, after a burst of institutional reform and adaptation, a measure of social control is reasserted over the disruptive technology. And so the process goes on.

Looking at the internet from this perspective, we can see a similar pattern. Because it is an innovation commons, new technologies continually emerge, and some of them prove disruptive. In recent years, some of the most disruptive newcomers have been grouped under the heading of ‘peer-to-peer’ or P2P networking. The purpose of this case study is to:

  • explain the P2P concept;

  • describe several types of P2P technology;

  • discuss the implications of P2P.

4.7.1 The P2P concept

P2P is shorthand for ‘peer-to-peer’. Strictly speaking, a ‘peer’ is an equal – someone of the same status as oneself. In ordinary life, this is why trial by jury is sometimes described in terms of ‘the judgement of one's peers’. In computer networking, peers are computers of equal status.

To understand P2P, it's helpful to analyse the development of the internet in three phases. Let us call them Internet 1.0, 2.0 and 3.0.

Internet 1.0

The internet as we know it today dates from January 1983. From its inception in 1983 to about 1994 the entire internet had a single model of connectivity. There were relatively few dial-up (modem) connections. Instead, internet-connected computers were always on (i.e. running 24 hours a day), always connected and had permanent internet (IP) addresses. The Domain Name System (DNS) – the system that relates domain names like www.cnn.com to a specific internet address (in this case 207.25.71.30) – was designed for this environment, where a change in IP address was assumed to be abnormal and rare, and could take days to propagate through the system. Because computers had persistent connections and fixed addresses, every computer on the network was regarded as a ‘peer’ of every other computer. It had the same status and functions. In particular, each computer could both request services from another computer and serve resources (e.g. files) on request. In the jargon of the business, each computer on the early internet could function both as a client and as a server. The system was a true peer-to-peer network.
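
The name-to-address translation that the DNS performs can be seen with a single call in Python; the address returned today will, of course, differ from the one quoted above.

    import socket

    # Ask the DNS to translate a human-readable name into the IP address
    # currently registered for it.
    print(socket.gethostbyname("www.cnn.com"))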

Internet 2.0

This situation changed after the World Wide Web appeared. As mentioned previously, the Web was invented at CERN by Tim Berners-Lee in 1990, and the first popular Web browser was Mosaic, created at the NCSA at the University of Illinois in 1993. With the appearance of Mosaic, and the subsequent appearance of the Netscape browser in 1994, Web use began to grow very rapidly and a different connectivity model began to appear. There was suddenly an explosive demand from people outside the academic and research world for access to the internet, mainly because they wanted to use the Web. To run a Web browser, a PC needed to be connected to the internet over a modem, with its own IP address. In order to make this possible on a large scale, the architecture of the original peer-to-peer internet had to be distorted.

Why? Well basically because the newcomers could not function as ‘peers’. There were several reasons for this:

  • Personal computers were then fairly primitive devices with primitive operating systems not suited to networking applications like serving files to remote computers.

  • More importantly, PCs with only dial-up connectivity could not, by definition, have persistent connections and so would enter and leave the network ‘cloud’ frequently and unpredictably.

  • Thirdly, these dial-up computers could not have permanent IP addresses for the simple reason that there were not enough unique IP addresses available to handle the sudden demand generated by Mosaic and Netscape. (There is a limit to the number of addresses of the form xxx.xxx.xxx.xxx when each xxx is limited to numbers between 0 and 255, as stipulated in the original design of the Internet Protocol; a rough calculation of this limit is sketched below.) The work-around devised to overcome the addressing limit was to assign Internet Service Providers (ISPs) blocks of IP addresses which they could then assign dynamically (i.e. ‘on the fly’) to their customers, giving each PC a different IP address with each new dial-up session. A subscriber might therefore be assigned a different IP address every time she logged on to the Net. This variability prevented PCs from having DNS entries and therefore precluded PC users from hosting any data or net-facing applications locally, i.e. from functioning as servers. They were essentially clients – computers that requested services (files, web pages, etc.) from servers.
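
The addressing limit mentioned in the last item above comes straight from the arithmetic of a 32-bit address:

    # Upper bound on the number of distinct addresses of the form
    # xxx.xxx.xxx.xxx, where each xxx runs from 0 to 255.
    print(256 ** 4)   # 4294967296 - about 4.3 billion, i.e. 2 ** 32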

Internet 2.0 is still basically the model underpinning the internet as we use it today. It is essentially a two-tier networked world, made up of a minority of ‘peers’ – privileged computers (servers within the DNS system with persistent, high-speed connections and fixed IP addresses) providing services to a vast number of dial-up computers which are essentially second-class citizens because they cannot function as servers and only have an IP address for the duration of their connection to the Net. Such a world is, as we will see later in the unit, potentially vulnerable to governmental and corporate control, for if everything has to happen via a privileged server, and servers are easy to identify, then they can be targeted for legal and other kinds of regulation.

Internet 3.0: a distributed peer-to-peer network?

Internet 2.0 made sense in the early days of the Web. For some years, the connectivity model based on treating PCs as dumb clients worked tolerably well. Indeed it was probably the only model that was feasible. Personal computers had not been designed to be part of the fabric of the internet, and in the early days of the Web the hardware and unstable operating systems of the average PC made it unsuitable for server functions.

But since then the supposedly ‘dumb’ PCs connected to the Net have become steadily more powerful, and the speed and quality of internet connections have steadily improved – at least in the industrialised world. Figures released in September 2004 suggested that 41 per cent of the UK population had broadband internet connections. On the software side, not only have proprietary operating systems (e.g. Microsoft Windows and Apple Macintosh Mac OS) improved, but the open-source (i.e. free) software movement has produced increasingly powerful operating systems (for example Linux) and industrial-strength web-server software (for example the Apache Web server, which powers a large proportion of the world's websites). As a result, it has become increasingly absurd to think of PCs equipped in this way as second-class citizens.

It is also wasteful to use such powerful computers simply as ‘life-support systems for Web browser software’. The computing community realised quickly that the unused resources existing behind the veil of second-class connectivity might be worth harnessing. After all, the world's Net-connected PCs have vast amounts of under-utilised processing power and disc storage.

Early attempts to harness these distributed resources were projects like SETI@Home, in which PCs around the globe analysed astronomical data as a background task when they were connected to the Net. More radical attempts to harness the power of the network's second-class citizens have been grouped under the general heading of ‘peer-to-peer’ (P2P) networking. This is an unsatisfactory term because, as we have seen, the servers within the DNS system have always interacted on a peer-to-peer basis, but P2P has been taken up by the mass media and is likely to stick.

The best available definition of P2P is ‘a class of applications that takes advantage of resources – storage, cycles, content, human presence – available at the edges of the internet. Because accessing these decentralised resources means operating in an environment of unstable connectivity and unpredictable IP addresses, P2P nodes must operate outside the DNS system and have significant or total autonomy from central servers’.

The two most widely known P2P applications are Instant Messaging (IM) and Napster. IM is a facility which enables two internet users to detect when they are both online and to exchange messages in ‘real time’ (i.e. without the time lag implicit in email). Napster was a file-sharing system that enabled users to identify other users who were online and willing to share music files. I use the past tense because Napster was eventually closed down by litigation instituted by the record companies, as mentioned in Section 4.3, but the file-sharing habits that it engendered have persisted, so the phenomenon continues.

The problem that both IM and Napster had to solve was that of arranging direct communication between two ‘second-class citizens’ of the internet, i.e. computers that have non-persistent connections and no permanent IP addresses. The solution adopted in both cases was essentially the same. The user registers for the service and downloads and installs a small program, called a ‘client’, on their computer. Thereafter, whenever that computer connects to the Net the client program contacts a central server – which is inside the DNS system and running special database software – and supplies it with certain items of information.

For IM the information consists of:

  • notification that the user machine is online;

  • the current assigned IP address of the PC.

The database then checks the list of online ‘buddies’ registered by the user to see if any of them are currently online. If they are, the server notifies them and the user of this fact, enabling them to set up direct communications between one another.

For Napster the information supplied by the client consisted of:

  • notification that the user computer is online;

  • the current assigned IP address;

  • a list of files stored in a special folder on the user's hard drive reserved for files that he or she is willing to share.

A typical Napster session involved the user typing the name of a desired song or track into a special search engine running on the Napster server. The server would then check its records to find the IP addresses of logged-in computers which had that file in their ‘shared’ folders. If any records matching the query were found, the server would notify the user, who could then click on one of the supplied links and initiate a direct file transfer from another Napster user's computer.
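
The brokering idea can be sketched in a few lines of Python. This is a toy, in-memory illustration only: the class and method names are made up and bear no relation to Napster's actual code, and no networking is involved.

    class Index:
        """A toy central index of who is online and what they are sharing."""

        def __init__(self):
            self.online = {}   # username -> (current IP address, shared files)

        def register(self, user, ip_address, shared_files):
            # Called by the client program each time the user's PC comes online.
            self.online[user] = (ip_address, set(shared_files))

        def search(self, filename):
            # Return the addresses of logged-in peers offering the requested file.
            return [(user, ip) for user, (ip, files) in self.online.items()
                    if filename in files]

    index = Index()
    index.register("alice", "62.31.10.5", ["song_a.mp3", "song_b.mp3"])
    index.register("bob", "81.14.2.77", ["song_b.mp3"])

    # The requester then contacts a returned peer directly; the file itself
    # never passes through the central index server.
    print(index.search("song_b.mp3"))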

Watch the short animation, linked below, of how Napster works.


4.8 Case study 2: The implications of Napster

Napster was significant for several reasons:

  • It demonstrated Lessig's point about the way in which an innovation commons facilitates technological development. Napster was the implementation in software of a set of simple but ingenious ideas. It was created by a few individuals who had access to few resources beyond their own ingenuity and a facility for writing computer software. Because it could be realised via the open internet, it did not have to be approved by any gatekeeper. And it was phenomenally successful, attracting a subscriber base of nearly 60 million users in its first eighteen months of existence.

  • It was disruptive in two senses. Firstly, it challenged the basic business model of a powerful and established industry. The record business was built around the provision of recorded music in the form of multi-track albums released in CD format. What Napster revealed was a huge potential market for tracks rather than albums. It also revealed the potential of the Net as a medium for the distribution of tracks, something that the industry had up to that point ignored. Secondly, Napster was disruptive because it undermined conventional ways of protecting the intellectual property embodied in recorded music. Most of the music files shared via the system were copyrighted, which led the music industry to attack Napster as a vehicle for wholesale piracy.

  • It provided a glimpse of how the internet could evolve from the server-dominated Internet 2.0 into something rather different – a network in which the hitherto underprivileged computers at the edges of the network (‘the dark matter of the internet’) could serve content as well as request it. Napster showed that PCs on the periphery of the Net might be capable of more ambitious things than merely requesting web pages from servers. In that sense, Napster can be seen as the precursor of what we might call Internet 3.0 – a new system of distributed peer-to-peer networking.

  • Finally, it overturned the publishing model of Internet 2.0 – the idea that content had always to be obtained from the magic circle within the DNS system. Instead Napster pointed to a radically different model that the internet commentator Clay Shirky calls ‘content at the edges’. The current content-at-the-centre model, Shirky writes, ‘has one significant flaw: most internet content is created on the PCs at the edges, but for it to become universally accessible, it must be pushed to the center, to always-on, always-up Web servers. As anyone who has ever spent time trying to upload material to a Web site knows, the Web has made downloading trivially easy, but uploading is still needlessly hard.’ Napster relied on several networking innovations to get around these limitations: it dispensed with uploading and left the files on the PCs, merely brokering requests from one PC to another. The MP3 files did not have to travel through any central Napster server; PCs running Napster did not need a fixed internet address or a permanent connection to use the service; and it ignored the reigning paradigm of client and server. Napster made no distinction between the two functions: if you could receive files from other people, they could receive files from you as well.

In the end, the legal challenges to the original Napster led to its demise. Its critical vulnerability was the fact that the system required a central server to broker connections between its dispersed users.

The record companies succeeded in convincing a Californian court that the service was illegal and Napster was finally shut down in 2001.

But although Napster the company was quashed, the idea that it had unleashed has continued to thrive (and indeed Napster was relaunched as a legitimate subscription-based music downloading company by new owners towards the end of 2003). So many users (especially young people) had acquired the file-sharing habit that Napster-like services have continued to proliferate, and it is said that more internet users are sharing music files now than at the height of the Napster boom.

4.9 Case study 2: beyond Napster – the new P2P technologies

Because of its reliance on a central server, Napster proved vulnerable to legal attack. But other, genuinely distributed, P2P technologies now exist which may be less susceptible to challenge. Freenet and Publius, for example, are file-distribution systems that use the resources of computers at the edge of the internet to store and exchange files without relying on any centralised resource.

Watch the short animation, linked below, of how P2P software works.


In thinking about these P2P technologies it is important to remember that a file-sharing system does not just exist for illegally sharing copyrighted material. The files that are shared can be perfectly legitimate documents. And in a world where ideological censorship is rife and where conventional Web publication is vulnerable to political and legal attack, it may be very important to have methods of publishing that ensure the free dissemination of ideas. From this perspective, Publius is a particularly interesting development. It is a Web publishing system designed to be highly resistant to censorship and provide publishers with a high degree of anonymity. It was developed by programmers working for AT&T. Publius was the pen name used by the authors of the Federalist Papers, Alexander Hamilton, John Jay, and James Madison. This collection of 85 articles, published pseudonymously in New York State newspapers in 1787–88, was influential in convincing New York voters to ratify the proposed United States constitution.

Publius encrypts and fragments documents, then randomly places the pieces, or keys, onto the servers of volunteers in a variety of locations worldwide. The volunteers have no way of knowing what information is being stored on their server. Software users configure their browser to use a proxy which will bring the pieces of the document back together. Only a few keys out of many possibilities are needed to reconstruct a document. Avi Rubin (Shreve, 2000), the lead inventor of the system, claims that even if 70 per cent of the websites are shut down, the content is still accessible. Only the publisher is able to remove or alter the information.
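
Publius's actual protocol is considerably more elaborate, but the property that ‘only a few keys out of many possibilities are needed’ is that of a threshold scheme. The sketch below uses one well-known threshold scheme, Shamir's secret sharing, with an integer standing in for a document's encryption key; it illustrates the general idea only, not Publius's own code.

    import random

    PRIME = 2 ** 127 - 1   # a prime larger than any key we will split

    def split(secret, n_shares, threshold):
        """Split an integer secret into n_shares; any `threshold` of them suffice."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]

        def poly(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

        return [(x, poly(x)) for x in range(1, n_shares + 1)]

    def recover(shares):
        """Reconstruct the secret by Lagrange interpolation at x = 0."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

    key = random.randrange(PRIME)           # stands in for a document key
    shares = split(key, n_shares=10, threshold=3)
    assert recover(shares[:3]) == key       # any three shares rebuild the key
    assert recover(shares[4:7]) == key      # ... any three at all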

It is impossible to know at this stage whether P2P technologies will indeed ‘turn the internet inside out’, as one commentator has put it. But they already offer powerful tools to groups interested in the free exchange of ideas and files online, especially if those ideas or files are likely to be controversial. Rubin, for example, has declared that his greatest hope is that Publius will become an instrument for free speech, a tool that could enable dissidents living under oppressive governments to speak out without fear of detection or punishment. Libertarianism may have discovered a new tool kit.

4.9.1 Some P2P legal developments

In April 2003 a US federal judge held that Grokster and Streamcast Networks, owners of Morpheus, could not be held liable for copyright infringement (Morpheus is based on Gnutella). The judge said that unless the P2P companies were involved actively and substantially in participation in the infringement, ‘Grokster and Streamcast are not significantly different from companies that sell home video recorders or copy machines, both of which can be and are used to infringe copyrights.’

The RIAA and MPAA appealed the ruling and the US Supreme Court heard arguments in the case in March 2005. A transcript of the oral argument before the court is available at the EFF's MGM v Grokster archive, which provides up-to-date developments in the case. This archive includes the oral argument, linked below, heard on 3 February 2004 in the Appeal Court.

Oral argument from MGM v Grokster (69 minutes).


The US Copyright Office also have a page devoted to the Grokster case.

Note: The Supreme Court decided the Grokster case largely in favour of MGM on 27 June 2005, referring it back to the lower district court to consider the question of damages and operating injunctions. You can read my initial analysis of the decision. In October 2007 Judge Stephen Wilson ordered the last remaining defendant in the case, Streamcast Networks, to install filters to inhibit copyright infringement. See http://b2fxxx.blogspot.com/2007/10/grokster-rides-again.html.

In June 2003, another US federal court ruled against another P2P service, Aimster (also known as Madster).

There are ongoing proposals for laws in the US which could lead to the jailing of file sharers. In June 2004 US lawmakers introduced the INDUCE (Inducement Devolves into Unlawful Child Exploitation) bill. By August 2004 the name of this proposed law was changed to the Inducing Infringement of Copyrights Act of 2004. Whilst the intentions behind the INDUCE act are to regulate P2P copyright infringement, critics say that it could be used to sue any computer or electronics manufacturer for selling PCs, CD burners, MP3 players or mobile phones.

The passing of the European Union intellectual property rights enforcement directive in March 2004 could lead to raids on alleged filesharers' homes.

Also in March 2004, California Attorney General Bill Lockyer (also president of the National Association of Attorneys General) circulated a letter to fellow state attorneys general calling P2P software a ‘dangerous product’ that facilitates crime and copyright infringement. The letter appears to have been drafted by a senior vice president in the MPAA, however. In August 2004 an updated version of this letter was sent to P2P United, the trade body for P2P companies, urging the companies to ‘take concrete and meaningful steps to address the serious risks posed to the consumers of our States by your company's peer-to-peer (“P2P”) file-sharing technology.’

With rapidly changing technologies and laws, all we can really say is that the impact of P2P in the copyright arena is still evolving, and that P2P is not just about file sharing but cuts across a wider spectrum of issues, such as civil rights and computer crime.

4.10 Case Study 3: My.MP3

MP3.com bought 40 000 CDs and copied them into a big database. They then gave anyone who could prove they had a copy of any of those CDs access to the music over the internet in return for a fee.

Lessig explains the My.MP3 service on pages 127–129 of The Future of Ideas and concludes, ‘This service by MP3.com made it easier for consumers to get access to the music they had purchased. It was not a service for giving free access to music … MP3.com's aim was simply to make it easier to use what you'd already bought.’

Arguably MP3.com had increased music sales by 40 000 CDs – the copies the company itself had bought. It had, however, copied a large number of CDs for commercial gain.

Lessig argues in Chapter 11 that the My.MP3 service:

  • did not facilitate theft;

  • increased the value of individual CDs because it allowed buyers to listen to their music from anywhere; and

  • since anyone could create a database of their own CDs in order to access them via the Net, it did not really change anything. The incentive for people to create such databases would increase if a service such as My.MP3 were not allowed. This could lead to increased copying. This presupposes, of course, that large numbers of music lovers have the technical skills and the patience of Lessig's former colleague Jonathan Zittrain, who did just that.

When the music industry sued MP3.com, Judge Rakoff, who decided the case in New York, said, ‘The complex marvels of cyberspatial communication may create difficult legal issues; but not in this case.’ He imposed massive damages on MP3.com because he felt the copyright infringements were ‘clear’ and ‘wilful’, and shut down the My.MP3 service.

In February 2005, MP3.com founder Michael Robertson launched a new music downloading service, MP3tunes.com, offering a catalogue of 300 000 songs. The service will not employ digital rights management or copy protection technologies, which have become the standard with other downloading services such as Apple iTunes.

The light bulb was not invented by the candle industry looking to improve output.

(Joab Jackson)

4.10.1 Films and copyright

The My.MP3.com case is about where we listen to music and whether ‘space shifting’ is allowed, i.e. if I can prove I own the CD, I can listen to it from anywhere that I have internet access.

Another more recent case deals with a conflict over how we watch a film. The Directors Guild of America (DGA) has sued a number of companies that make DVD playback software. The software runs as the film is played and allows the viewer to skip certain scenes they may want to avoid, such as those containing violence or bad language. The software acts as a sort of automatic remote control.

Yale's Ernest Miller criticises the DGA: ‘Ultimately, the issue is one of control. Technology has given consumers the ability to control how they watch movies in their homes, and the DGA wants to take that control away by banning the technology.’

The DGA has a different view: ‘The Directors Guild of America will vigorously protect the rights of its members, and we are confident that any efforts to legitimize the unauthorized editing and alteration of movies will be resoundingly defeated.’

Some of these companies are offering their services over the internet. The DGA sees it as a straightforward case of infringing copyright by making so-called ‘derivative works’ for commercial gain. Miller and other critics of the DGA suggest it is all about control, and that these software makers could even increase the market for the films by allowing them to be viewed by people who would otherwise shun them. Note: In July 2006 a federal court judge agreed with the DGA, despite the US Congress having enacted the Family Movie Act of 2005 in the interim, which basically enabled filtering as long as no fixed copy of the edited film was made.

4.11 The elusiveness of Lessig's concept of code

We have come to a point in the unit and The Future of Ideas where the use of the terms ‘code’ and ‘architecture’ might be getting a bit confusing. Surely ‘code’ is one of the three main layers of the internet, as described in Chapter 2? So how can ‘code’ also appear in the ‘content’ layer? What's this ‘code of cyberspace’ on page 35? Is it architecture? Software? Hardware? Law? In other parts of the book there is talk of ‘real space code’.

Unfortunately, Lessig's use of the term ‘code’ is often ambiguous, or at least he uses the term to mean different things at different points. So it is important to be alert to the context in which the term is used. For the purposes of the unit, think of ‘code’ or ‘architecture’, which are often used interchangeably, as:

  • the architecture – structure or built environment – of the internet;

  • the middle ‘logical’ layer of the internet;

  • law;

  • hardware and software technology;

  • software programs and applications, such as word processors, web browsers or email systems – these run at the ‘content’ layer.

So if ‘code’ is used in a confusing way, just go over that sentence or paragraph again, whilst asking the question:

Does he mean

  • architecture

  • layer

  • law

  • technology or

  • software applications?

You don't need to remember all these different uses of the term ‘code’. However, if it is something you are finding confusing, it might be worth printing out a copy of this page and keeping it nearby as you read.

The first rule of business is: Do other men, for they would do you.

(Charles Dickens)

4.12 Activity 3

Many children use the internet to help with their homework. They download images and documents to help with, and be incorporated into, the projects they are working on. We have already seen in Section 3 that copyright law applies to the internet. Children may be infringing copyright when they borrow things from the Net that other people have created.

Using the Net for educational purposes is surely a good thing though?

So how should we advise children about using information found on the internet?

Activity 3: The parent's dilemma

Assume that a friend has asked you by email how to advise her children about using information found on the internet in homework. Your task is to compose a response.

Focus on:

  1. the reliability of the information and how to check it;

  2. how to deal with the copyright issues.

Discussion

1. The first thing to tell children about information on the internet is that it is not always reliable. So pretending that an essay we found on the internet about the sinking of the Titanic is our own work is not only cheating; the claims in it may also be wrong. How is anyone to check whether the information they find on the Net is reliable, then? We will look at this again in the next section of the unit, but there are some basic questions we can ask to check reliability:

  • Do the website owners have a track record of credibility?

  • Is the site well regarded by credible, reliable people and institutions?

  • Who is the site aimed at?

  • Is the site well documented, are the sources of information given, and are these sources reliable?

  • Is it a government (.gov or .gov.uk), commercial (.com or .co.uk), educational (.edu or .ac.uk) or non-profit organisation (.org or .org.uk) site, or a personal website?

  • Is the page kept up to date?

  • Can you verify the information on the site with other reliable sources?

This links back to Activity 1, where the importance of care in observation was noted. It is important for all of us, not just children, to realise that we must observe clearly in order to understand. Blindly copying an essay about the Titanic for a homework assignment will not lead to understanding and may get the child into trouble. Thinking about the content of such an essay, including whether or not it is reliable, may help the child to change their perceptions and absorb new knowledge of an old tragedy. It may also help the child to new insights about things they already know.

2. So using the internet for homework can lead to better understanding. There would seem to be a public interest in allowing children to use the Net for education. Millions of children searching Google for images to include in their homework is potentially a large amount of copyright infringement, however. So how do we resolve the dilemma?

Is there a difference between what it is fair to do (in the public interest) and what it is legal to do? In the UK the law says that use of copyrighted works for educational purposes is ‘fair dealing’ and not an infringement of copyright.

Children can be encouraged, then, to use material from the internet for homework, but they should not usually make multiple copies or publish them on their own websites. They should also be encouraged to acknowledge the source of any material found on the internet that they include in their work.

"It is the mark of an ineperienced man not to believe in luck."

(Joseph Conrad)

4.13 Summary and SAQs

This section has looked at the need for balance, both in intellectual property law and in the deployment of valuable resources. Intellectual property should balance the needs of the creators with those of the consumers of their work, the public. There should also be a balanced mix of privately owned (controlled) and free (commons) resources to allow innovation to flourish. There is an implicit idea in the material in Chapter 7 of The Future of Ideas, borrowed from the earlier part of the book, that before we ask who should control valuable resources we should be asking whether those resources should exist as a commons.

We also looked at some examples of those revolutionary innovations that we referred to earlier in the course – the World Wide Web, My.MP3 and Napster being just three. We briefly touched again on the importance of thinking critically about the ideas being put forward in the unit, and the importance of using reason rather than unfair tactics to persuade. Lessig's perspective on Napster as a celestial jukebox was a good example.

4.13.1 Self-assessment Questions (SAQs)

1. Does the apparent expansion in the scope and term of copyright law raise as many concerns as Lessig suggests? Say why you think it does or doesn't.

Answer

Copyright law is an immensely complicated area. It is impossible to say what the long-term outcome of increases in scope and term will be. The Eldred and The Wind Done Gone cases do raise important questions about the downside of increasing the term of copyright.

For example, what incentive can extending the term of copyright of Gone with the Wind provide to the now-dead author, Margaret Mitchell?

It is clear that large copyright holders have been very concerned about the development of technologies such as Napster, which they see as tools for undermining their businesses. One response has been to encourage legislators, through intensive lobbying, to increase scope and term in order to compensate for their potential losses.

So there are concerns on both sides – those in favour and those against copyright expansion.

2. Did Napster have any real value other than as a tool for stealing music? Say why you think it did or didn't.

Answer

The Future of Ideas makes a strongly reasoned argument that Napster had substantial non-copyright infringing uses. Technologies like Napster could facilitate a celestial jukebox.

3. Do you agree with Lessig that decentralised innovation, facilitated by P2P technologies, will be more productive than controlled innovation? Explain your answer.

Answer

This again is not an easy one to answer. Lessig's analysis of the history of the internet demonstrates the value of decentralised innovation as a result of the internet's innovation commons. Beyond the internet, the many corporations that started out as small innovative ventures in somebody's garage provide a vast amount of empirical evidence for the value of decentralised approaches. The basic argument is the old one we saw in relation to the value of a commons: if there are gatekeepers, they will stifle innovation that they don't perceive to be in their best interests. This applies to incremental innovation, but especially to disruptive innovation.

Established actors not recognising their own best interests was demonstrated in the movie industry's original response to the Video Cassette Recorder (VCR). Universal sued Sony for copyright infringement and eventually lost on a split vote in the US Supreme Court in 1984. At congressional hearings in 1982, while the case was still being fought, the head of the Motion Picture Association of America, Jack Valenti, testified that the VCR's effect on the industry would be equivalent to that of the Boston strangler to a woman home alone. It turned out that sales of video cassettes of films became the industry's biggest revenue generator.

Too many billions of dollars are invested in research every year by large companies, however, to assume that the decentralised approach is the only one.

Which is the more productive, centralised or decentralised? You'll need to decide for yourself, but I would say that we probably need both.

The right to swing my fist ends where the other man's nose begins.

(Oliver Wendell Holmes Jr)

Make sure you can answer these questions relating to Section 4 before you continue:

  • What kinds of innovation has the internet enabled, and why?

  • Why do we need balance in intellectual property law and the deployment of valuable resources?

Record your thoughts on this in your Learning Journal.

4.14 Study guide

On completing this part of the unit, you should have:

  • read Section 4;

  • read Chapters 7 and 8 and pages 41–44 of The Future of Ideas;

  • completed Activity 3;

  • answered the self-assessment questions;

  • recorded your notes in your Learning Journal.

Well done – you've made it to the half-way mark.

Section 5 is a bit different from the previous sections and focuses on critical thinking, a very important skill to help cut through some of the rhetoric seen in the (still limited) public debate on the issues covered in the unit.

I also write an occasional blog at http://b2fxxx.blogspot.com that covers a range of internet law stories that are beyond the scope of this unit. This is written in a particular style and from a specific perspective. Some students like to look at the blog and try to work out the bias, in addition to doing the Section 5 activity.

We will now continue to Section 5, which examines the ‘copyright wars’ and the different tactics and strategies the protagonists used.

5 Copyright wars: rhetoric and facts

5.1 How to critically analyse what you read

This section is largely about how to spot the tactics that people use when they are trying to persuade us to their way of thinking.

Much of Part III of The Future of Ideas is about the reaction of established industries to the innovation produced by the internet. One part of those developments has been popularly labelled the ‘copyright wars’ – essentially battles about changes in the law and technology to protect digital content from piracy.

Note again that the tools on which the protagonists are concentrating to influence behaviour are law and architecture.

Some of the arguments on both sides of these copyright wars provide a rich source of persuasive tactics. We are going to identify these tactics by looking at Web material on the copyright wars.

The lyrics of a Bush/Blair parody song can be found at Bush and Tony.

Here are two questions about the song:

  1. Did you like the song, or at least find it slightly amusing?

  2. Do you like the politicians who are being mocked?

The first and most important rule of critical analysis is to realise that we are all conditioned and predisposed to believe certain stories more than others. This is because of our individual prejudices and values. If we don't like George Bush or Tony Blair, a song that makes fun of them will appeal to us. If we do like them, however, we might find the songs offensive. We will return to the issue of values later in this section.

We will also look at the influence of values, at examples of tactics of persuasion, and how to determine whether an information source is reliable.

Chapter 9

Read Chapter 9 of The Future of Ideas, linked below.


The movie industry is under siege from a small community of professors.

(Jack Valenti)

"When I read that, I had a Monty Pythonesque image of a siege of this massive castle by a tiny number of individuals armed only with insults. ‘Now open your gates,’ they were yelling, ‘or we shall taunt you once again.’ "

(Professor James Boyle, responding to the above)

Further reading: Copy Fights: The Future of Intellectual Property in the Information Age (2002) edited by Adam Thierer and Wayne Crews, Washington, DC, Cato Institute.

5.2 Facts, values and beliefs, or why some issues are controversial

As you will already have detected, the arguments surrounding intellectual property on the internet can be very heated. What I would like to do in this page is to explore why this might be so, and to suggest some concepts which are useful in analysing public debates about controversial subjects.

Most public arguments are conducted in terms of claims that a particular view of an issue is the ‘right’ or ‘correct’ one. Arguments are constructed by assembling ‘facts’ that supposedly prove the truth or viability of a particular line of thought. Often statistics are cited as if they were facts.

The other aspect that you might notice about heated public arguments is that participants are convinced about the validity of their views, even when they cannot cite facts or statistics in support of them. They passionately believe they are ‘in the right’. On the other hand, their opponents believe equally passionately that they are in the right.

What's going on?

In thinking about this, it is useful to distinguish between two kinds of beliefs – facts and values.

A fact is something that one believes to be objectively true. A piece of gold weighs more than a similar sized piece of aluminium. Water boils at 100 degrees centigrade (at a certain pressure). A floppy disk can hold 1.44 megabytes of data. These are all things that can be observed and measured by processes that are not subjective and are agreed upon by all reasonable people.

Values are different. A value is a belief that something is good or bad. That the music of the Beatles is better than Coldplay's; that all convicted murderers should be executed; that private schools are socially divisive and ought to be banned; that abortion is morally wrong under all circumstances.

You could say that facts are beliefs about what is, and values are beliefs about what ought to be.

How is this distinction between facts and values helpful? Well, because many public arguments involve a mix of the two. They are presented as if they are disputes only about facts, whereas they are really about conflicts in values. And this is significant because disputes that are about facts can, in principle, be resolved by some objective process that can establish which assertions are factually correct. One can imagine a kind of impartial court that could adjudicate between the rival claims and reach a judgement acceptable to all.

But conflicts about values cannot be resolved in this way. There is no purely objective process by which the dispute can be resolved. There is no rational process by which someone who believes in capital punishment can convince someone who is opposed to it.

But although value conflicts cannot rationally be resolved, our societies must have ways of settling them. Usually this is done via politics or the legal system. In the UK, for example, capital punishment was abolished by a free vote in Parliament, and abortion is allowed under legislation (also passed on a free vote of MPs). But this does not mean that those in favour of hanging or opposed to abortion are convinced that their values are wrong.

In public debate about social issues, clear-cut ‘facts’ are often in short supply, so people use statistics as proxies for facts. But statistics are not facts in the same way that a belief that gold is heavier than aluminium is a fact. That's why we always need to be very careful in relying on statistical arguments. This is partly because statistics can be used in ways which are very misleading or, at any rate, need very careful interpretation. This gives us the second rule of critical analysis which is:

Be sceptical. Question and interpret with care statistics used to support a particular perspective or set of values.

What does all this mean for us in this unit? Well, first of all, when we examine public controversies we should try to distinguish between their factual content and their value-laden contexts, because the balance may determine whether or not they are resolvable by argument. Secondly, we should be careful about treating statistical data as if they were facts. And thirdly, we should remember that the disputes between the protagonists in the copyright wars arise mainly out of differences in values.

In The Future of Ideas, Lessig worries about ‘fundamental values’ being lost through the evolution of the internet and the manner in which it is regulated. He is mainly concerned about values that he believes are protected by Article 1 of the US Constitution – the freedom to innovate and be creative in order to promote progress in science and arts. Yet many of his opponents, including the US Government in the Eldred case, use the same part of the Constitution to support their arguments. A majority in the Supreme Court resolved that particular values-based dispute in favour of the government in January 2003. If a similar case comes before the Supreme Court in the future there may be a different outcome, depending on the values of the presiding justices in the court at that time.

For the sake of simplicity, we will assume that there are two sides in the copyright wars: those in favour of an expansion in copyright (and intellectual property more generally), and those against the expansion of copyright. Lessig falls firmly into the latter camp.

There are three kinds of lies: lies, damned lies and statistics.

(This quote has been attributed to both Mark Twain and Benjamin Disraeli. Who do you (want to) believe said it first?)

5.3 In defence of statistics

There are a lot of statistics thrown around in the copyright wars. Take this example from the International Federation of the Phonographic Industry (IFPI):

In total, 3.1 million more people were using peer-to-peer networks in March 2002 than in February 2001 when Napster was at its peak. CD burning has also badly hit the European music sector. In Germany, the number of blank CDs used to burn music was estimated at 182 million in 2001, compared to 185 million CD album sales, according to a survey from March 2002 by market research firm Gfk. In Spain, 71 million albums were sold in 2001 compared to an estimated 52 million blank CDs used to burn music, according to a survey by Millward Brown/Alef.

Most people's eyes glaze over at a passage like this, but we still generally accept the statistics without question. Alternatively, we resort to labelling them ‘lies, damned lies and statistics’ if they do not support our point of view.

In defence of statistics, however: statistical analysis is just number crunching according to pre-determined, long-established rules. The perceived ‘lies’ tend to come from unreliable surveys producing unreliable data, and from the interpretation or selective use of the results once the numbers have been crunched. So the lies tend to follow from the abuse of, rather than the use of, statistics.

Not all ‘lies’ arise from a deliberate manipulation of statistics. Sometimes it happens by accident as a result of a misunderstanding. Suppose that a survey says, ‘The number of artists signed each year by the major music labels has doubled since 1970.’ Somebody else interprets that as, ‘Every year since 1970, the number of artists signed by major labels has doubled.’ In the first case, if there was one artist signed in 1970, there would have been two signed in 2002. In the second case, if there was one artist signed in 1970, there would be two in 1971, four in 1972, eight in 1973 and so on, to a huge number in 2002. The statistic has been distorted but people still accept it.
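A quick back-of-the-envelope check in Python shows how far apart the two readings are. The single artist signed in 1970 is, of course, just the made-up starting figure from the example above.

    # Two readings of 'the number of artists signed has doubled since 1970',
    # starting from the made-up figure of one artist signed in 1970.
    start = 1
    years = 2002 - 1970                       # 32 years

    doubled_once = start * 2                  # reading 1: doubled over the whole period
    doubled_every_year = start * 2 ** years   # reading 2: doubled each and every year

    print(doubled_once)        # 2
    print(doubled_every_year)  # 4294967296 - over four billion 'artists'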

In the original T182 course, by kind permission of the BBC, we showed students a couple of short sequences from the television series Yes Prime Minister by Jonathan Lynn and Antony Jay, which gave a useful illustration of the abuse of statistics:

In the first sequence, part of the episode entitled ‘The Grand Design’, the two civil servants, Sir Humphrey Appleby and Bernard Wooley, are discussing surveys. Through a series of cleverly loaded questions, Humphrey tricks Bernard into agreeing that he is both for and against reintroducing national service, hence demonstrating that we should be careful about relying on surveys.

In the second sequence, part of the episode called ‘The Smokescreen’, the Prime Minister is proposing to pursue a campaign to reduce smoking. Sir Humphrey is attempting to dissuade him from such a course of action. The Prime Minister for once gets the upper hand by demonstrating that Sir Humphrey refers to any numbers that support his argument as ‘facts’ and any numbers that undermine it as ‘statistics’, implying that such numbers are unreliable.

I used another word above when talking about statistics: ‘data’. My thesaurus suggests that data is another word for facts. That does not really fit here. Thinking about Bernard's answers to Sir Humphrey's questions in the first clip as data may help. Each ‘yes’ is a piece of data. We can consider data as the building blocks for ‘facts’. So if we use data selectively or put it together in a particular way, the building that becomes our facts can be made to look as we want it to look – another tactic in the copyright wars.

None of this changes the second rule of critical analysis:

Be sceptical. Question and interpret with care statistics used to support a particular perspective or set of values.

Although the underlying mathematics may be sound, statistics tend to be wielded to support particular agendas, so treat them with care.

"Carlyle said ‘A lie cannot live’; it shows he did not know how to tell them."

(Mark Twain)

5.4 Tactics of persuasion

‘Spin’ has become a familiar term in recent years, in the context of employing publicists to ‘spin’ stories in order to present a political party, the government, an organisation or an individual in the best light. There has even been a successful comedy television series starring Michael J. Fox called Spin City based on this. The following is a list of some of the tactics used in the process of trying to spin stories that you should look out for in the copyright wars.

  • Appealing to emotion and prejudice: Remember the first rule of critical analysis; if someone tells us a story we want to hear, we are more likely to believe it. There are a huge number of ways of using this tactic. One example is appealing to nationalism, as in the following quote from Jack Valenti, the President of the Motion Picture Association of America, in his testimony to a congressional sub-committee on the ‘Home recording of copyrighted works’ (i.e. the use of video cassette recorders) in 1982.

    • The US film and television production industry is a huge and valuable American asset. In 1981, it returned to this country almost $1 billion in surplus balance of trade. And I might add, Mr Chairman, it is the single one American-made product that the Japanese, skilled beyond all comparison in their conquest of world trade, are unable to duplicate or to displace or to compete with or to clone. And I might add that this important asset today is in jeopardy. Why? … Now, I have here the profits of Japanese companies, if you want to talk about greed. Here, Hitachi, Matsushita, Sanyo, Sony, TDK, Toshiba, Victor, all of whom make these VCRs. Do you know what their net profits were last year? $2.8 billion net profit.

  • Extrapolating opposition argument to the absurd and then refuting the absurd: For example:

    • ‘Napster is a tool to steal music. You think Napster is a good thing? You think something that destroys the livelihood of artists is a good thing? How can you be in favour of stealing food off an artist's table?’

    • ‘The internet is a haven for the four horsemen of the infocolypse – drug dealers, money launderers, terrorists and child pornographers. How can you be opposed to shutting it down? You are opposed to tackling the evils these people visit upon society. How can you be a friend to such horrendous criminals?’

  • Using sarcasm, innuendo, denigration and other forms of humour to belittle opponents: The Bush/Blair song is a good example of this. And there is some name calling in Chapter 9 when Lessig talks about ‘dinosaur firms’ and ‘old Soviets’; he admits that ‘Soviets’ is an unfair term. It is easier to give your audience a low opinion of the opposing advocate if you are funny – the humour diverts attention from the potentially poor basis of your argument. There are lots of variations on this tactic. One of the most common is where the advocate groups all opponents under one general heading. Once there, they can be labelled, on a spectrum from ‘loonies’ to ‘nice people who just don't understand’. The audience is then invited to conclude that their arguments are not worth taking into consideration.

  • Using jargon to confuse: With intellectual property being such a complex subject, the copyright wars are ripe for this tactic.

  • Making appeals to ‘experts’: Later in the unit I refer to Bruce Schneier and Ross Anderson as security experts, but if you were not familiar with the security profession or these individuals, you would have to take my word for that.

  • Using rhetorical questions: if you get your audience to subconsciously supply the answer invited by the question, they become more receptive to the views that follow as a consequence of the answer; to appreciate this, test the effect of taking the opposite answer to the one implied. A variation on the rhetorical question is the use of words and phrases which suggest that the audience should accept without question, e.g. ‘Obviously …’ or ‘Of course …’ or ‘It is clear that we all agree …’ or ‘There is no need to explain to this audience why …’

  • The dominant metaphor: US intellectual property scholar Jessica Litman says that if you're not happy with the way the rewards get divided, change the rhetoric. She suggests that copyright used to be seen as a bargain between the author and the public, and the focus was on maintaining the balance between the two. The idea of a bargain was then replaced by the notion of copyright as an incentive to create, the assumption being that as (financial) incentives increased, creative authorship would increase. The previous balance or bargain did not matter any more except to the degree that it affected incentives for authors. Then, in recent years, copyright became less about incentives for authors and more about copyright as a property that copyright owners are entitled to control; balance and incentives were subsumed except to the degree that they affected copyright owners' right to control what is rightfully theirs, leaving aside, Litman says, the question of what exactly ought to be rightfully theirs. She reveals her distaste for these developments at the end of Chapter 5 of her book Digital Copyright: ‘… this evolution in metaphors conceals an immense sleight of hand.’

  • Presenting evidence or apparent evidence to make it appear to point to a particular conclusion: This includes using carefully selected evidence while omitting contrary evidence.

  • Taking what someone says out of context: For example, a Times newspaper theatre critic might denigrate a show by saying, ‘This play set out to be a sophisticated, clever comedy and failed miserably.’ A poster outside the theatre might quote the critic and her newspaper by saying ‘“… a sophisticated, clever comedy …” – the Times’.

  • Avoiding giving evidence while suggesting that evidence is being given: e.g. creating the impression that evidence will be presented later to back up what has been claimed. This is a favourite tactic with politicians – they put out a vague policy statement, saying the details will come later, then when asked about the details at a later date they claim all the details were clearly included in the original policy statement and there is nothing further to add.

  • Non-sequitur – ‘it does not follow’: essentially drawing an illogical conclusion from sound data. Since the data are credible, the conclusion which follows closely is also accepted. The subtle exponent of the art will embed the illogical conclusion between two logical ones. A variation on this is the extended chain of non-sequiturs, where a string of consequences is proposed from acceptable premises; the sequence will usually be one that might arise out of the premises but not a necessary one. An example is David Blunkett's stance on the introduction of a national identity card. Mr Blunkett claims that although it will be compulsory for everyone to have a card, the card cannot be considered compulsory, since it will not be compulsory to carry it around all the time. Mr Blunkett also says that although it will have all the features of a national identity card, it cannot be called a national identity card because it is really an ‘entitlement card’. Get the rhetoric right if you want to win over your audience … ‘identity card’ = bad, ‘entitlement card’ = good.

  • Repetition: Repetition of a claim, periodically and frequently, over a long period of time can often lead to general acceptance of the claim as fact, even though it may have been discredited on numerous occasions. This is a tactic used extensively by so-called ‘historical revisionists’ – people who want to rewrite history. An example is those who deny the existence of the Holocaust.

One final note, which is particularly important with respect to the Net: check your sources. Relying on websites, institutions and individuals with a track record of credibility is a sound (though not infallible) start. Do not neglect your own knowledge and experience – just because a normally credible source says that something is red doesn't mean that the source is right, especially when you've seen the thing and know it is blue. Here are some questions to ask when considering using information from a website:

  • Does the person or organisation running the website have a track record of credibility?

  • What are their credentials?

  • Is the site well regarded by credible, reliable people and institutions?

  • Do the people running the site have an agenda, e.g. who is the site aimed at?

  • Where did the information on the site come from and is the source reliable?

  • Is the site well documented and the sources of information given?

  • Is it a government (.gov or .gov.uk), commercial (.com or .co.uk), educational (.edu or .ac.uk) or non-profit organisation (.org or .org.uk) site?

  • Are the claims on the site verifiable?

  • Are any of the above unfair tactics in evidence?

  • Can you verify the information on the site with other reliable sources?

  • Are the links on the site up to date and to other reliable sources?

  • Is the page kept up to date?

A layman's constitutional view is that what he likes is constitutional and that which he doesn't like is unconstitutional.

(Justice Hugo Black)

5.5 Fair use or dealing

‘Fair dealing’ gives legal immunity to someone in the UK who makes unauthorised use of a copyrighted work, provided they have a good reason for doing so. ‘Fair use’ is the equivalent in the US, although it is not technically identical.

We have already come across ‘fair use’ in The Future of Ideas. In Chapter 7, page 105, Lessig explained that copyright owners' control of their work is constitutionally limited in the US: ‘While a poet or author has the right to control copies of his or her work, that right is limited by the rights of “fair use”. Regardless of the will of the owners of a copyright, others have a defense against copyright infringement if their use of the copyrighted work is within the bounds of “fair use”. Quoting a bit of a poem to demonstrate how it scans, or making a copy of a chapter of a novel for one's own critical use – these are paradigmatic examples of use that is “fair” even if the copyright owner forbids it.’

‘Fair use’ and ‘fair dealing’ are terms that are much used and abused in the copyright wars. They are rather complex concepts legally and are evaluated on a case-by-case basis, which makes them wide open to interpretation and spin.

There are no fixed rules on what proportion of a work can be used – for example, what percentage or how many words or paragraphs. You are not allowed to use a ‘substantial part’ of a copyrighted work but the law does not define exactly what this means. It has been interpreted by the UK and US courts to mean ‘a qualitatively significant part’ or ‘a qualitatively substantial part’ of a work, even if that is only a small part of the work.

Despite the fact that there are no fixed legal rules on how much can be used, publishers will sometimes have their own rules for how much a person can use before they need to ask the publisher's permission, e.g. no more than 150 words from a novel or 300 words from a text book. This does not mean that the publisher's interpretation will be accepted if a case should come to court.

Terry Carroll's frequently asked questions on copyright contains four pages of small dense type on fair use. And that just covers fair use in the US. It will be different in other jurisdictions. In the UK the Intellectual Property Office has a number of web pages explaining the exceptions to copyright, including fair dealing.

One of the most famous US Supreme Court cases, Sony v Universal City Studios, dealt with the legality of video cassette recorders. Effectively, the question was whether it was fair use for consumers to tape TV programmes. The court made a split decision, with five justices for and four against. The original District Court had ruled in favour of Sony. The Appeal Court overturned this decision. Then the Supreme Court narrowly overturned the Appeal Court ruling in favour of fair use and Sony.

If legal experts at the highest levels cannot agree on what constitutes fair use, ordinary people don't have much chance. To quote Terry Carroll, ‘If all this sounds like hopeless confusion, you're not too far off. Often, whether a use is a fair use is a very subjective conclusion.’

This unit does not require you to become an expert in the legal interpretation of fair use or fair dealing. We do expect you to be aware that they are complex legal concepts that are open to interpretation and spin. They do, therefore, get abused in the copyright wars; sometimes deliberately, sometimes through genuine misunderstanding. We also expect you to have an informed lay person's idea of what fair use/dealing are, e.g. part of the copyright ballpark that copyright owners do not control.

The minute you read something you can't understand you can be almost sure it was drawn up by a lawyer.

(Will Rogers)

Further reading: ‘Copyright as Cudgel’ is an article in the Chronicle of Higher Education written by someone who agrees with Lessig's perspective on the copyright wars. It includes, in the bullet points towards the end of the piece, an accessible description of fair use, which may help to reinforce your understanding of the points made above.

5.6 Activity 4

In this section we have looked at some of the tactics that people use to persuade us of the merit of their arguments.

This activity should help you to:

  • appreciate some of the different opinions on both sides of the copyright debate;

  • get some practice in looking for the tactics of persuasion and the hidden values underlying an argument.

Activity 4

Below is a series of links to articles associated with the copyright wars, four in favour and four against expanding copyright to deal with the intellectual property issues thrown up by the internet.

Click on each link in turn and quickly decide if you think the websites where these articles are located might be credible sources of information.

When you have done this choose two of the articles and identify some examples of the tactics of persuasion you can find in each of them. The articles you choose should be from opposing sides of the debate.

Against the expansion of copyright:

  • The LawMeme Guide to Spider-Man and Star Wars Bootlegs

  • The Right to Read

  • The Internet Debacle – An Alternative View

  • Could Hollywood hack your PC?

In favour of the expansion of copyright:

  • Piracy hurts everyone both online and offline (link no longer available)

  • RIAA CEO Hilary Rosen's speech at Peer-to-Peer and Web Services Conference (link no longer available)

  • A ‘music for free’ mentality is challenging the future of the European recording industry (link no longer available)

  • Valenti testifies to studios’ desire to distribute movies online to consumers (link no longer available)

When you have completed the exercise, you might like to compare it with my answer.

Answer

Very quickly, looking at each of the websites in turn, against the questions for evaluating a source on the Web:

  • Does the person or organisation running the website have a track record of credibility?

  • What are their credentials?

  • Is the site well regarded by credible, reliable people and institutions?

  • Do the people running the site have an agenda, e.g. who is the site aimed at?

  • Where did the information on the site come from and is the source reliable?

  • Is the site well documented and the sources of information given?

  • Is it a government (.gov or .gov.uk), commercial (.com or .co.uk), educational (.edu or .ac.uk) or non-profit organisation (.org or .org.uk) site?

  • Are the claims on the site verifiable?

  • Are any of the unfair tactics in evidence?

  • Is the page kept up to date?

  • Can you verify the information on the site with other reliable sources?

  • Are the links on the site up to date and to other reliable sources?

The LawMeme Guide to Spider-Man and Star Wars Bootlegs – Yale Law School – potentially credible, sound credentials, well regarded, up to date and links to reliable sources.

The Right to Read – Appears to be the website of the legendary leader of the Free Software Foundation Richard Stallman; offers nine different translations of the article; potentially credible.

The Internet Debacle – An Alternative View – Janis Ian's site, industry insider for many years; potentially credible; claims verifiable – probably a reliable source.

Could Hollywood hack your PC? – A well-known online news site, CNet; and the author of the piece is a technology journalist with a good reputation for knowing the technology and the industry; potentially credible.

Piracy hurts everyone both online and offline – Another news site, author a known Democratic Party representative with an interest in the industry; potentially credible; article linked from RIAA site.

RIAA CEO Hilary Rosen's speech at Peer-to-Peer and Web Services Conference – Author CEO of the Recording Industry Association of America; knows the industry; potentially credible.

A ‘music for free’ mentality is challenging the future of the European recording industry – Author IFPI, knows the industry; potentially credible.

Valenti testifies to studios’ desire to distribute movies online to customers – Jack Valenti, President of the Motion Picture Association of America for over thirty years; knows the industry; potentially credible.

Regarding the critical analysis of the individual articles, I have done a partial critique of the LawMeme Spider-Man article as an example.

The author of the LawMeme piece makes some very important and well-argued points. All of the authors of the above articles have many legitimate issues that need to be addressed, for example:

  • Digital copies of movies shot surreptitiously with a camcorder in a cinema will be of poor quality.

  • The laws and practices to bring about Stallman's dystopian future have either been proposed or passed.

  • Artists' sales can go up when they make samples available on the Net.

  • Hollywood wants legal immunity for any damage done through employing ‘technological self help’ measures in order to protect their intellectual property.

  • Piracy does cause problems.

  • P2P technologies do present challenges to the music industry.

  • The IFPI have a right to represent their industry.

  • The movie studios do want to figure out how to make money out of the Net without imperilling their existing markets.

Yet all of them have succumbed to the use of unfair tactics in trying to get their message across. What does the LawMeme article suggest about the underlying values of the author, Ernest Miller? He seems to be a fan of new technologies. He also appears to have a number of disagreements with the film industry's perspective on piracy and how to deal with it. In the copyright wars, this article would suggest that his values seem to be broadly in line with Lessig's.

He uses statistics as a drunken man uses lamp posts – for support rather than illumination.

(Andrew Lang)

5.7 Summary and SAQs

This section has been about critically analysing the arguments in the copyright wars. There are a range of tactics employed in trying to persuade someone to a particular way of thinking. In summary, these include:

  • appealing to emotion and prejudice;

  • extrapolating opposition argument to the absurd and then refuting the absurd;

  • using sarcasm, innuendo, denigration and other forms of humour to belittle opponents;

  • grouping all opponents under one label, a category easy to dismiss;

  • using jargon to confuse;

  • making appeals to ‘experts’;

  • using rhetorical questions;

  • the dominant metaphor – if you're not happy with the way the rewards get divided, change the rhetoric;

  • presenting evidence or apparent evidence to make it appear to point to a particular conclusion;

  • using carefully selected evidence while omitting contrary evidence – this includes avoiding giving evidence whilst suggesting that evidence is being given;

  • non-sequitur – drawing an illogical conclusion from sound data.

On the Net, be confident of your sources.

Remember the first rule:

We are all conditioned and predisposed to believe certain stories more than others, because of our individual values.

So we need to be consciously aware of the inbuilt bias we bring to the material we are critically analysing and make allowances for it.

Section 5 has also covered Chapter 9 of The Future of Ideas, which briefly sets up the story of Part III of the book – how Lessig believes that established industries are using law, architecture and market power to launch a counter-revolution to control the internet.

5.7.1 Self-assessment Questions

These questions should help you test your understanding of what you have learnt so far. If you have difficulty with the questions it would be a good idea to go back over Section 5.

1. Who are the main protagonists in the copyright wars?

Answer

This is a question that was not explicitly covered in the material in this section. You will have spotted some of the main players, however, such as the movie, music and publishing industries on the copyright expansion side, and a loose coalition of the consumer electronics industries, civil rights groups, academics and librarian associations on the other side.

2. What do we mean by ‘facts’ and ‘values’?

Answer

A fact is something that one believes to be objectively true. A piece of gold is heavier than a similar-sized piece of aluminium. A fact can be measured by an objective measurement process.

Values are beliefs about what is good or bad. That one type of music is better than another; that thieves should be locked up; that everyone should attend religious services; that gifted children should be put on accelerated education programmes. Values deal with our perception of the way the world ought to be and have a very strong influence on what we are prepared to believe.

The Royal Commission on Environmental Pollution, in Chapter 7 of its 21st report, ‘Setting Environmental Standards’ (1998), defined values as follows:

We understand values to be beliefs, either individual or social, about what is important in life, and thus about the ends or objectives which should govern and shape public policies. Once formed such beliefs may be durable.

3. What is ‘fair use’?

Answer

It is a complex legal concept much quoted by the protagonists in the copyright wars. Jessica Litman, in her book Digital Copyright, defines fair use as: ‘a long standing legal privilege to make unauthorized use of a copyrighted work for good reason.’

"We must remember that in order for copyright to be the engine of free expression that its proponents so loudly claim, fair use to comment upon, criticize and annotate the works must be available. The authors of the copyright clause did not anticipate an understanding of copyright that only permits citizens to be passive consumers of copyrighted works."

(Ernest Miller)

Further reading: If you are interested in learning a bit more about the details of the copyright wars, two of the more readable books in the area, one from each side of the divide, are:

  • Digital Copyright by Jessica Litman, published by Prometheus Books, 2001.

  • The Illustrated Story of Copyright by Edward Samuels, published by Thomas Dunne Books, 2000.

5.8 Study guide

On completing this part of the unit, you should have:

  • read Section 5;

  • read Chapter 9 of The Future of Ideas;

  • completed Activity 4;

  • answered the self-assessment questions;

  • recorded your notes in your Learning Journal.

Section 6 covers the model of constraints on behaviour and some of the mechanisms of how the internet works.

6 Constraints on behaviour

6.1 Law, social norms, market forces and architecture

In his first book, Code and Other Laws of Cyberspace, Lawrence Lessig suggests that our behaviour is regulated by four main forces: law, social norms, market forces and architecture.

Figure 7: constraints on behaviour

6.1.1 Law

Law and legal regulations provide the framework through which governments prescribe what is acceptable behaviour and what is not. Law acts as a threat. If we don't follow the law there is a risk that we will be found out and punished.

For example, law regulates smoking to the degree that cigarettes are not supposed to be sold to children. If retailers sell cigarettes to children they can be prosecuted.

6.1.2 Social norms

Social norms dictate that we don't light a cigarette in someone else's house without asking permission. When I first came to the south of England to work, I didn't know I was not supposed to say hello to a stranger on a train. I was subject to suitably disdainful and horrified looks from my fellow passengers, who tried valiantly to ignore me. Having been normalised after 20 years, I can dish out the dirty looks with the best of them.

Social norms, like the law, punish deviation after the event.

6.1.3 Market forces

Market forces also regulate behaviour. Markets dictate that we don't get access to something unless we offer something of value in exchange. The price of cigarettes is potentially a constraint on a child's opportunity to smoke.

Unlike the law and social norms, market forces regulate at the time of the transaction. If children have no money, retailers will not sell them cigarettes.

Under the law, retailers must first be caught selling cigarettes to children, then they can be prosecuted. The punishment happens after the event. Likewise with social norms. The disdainful looks – and my fellow passengers' classification of me as someone who should be shunned for the duration of the train journey – happened after I tried to engage someone in conversation.

6.1.4 Architecture or built environment

‘Architecture’ or the built environment – i.e. how the physical world is – also regulates behaviour.

Like market forces, constraints on behaviour imposed by architecture happen when we are trying to engage in that behaviour. For example, if a building has steep steps at the entrance and no other way in, it is difficult for a wheelchair user to enter the building unaided.

The notion of architecture as a regulator is not new: the founders of New England meticulously laid out their towns so that the relationship of buildings to each other and the town square meant that the Puritan inhabitants could keep an eye on each other. For practising Puritans, at that time, allowing friends, family and the rest of the community to pry into their private lives was routine. Good behaviour in private was considered to be essential for society. However, it was believed that good behaviour would only be forthcoming if people watched each other closely.

This practice has been brought into the internet age by a company called NetAccountability, who were concerned that software filters were not effective in preventing access to pornography on the internet. NetAccountability launched a service in the autumn of 2002 under which people can sign up to have a morally upstanding friend or family member monitor their web surfing habits. The monitor receives regular comprehensive reports of the websites that person visits. The thinking is that if people are aware they're being watched, they will think twice about visiting inappropriate sites.

Napoleon III had Paris rebuilt so that the boulevards would be wide and there would be multiple routes across the city, making it hard for potential revolutionaries to barricade the streets; the wide boulevards also provided the artillery and cavalry with room to shoot.

A major airline noticed that passengers on Monday morning flights were frustrated with the length of time it took to retrieve their bags, so it started parking these flights further away from the baggage reclaim lounge. By the time the passengers got there, their bags had arrived. The complaints stopped.

Prolific 20th-century New York City planner Robert Moses built highway bridges along roads to the parks and beaches in Long Island that were too low for buses to pass under. Hence the parks and beaches were accessible only to car owners – many of them white middle class. Poor people without cars, mainly African Americans and other minorities, would be forced to use other parks and beaches accessible by bus. Thus social relations between black and white people were regulated, an example of discriminatory regulation through architecture.

It should be noted that Moses vehemently denied that there was any racist intent on his part. In one sense, his intent is irrelevant. The architecture regulated behaviour whether he intended it to or not. Recall that complex systems often have unintended emergent properties – I referred to this earlier as the law of unintended consequences. Changing things in complex systems results in unintended consequences, sometimes negative, sometimes positive. Irrespective of the intent of the architect, therefore, architecture can regulate behaviour in ways not originally envisaged.

Constraints of the context – the built environment or the architecture – change or regulate behaviour in all these cases. Lessig often refers to the built environment or architecture as ‘code’. Architecture is also self-regulating – the steep steps I referred to earlier get in the wheelchair user's way because they are steep and they are steps! Laws, norms and markets can only constrain when a ‘gatekeeper’ chooses to use the constraints they impose.

6.1.5 Law, norms, market forces and architecture all regulate behaviour

Law, norms, market forces and architecture together set the constraints on what we can or cannot do. These four forms of regulation apply whether the behaviour being regulated is that of individuals, groups, organisations or states.

It is important to consider these four forms of regulation together because they interact and can compete. One can reinforce or undermine another. However, the regulators of behaviour we focus on in this course are architecture and law. Market forces also come into it implicitly, since The Future of Ideas argues that changes in architecture and laws are driven by commerce. Changes in internet architecture, backed up by changes in law, are the tools for the counter-revolution covered in Section 7.

I have every sympathy with the American who was so horrified at what he had read of the effects of smoking that he gave up reading.

(Lord Conesford)

6.2 Poisoned flowers – understanding architecture as a regulator

When Lessig uses the term ‘architecture’ in relation to the real world he is referring to more than just buildings and bridges. In this context ‘architecture’ means the ‘laws of nature’. The laws of nature – the architecture – of a world can constrain or enable behaviour. For example, gravity ensures we do not all get jettisoned off into space as the earth spins on its axis. Robert Moses' bridges are a clear example of more conventional architecture imposing constraints on behaviour. If you are not used to thinking of architecture as a regulator of behaviour you may find the following exercise useful. It will help you to:

  • appreciate the force of architecture as a regulator of behaviour;

  • understand how architecture can eliminate the need for law in certain circumstances.

Think of a world where we can make up the laws of nature as we go along, like they do in cartoons on television – gravity switches off when the Road Runner eats the grain under the heavy weight and switches on again when Wile E. Coyote moves under the weight. The good cartoon character draws a door on a wall and opens it to escape. When the bad guy tries to follow he goes splat into the wall. Now the poisoned flowers:

Imagine a cartoon world in which a woman, Martha, grows flowers. These are beautiful to look at and have a delightful scent. They are also poisonous and if touched will kill. Some petals fall onto a neighbour's garden. The neighbour's dog eats them and dies. The neighbour gets angry.

What can we do about it?

  • 1. Bring the dog back to life. We can make up the laws of nature as we go along.

  • Let's introduce another constraint here: although we can change the laws of nature we are limited in how much we can do and we cannot change the past. Therefore the dog stays dead. Very sad.

  • 2. Create a dog that can't be poisoned?

  • That protects the new dog but what about the neighbour or anyone else?

  • 3. Ok, then make the flowers lose their poisonous nature when they leave Martha's garden.

  • We have another problem here: Martha makes her living selling the flowers. Their poison is the primary feature that makes them attractive to her customers.

  • 4. Ok then, why not make the plants poisonous only when in the possession of the person who bought them or who legitimately owns them? If they're stolen or the petals are blown away then they lose their poisonous nature.

We have now solved two problems instead of one. The laws of nature, in this cartoon world, can be designed to mean that stolen flowers lose their value the instant a thief gets them. Theft is a change of possession but not a legitimate change of ownership. The dispute between Martha and her neighbour is resolved NOT by changing behaviour, but by changing the laws of nature.

This cartoon place is a world where problems can be programmed away. It is a bit like the internet in that regard. The important things to ask about such a place (especially cyberspace) are:

  1. What does it mean to live in a world where problems can be programmed away?

  2. When, in that world, should we program problems away?

As well as the questions and answers above, try to develop some of your own with colleagues and/or with readers in the unit forum. We also recommend that you discuss this with your friends and family. You can give one person the ability to decide which laws of nature you can change and which you cannot. It is amazing the number of different alleys this takes you down and it can be highly entertaining, particularly for those who enjoy a good argument!

It may be useful to think of an example of architecture controlling behaviour on the internet. Lessig has taught at a number of universities including Chicago and Harvard. In his first book, Code and Other Laws of Cyberspace, he tells the story of the different approaches that these two universities took to the internet. When he was at Chicago, between 1991 and 1996, academics could connect their computers to jacks throughout the university campus. When the computer was connected, the user had complete access to the internet – open, anonymous and free. Lessig gives the credit for this to the administrator who made the decision about the kind of network the University of Chicago should have. The administrator was a civil rights scholar who valued the ability to communicate anonymously.

At Harvard, however, computers had to be registered and approved. Communications were monitored and tracked to individual computers. There was a code of practice outlining acceptable use, which users had to sign up to, and only approved members of the university could get their computers connected. As at Chicago, this arrangement also arose from the decision of a university administrator, but one with a different set of values to the administrator at Chicago, and one who was perhaps more focused on management control of university resources.

In recent years Lessig has moved to Stanford, where there is yet another distinct set-up in relation to network access. On one occasion he was locked out of the network by a systems administrator who had detected file-sharing software on his machine. He was using the file-sharing software to distribute his own notes to his students.

Note: The ‘poisoned flowers’ story above is based on an extract from Chapter 2 of Lessig's first book, Code and Other Laws of Cyberspace.

As far as the laws of mathematics relate to reality they are not certain; and as far as they are certain, they do not refer to reality.

(Albert Einstein)

6.3 Circuit and packet switching

Here we will look in a little more detail at the mechanics of the internet end-to-end architecture that we have been suggesting is so important. If you wish to, you can review the Warriors of the Net movie now.

6.3.1 Circuit switching

The public telephone network is officially known as the public switched telephone network (PSTN). The function of the network is simply to connect the wires of two telephones (or compatible devices such as fax machines or modems), so that sounds coming from one end are transmitted to the other.

This is called a ‘circuit-switched’ (or more simply a ‘switched’) network architecture. The way in which this is done has changed over time – human switchboard operators were replaced by mechanical processes and later by computerised switching equipment; meanwhile, optical (glass) fibre has replaced much of the copper wiring.

Figure 8: circuit-switched (or ‘switched’) network architecture

While this system is very reliable – just think how rarely the system fails to connect you when you have dialled correctly – it is also extremely inefficient and expensive because the connection is made at the beginning of a conversation, fax transmission or modem session, and is maintained until the connection is terminated. This means that a certain portion of the network is reserved exclusively for that conversation whether or not communication is actually taking place at any given moment. If one party puts down the phone handset (i.e. without actually hanging up and breaking the connection) or is silent, or neither computer sends nor receives data for a period of time (as is the case when using the internet), the circuit and the ports on the phone switches between the two devices are unavailable for other activity even though they are not being used at that particular moment. Since it is estimated that up to 50 per cent of a typical voice conversation is in fact silence, clearly a tremendous amount of network capacity is wasted. Put another way, a company must build double the network it really needs for a given number of simultaneous calls, at double the cost.
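As a rough illustration of that waste, here is a back-of-the-envelope sketch in Python. The 1,000 simultaneous calls and the 64 kbit/s reserved per circuit are illustrative assumptions; the 50 per cent silence figure is the estimate quoted above.

    # Capacity a circuit-switched network reserves versus the capacity actually
    # carrying speech. All of the figures below are illustrative assumptions.
    simultaneous_calls = 1000
    kbps_per_circuit = 64          # reserved for each call, whether used or not
    silence_fraction = 0.5         # the estimate quoted above

    reserved = simultaneous_calls * kbps_per_circuit
    carrying_speech = reserved * (1 - silence_fraction)

    print(f"Reserved capacity:        {reserved} kbit/s")
    print(f"Capacity carrying speech: {carrying_speech:.0f} kbit/s")
    print(f"Idle but unavailable:     {reserved - carrying_speech:.0f} kbit/s")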

6.3.2 Packet switching

Instead of keeping a connection open for the entire length of the call, packet networks break the digital stream of ones and zeros into chunks of the same length. These chunks, or ‘packets’, are then put into the computer equivalent of an envelope, with some information such as the origin and destination, or ‘address’, of the packet, and a serial number that indicates the packet's ‘place in line’. In place of switches, which merely connect and disconnect circuits, packet networks use routers – computers that read the address of a packet and pass it to another router closer to the destination. At the destination, a few thousandths of a second later, the packets are received, reassembled in the correct order and converted back into the original message. Here is an illustration of how it works.

Figure 9: packet switching
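To make the envelope analogy concrete, here is a minimal Python sketch of the same idea: the message is cut into fixed-size chunks, each chunk is labelled with toy source and destination addresses and a serial number, the ‘network’ delivers the packets in a jumbled order, and the receiver uses the serial numbers to put the message back together. The addresses and the packet size are invented purely for the illustration.

    import random

    MESSAGE = "Packets may arrive out of order, but serial numbers let us rebuild the message."
    PACKET_SIZE = 8   # arbitrary chunk size for the illustration

    # Sender: break the message into chunks and label each one (the 'envelope').
    chunks = [MESSAGE[i:i + PACKET_SIZE] for i in range(0, len(MESSAGE), PACKET_SIZE)]
    packets = [
        {"src": "10.0.0.1", "dst": "10.0.0.2", "seq": n, "data": chunk}
        for n, chunk in enumerate(chunks)
    ]

    # The network may deliver the packets in any order.
    random.shuffle(packets)

    # Receiver: sort by serial number and join the chunks back together.
    reassembled = "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))
    assert reassembled == MESSAGE
    print(reassembled)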

6.4 Circuit switching (phones) versus packet switching (internet)

The Future of Ideas (pp. 29–34) explains how Paul Baran developed the concept of packet switching. Note that Donald Watts Davies developed the concept independently in the UK.

Read pages 29–34 of The Future of Ideas, linked below, and note down what advantages packet switching holds over the conventional circuit-switching approach of the phone networks.

Click 'view document' to open the extract from The Future of Ideas.

6.4.1 Sending a message over a network as a series of packets

The routers in a packet-switched network are permanently connected via high-speed lines. This may seem expensive at first sight, but it makes sense economically (and technically) if the network is heavily used, i.e. if it is effectively flooded with packets.

Here is an animation that looks at the different types of switching, and the comparative advantages and disadvantages of each.


"An important scientific innovation rarely makes its way by gradually winning over and converting its opponents. What does happen is that its opponents gradually die out, and that the growing generation is familiarized with the ideas from the beginning."

(Max Planck)

6.5 Transmission control protocol and internet protocol

The Future of Ideas mentions transmission control protocol and internet protocol (TCP/IP) in a number of places.

We can see from the Warriors of the Net and packet-switching animations that the internet is just like an enormous game of ‘pass the packet’ played by many computers. TCP/IP essentially governs that game – the computers speak TCP/IP to each other.

6.5.1 TCP/IP key lessons

The key things to note about TCP/IP are that:

  • anyone can join in as long as they speak the lingo – TCP/IP;

  • the networks are dumb, the applications at the ends are clever;

  • the ‘end-to-end’ principle operates.

All the intelligence resides in the computers at the ends of the network. The network is basically dumb and indiscriminately playing pass the packet, hence the neutral end-to-end architecture which constitutes an innovation commons. It is this end-to-end architecture, built with TCP/IP protocols, that undermines the ability of anyone to control the internet.

We can contrast this with examples from the telephone industry such as the Hush-a-Phone (see page 30 of The Future of Ideas). In 1956 AT&T squashed the distribution of the Hush-a-Phone, a bit of plastic which could be attached to a phone receiver to cut out background noise. Regulations did not allow any foreign attachments to the phone network without AT&T's permission.

Powerful gatekeepers can control innovation. End-to-end does not lend itself to central control.

I hate quotations.

(Ralph Waldo Emerson)

6.6 How TCP/IP works

What are protocols? And, more specifically, what is TCP/IP?

A protocol is a set of rules which determine how two or more entities interact and communicate. For example, in real life, when two English people are introduced to each other they will shake hands and say something like ‘How do you do?’ But if two Japanese people meet they will bow to one another. They have different protocols governing social behaviour. Similarly, in some societies it is considered aggressive to look people directly in the eye, while in others not looking them in the eye is taken as a sign of evasiveness. If people don't understand the protocols which govern a particular social interaction, all kinds of misunderstandings can result.

Much the same applies to computers. When two machines wish to communicate they also need a set of rules to govern how it is done. Protocols provide these rules. The Net is governed by scores of such protocols. To read this page, for example, you have used the Hypertext Transfer Protocol (HTTP) to request and receive it, and TCP/IP to transfer the data that make up the request and the page.
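As a small illustration of those protocols in action, the Python sketch below opens an ordinary TCP connection (TCP/IP doing the transport work) and then speaks HTTP over it by hand. The host name is just a convenient public test site; real web browsers do considerably more than this.

    import socket

    host = "example.com"   # a convenient public test host

    # TCP/IP: open a reliable byte-stream connection to port 80 on the host.
    with socket.create_connection((host, 80)) as conn:
        # HTTP: the application-level protocol is just lines of text sent over TCP.
        request = (
            "GET / HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n"
        )
        conn.sendall(request.encode("ascii"))

        response = b""
        while True:
            chunk = conn.recv(4096)   # TCP hands back the reassembled byte stream
            if not chunk:
                break
            response += chunk

    # The first line of the reply, e.g. 'HTTP/1.1 200 OK'
    print(response.split(b"\r\n", 1)[0].decode())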

Every computer connected to the internet has a unique IP address, or IP number, such as 131.111.163.59. Permanently connected computers, such as those in a university computer laboratory, will have permanent addresses. Those connected via a modem will have a temporary (but still unique) IP address assigned each time the user dials into their ISP. We've seen in the Warriors of the Net animation that the IP program (‘Mr IP’) sticks the address of the target computer on each packet. TCP tells the computer how the information transmitted gets disassembled into packets at the sending computer and reassembled at the receiving computer.

6.6.1 TCP/IP at the code layer

Engineers represent communications protocols (including internet protocols) in layers, rather like Lessig's three-layers model of the internet. In reality the operation of the internet is slightly more complex than the three-layers model might lead us to believe. To look at TCP/IP, we are going to refine Lessig's model further and consider TCP and IP as operating at two different levels, or sub-layers, within the code/logical layer: IP in a lower ‘network’ sub-layer and TCP in a ‘transport’ sub-layer sitting above it.

Packets travel vertically up and down through the layers. The various protocols which operate at each level do their stuff with the packets as they arrive and then pass them on to the next layer/protocol. For this reason the TCP/IP software running on a computer is often called a ‘TCP/IP stack’.

The network (IP) layer figures out how to get packets to their destination. This is where IP lives. It gives no guarantees about whether packets will get through, it just decides where they will be sent. With incoming packets, on a receiving computer, it checks to see if a packet is corrupted and, if it is, discards it. If the packet is OK the network layer passes it on to the transport layer.

The transport layer is where TCP lives. Its job is to ensure the reliability and integrity of messages, and to process them to and from the adjoining layers. So, for example, on the receiving computer, TCP checks all the packets in, uses the serial numbers in their headers to reassemble them in the correct order, asks for any that have been lost in the post to be re-transmitted, and then passes the assembled message to the application layer for display on the computer screen.

6.6.2 How a stack works

A TCP/IP stack works by passing packets up and down from layer to layer. Each layer does something to the packet in order to achieve its allotted purpose. Mostly this involves adding headers to, or stripping headers from, packets.

As a simple analogy, think of what courier services like DHL, FedEx or UPS do when you give them a parcel to deliver. They immediately put it into one of their specially designed envelopes or containers and then send it through their system to its destination. But suppose that for some crazy reason you specified that you wanted DHL, FedEx and UPS to transport your parcel in relays.

First DHL would place the parcel in one of their envelopes and deliver it to FedEx. They would then insert the (DHL) parcel into a FedEx envelope and pass it to UPS who would … well, you get the idea. And when the parcel eventually arrived at its destination, the whole process would go into reverse.
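Here is a minimal Python sketch of that relay in stack terms: on the way down, each layer wraps the data it is given in its own header; on the way back up, the headers are stripped off in reverse order. The header formats and addresses are invented purely for the illustration and are far simpler than real TCP and IP headers.

    # Toy encapsulation: each layer adds its own header on the way down the stack
    # and strips it off again on the way up.
    def tcp_wrap(payload: bytes, seq: int) -> bytes:
        return f"TCP|seq={seq}|".encode() + payload

    def ip_wrap(segment: bytes, src: str, dst: str) -> bytes:
        return f"IP|src={src}|dst={dst}|".encode() + segment

    # Sending computer: application data gains a TCP header, then an IP header.
    app_data = b"hello from the application layer"
    segment = tcp_wrap(app_data, seq=1)
    packet = ip_wrap(segment, src="192.0.2.1", dst="198.51.100.7")
    print(packet)

    # Receiving computer: the headers come off in reverse order.
    segment_in = packet.split(b"|", 3)[3]    # IP layer strips its header
    data_in = segment_in.split(b"|", 2)[2]   # TCP layer strips its header
    print(data_in)                           # b'hello from the application layer'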

It might help you to understand more fully how TCP/IP works if you look at the following animation. This explains the TCP/IP protocols and, in particular, what happens within the transport and network layers.


6.7 The importance of end to end

The end-to-end (e2e) design was first described by David P. Reed, Jerome H. Saltzer and David D. Clark in 1981. Over 20 years on they still defend the importance of end to end as a design that leads to a more flexible and scalable architecture. The principle is to keep the intelligence at the ends of the network and keep the network itself simple.

We have already seen some of the mechanics of how this works. We have also seen that architecture matters. Robert Moses' low bridges regulated social interactions between different ethnic groups whether he intended them to or not.

Internet architecture matters. An e2e design means that:

  • Innovations only need to connect to the network to run. Permission from a network owner or other central authority is not required – any more than permission would be required from the electricity company to plug in a new kind of electrical device.

  • The network owner cannot discriminate against specific data packets, slowing some down and speeding others up, because the network is simple. It doesn't provide the facility to discriminate. It shunts packets from point to point roughly in the direction of their ultimate destination, until all the packets (or their replacements in the case of those that don't make it) arrive and get reassembled into the original file. (A minimal sketch of this ‘dumb’ forwarding follows this list.)

  • As David Reed has said, e2e does not ‘restrict our using some new underlying transport technology that turned out to be good in the future.’ Since e2e is not optimised for any existing use the network is open to innovation not originally conceived of, such as the World Wide Web.
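The point about indiscriminate forwarding can be sketched in a few lines of Python: the ‘dumb’ router below consults only the destination address on each packet and never looks at who sent it, which application it belongs to or what the payload contains. The addresses and routing table are invented for the illustration.

    # A 'dumb' router: it consults only the destination address, so it has no
    # mechanism for favouring or throttling particular senders, applications
    # or content.
    ROUTING_TABLE = {
        "198.51.100.0/24": "link-A",
        "203.0.113.0/24": "link-B",
    }

    def next_link(packet: dict) -> str:
        prefix = ".".join(packet["dst"].split(".")[:3]) + ".0/24"   # crude /24 prefix match
        return ROUTING_TABLE.get(prefix, "default-link")

    packet = {"src": "192.0.2.1", "dst": "203.0.113.9", "data": b"could be anything"}
    print(next_link(packet))   # link-B, chosen without ever inspecting packet["data"]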

End to end is a design choice. The original network architects chose it specifically because they did not want to restrict possible future uses. As Lessig says, the network would not control how it would grow, but innovators with new applications would. Tim Berners-Lee has also said of the World Wide Web that to be successful it needed to grow in an unlimited way. Allowing some form of central control would have led to a bottleneck as the Web got bigger.

The telephone network is optimised for phone use. It is an intelligent network to that extent. It also means that it can't easily be used for new applications not originally thought of because changes in one area have all sorts of knock-on effects on the system. The internet grew on the telephone wires. The physical layer was controlled. The telephone companies were prevented by law, though, from controlling internet applications run over their networks. Hence the end-to-end disabling of control was extended into the physical layer, in this case by law.

Lessig argues that e2e at the code layer makes the internet an innovation commons. It allows a broad range of development and facilitates the disruptive innovation we talked about in Section 2. Established interests, commercial or political, are not good at facilitating disruptive innovation, and concentrated control does not often produce disruptive technologies or ideas. End to end provides a route around established interests for creators and innovators.

"The person who says it can't be done should not interupt the person doing it."

(Chinese proverb)

6.8 Activity 5

The aim of this activity is to give you a personal feel for the technology of IP addresses and packets in action.

This activity will help you to:

  • see the unique IP address assigned to your computer when you are logged onto the internet;

  • look at a server which traces the route taken by a packet sent from your computer to another on the internet;

  • conduct a small experiment to see how long it takes to get a packet from your computer to others elsewhere on the internet.

There is some software that can provide a fascinating graphical illustration of the route taken by test packets during transmission from one end of a TCP/IP network to the other (in other words, from your computer to a chosen destination). This service is provided by Traceroute Web servers. You are going to connect to one and see what information you can find about the route taken by packets of information transmitted across the internet.

The one I want you to try is a particularly easy-to-use service providing a clear graphical traceroute – the VisualRoute server in Richmond, Surrey.

Activity 5

Connect to the VisualRoute server.

Note: when you use this service for the first time you have to register. There is no cost involved.

When you connect to the VisualRoute server website, the IP address of your own computer is displayed in the (to) box. It is a numeric address such as ‘131.111.163.59’. This is the unique internet address currently assigned to your computer. No other computer on the Net can have this address, and when you send out a packet, the IP layer in your TCP/IP stack places this number in the ‘Source Address’ field in the header which it places on the packet. Make a note of the IP address of your computer.

Note: If you have time, it would be worth repeating this activity the next time you log on to your computer. You may find that your computer has been given a different IP address, rather than keeping a fixed (static) one. Most ISPs have a batch of IP addresses assigned to them, and their server allocates addresses from this batch as computers dial in.
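If you would like to check your machine's current address locally as well, the short Python sketch below is one common way to do it: it opens a UDP socket towards a well-known public address (no data is actually sent) and asks the operating system which local address it would use for that route.

    import socket

    # Ask the operating system which local IP address it would use to reach a
    # public destination. Connecting a UDP socket sends no packets; it simply
    # selects a route, so we can read the chosen local address back.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("8.8.8.8", 80))   # 8.8.8.8 is a well-known public DNS server
        print(s.getsockname()[0])    # e.g. 192.168.1.23 on a typical home network

Note that on a home network behind a router this will usually show the private address assigned to your machine, which may differ from the public address reported by the VisualRoute server.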

Next, click on the start button beside the box with your IP address in it.

The software then analyses your connection to the VisualRoute server. Clicking on the “Summary”, “Table” and other tabs gives you a range of useful information on your connectivity including:

  • the number and IP addresses of all the separate computers involved in passing on the test packets to their eventual destination;

  • the percentage loss of information;

  • the locations of the various nodes and networks involved;

  • the time in milliseconds for each ‘hop’;

  • a map showing the route of the packets across the world.

Now enter the URL of a website in the UK or USA to which you would like VisualRoute to send test packets (for example, the New York Times server would be entered by typing ‘www.nyt.com’) and click on the start button to the right of the URL entry box. The software analyses VisualRoute’s connection to the chosen website. This information can be accessed by clicking on the “Summary”, “Table” and other tabs as before.

If you have the time and inclination, it would be worth repeating this activity at different times of the day, especially early in the morning UK time (before the USA goes to work) and mid-afternoon in the UK (when the USA is busy). Compare the time taken and any differences in the routes used.

You could also repeat the process for other addresses across the world, or for a target address in the UK, such as the Open University or the BBC.
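If you cannot access the VisualRoute service, you can get a rough feel for the same ideas from a few lines of code. The following is a minimal sketch in Python (purely illustrative and not part of the activity or of the VisualRoute tool; the host names are just examples, and the ‘8.8.8.8’ address is simply a convenient public address). It shows the IP address your machine is currently using and times TCP connections to a few well-known web servers, giving a crude sense of the millisecond figures a traceroute reports.

# A minimal sketch: show the local IP address in use and time TCP connections
# to a few servers. Standard library only.
import socket
import time

def local_ip():
    # 'Connecting' a UDP socket to a public address reveals which local
    # interface address the operating system would use; nothing is transmitted.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    finally:
        s.close()

def connect_time_ms(host, port=80):
    # Time a single TCP connection set-up; crude, but like traceroute timings
    # it varies with distance and with how busy the network is.
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    print("Local IP address:", local_ip())
    for host in ("www.bbc.co.uk", "www.open.ac.uk", "www.nytimes.com"):
        ip = socket.gethostbyname(host)
        print(f"{host} ({ip}): {connect_time_ms(host):.1f} ms to connect")

Note that behind a home router the address printed will usually be a private one; the VisualRoute page shows the public address your ISP has allocated to your connection. Running the loop at different times of day mirrors the morning/afternoon comparison suggested above.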

Note: The VisualRoute utility was tested and found to work in Microsoft Internet Explorer version 6. It did not work with the Firefox browser version 3.0.5.

6.9 Summary and SAQs

In this section we have looked at Lessig's model of the four constraints on our behaviour: law, social norms, market forces and architecture.

Figure 10

We have spelled out the importance of law and architecture in relation to the ideas in this course and looked at some stark examples of architecture as a regulator. The poisoned flowers story explained the wider meaning of architecture in the context of the internet. Essentially, the architecture of the internet determines the laws of nature on the internet.

An e2e ‘innovation commons’ at the content and code/logical layers made it impossible for anybody to control what anybody else did on the network. There was no centralised controller or gatekeeper who could stop someone from trying out their new ideas. People shared information and ideas, and innovation happened.

It happened because many of the resources used in internet innovation were shared freely and because the environment (architecture and law) was right. In the case of the architecture, it was an innovation commons because of its e2e design. e2e meant that intelligence was kept at the ends of the network and the network itself remained simple. This in turn meant that:

  • the network was open to innovation;

  • innovators didn't need permission to innovate – anyone could join in as long as they used TCP/IP;

  • network owners could not discriminate against certain users because the network does not provide the tools to do so – the networks are dumb; the applications at the ends are clever.

We have looked at circuit and packet switching and TCP/IP in order to understand more fully how the architecture actually works.

6.9.1 Self-assessment Questions

1. What do we mean by the term ‘architecture’ and why is it important?

Answer

‘Architecture’ is used to mean structure or built environment or the laws of nature of a world. Often when Lessig uses the term ‘code’ he is referring to the architecture of the internet. Architecture is important because it is one of the key regulators of behaviour, as demonstrated by Robert Moses' highway bridges. In the case of the internet the end-to-end architecture is the central feature which makes the internet an innovation commons.

2. What is TCP/IP?

Answer

TCP (transmission control protocol) manages the disassembly and reassembly of messages into manageable-sized packets, and ensures the reliability and integrity of messages. IP (internet protocol) handles the address labels at the sending end and checks for corrupt packets at the receiving end.
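To make this division of labour concrete, here is a minimal sketch in Python (an illustration only, not part of the course materials; the host name is just an example). It fetches a web page over a plain TCP connection. The program at the end of the network simply writes a request and reads the reply; TCP, inside the operating system, breaks the data into packets, retransmits any that go missing and reassembles them in order, while IP addresses and routes each individual packet. None of that machinery is visible to the application, which is exactly the point of the end-to-end summary above: the intelligence sits in the programs at the edges, and the network in the middle just moves packets.

# A minimal sketch: fetch a page over a raw TCP socket. The application sees
# only a reliable stream of bytes; packetisation, retransmission and reassembly
# are TCP's job, and addressing/routing of each packet is IP's job.
import socket

HOST = "www.open.ac.uk"   # example host; any public web server would do

# create_connection resolves the name to an IP address and opens a TCP
# connection to port 80, the conventional web port.
with socket.create_connection((HOST, 80), timeout=10) as conn:
    request = ("GET / HTTP/1.1\r\n"
               f"Host: {HOST}\r\n"
               "Connection: close\r\n\r\n")
    conn.sendall(request.encode("ascii"))    # TCP splits this into packets as needed
    reply = b""
    while True:
        chunk = conn.recv(4096)              # bytes arrive back in order, gaps filled in
        if not chunk:
            break
        reply += chunk

print(reply.split(b"\r\n", 1)[0].decode())   # print just the HTTP status line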

3. What are the four constraints on behaviour?

Answer

Law, social norms, market forces and architecture. Lessig believes that law and architecture are the most important of these constraints in the case of the internet.

"The bird of paradise alights only on the hand that does not grasp."

(John Berry)

Make sure you can answer these three questions relating to Section 6 before you continue:

  • What are the constraints that affect behaviour?

  • What does Lessig mean by ‘architecture’?

  • What is the architecture of the Net?

Record your thoughts on this in your Learning Journal.

6.10 Study guide

On completing this part of the unit, you should have:

  • read Section 6;

  • completed Activity 5;

  • answered the self-assessment questions;

  • recorded your notes in your Learning Journal.

Things start to get slightly more heavily loaded again in the next section. Section 7 deals with Lessig's revolution and counter-revolution through law and architecture. There's quite a lot of material, but don't be put off by the volume, as much of it is optional reading that gives extra information on some of the legal cases.

7 The counter-revolution through law and architecture

7.1 Introduction

The Future of Ideas refers, from the early pages, to the revolutionary threat that internet innovation has created to established industries, and the industries' reaction to this threat in order to protect their interests – the counter-revolution. This section will look in some detail at how law and architecture have been used as the tools of this counter-revolution. It will include specific examples and cases in the battle for control of the Net, some of which are noted in the layers diagram below.

The counter-revolution changes the balance, at each layer, between resources that are controlled and those that are available as a commons. Some control, for example, gets introduced at the code layer, which initially existed as a commons.

Section 6 covered a model of the four constraints on behaviour – law, social norms, market forces and architecture – focusing mainly on architecture. This section will focus on the influence of law and architecture, but first we will examine generally how the four constraints interact with each other.

No one promised us tomorrow.

(Hawaiian proverb)

7.2 Law and the interactions of the regulators of behaviour

We looked at the model of the constraints on behaviour shown below in Section 6. We also noted in the context of innovation and the Net that law and architecture are the most important of the regulators.

Figure 11: constraints on behaviour

We considered a government's power to regulate behaviour through law and looked at the example of a retailer constrained from selling cigarettes to children. The law forbids such sales, and retailers comply to the extent that they fear being caught and prosecuted.

This is a little simplistic, though. Surely many retailers also comply with the law because they think it is wrong to sell cigarettes to children. Social norms apply here too. The four regulators of behaviour interact and can reinforce or undermine each other.

Figure 12: interaction between the four regulators of behaviour

When we try to model this, our diagram gets much messier. Yet even this diagram does not demonstrate the complexity of the interactions between the regulators. Depending on the situation, the degree of influence of each of the regulators changes. Let's take some examples of where law is the most powerful regulator.

Figure 13: law as the most powerful regulator

The diagram is now less complicated but only models one situation, where law is the most influential. Neither does it consider the complex interactions of all four regulators. Let's look at some examples of where the law influences the other regulators.

Education is one example. It is compulsory for children in the UK and the US to be educated. It is possible for parents to teach their children at home, but most go through a formal schooling system. This schooling system values certain behaviours above others and hence children get taught social norms. Those children who do not conform to these norms have a hard time. Here the law is indirectly influencing behaviour by directly influencing social norms.

Can the law influence market forces? One example is the anti-trust laws under which Microsoft was found liable for anti-competitive practices. Critics of the power of such laws might suggest that the remedy decision by Judge Colleen Kollar-Kotelly had little effect on Microsoft's practices. Critics might also suggest that governments are better placed to influence market forces through their spending decisions than through law. Nevertheless, the anti-trust laws are designed to influence market forces.

Can the law influence architecture? Yes. One example in relation to conventional architecture is the legislation in the UK outlawing discrimination on disability grounds. Under the Special Educational Needs and Disability Act 2001 (SENDA), local educational authorities will have a duty to improve ‘the physical environment of the schools for the purpose of increasing the extent to which disabled pupils are able to take advantage of education.’ Educational institutions will need to be physically accessible, e.g. via lifts and ramps if necessary.

What about technical architecture? The Communications Assistance for Law Enforcement Act (CALEA) was passed in 1994 in the US. Telecommunications companies are obliged under the Act to change the architecture of their systems to make phone tapping easier for law enforcement agencies. In the UK the Regulation of Investigatory Powers Act 2000 means that internet service providers can be forced to install equipment to facilitate surveillance by law enforcement authorities.

In March 2004 US law enforcement agencies petitioned the Federal Communications Commission (FCC) to expand CALEA-like surveillance powers to broadband internet networks. As with the 1994 Act this would mean redesigning networks to be more surveillance friendly. Companies such as AOL Time Warner have already developed these kinds of networks.

In August 2004 the FCC ruled that broadband and internet phone services need to be CALEA compliant. This means further architectural changes to networks that are already constructed to facilitate central control and may have implications for the kind of innovation that Lessig attributes to the end-to-end features of the original Net.

Further reading:

‘Intellectual property is theft. Ideas are for sharing.’

‘Politics, language and the Eldred decision.’

7.3 Curbing the Web

The World Wide Web is a disruptive technology. It enabled a phenomenal proliferation of publications on a global basis, vastly increasing the ‘information space’ in which we live. Nobody knows how big the Web is, but at the time of writing (Autumn 2004), the most popular search engine (Google) is claiming to index over four billion pages. More importantly, this explosion in the volume of publication has bypassed, and in many cases undermined, the traditional methods by which access to publication was controlled. In fact the only historical parallels to Web publishing were pamphlets, samizdat (government-banned literature in the Soviet Union) and other ‘unofficial’ methods of circulating information via paper in the past. But these older methods of ‘unofficial’ publication were intrinsically vulnerable to censorship and limited to small circulations and specific geographical locations.

All of this changed with the Web. Suddenly, anyone who was capable of writing a Web page could, in principle at least, become a global publisher. Similarly for any institution that wanted to make information available to the public. The editorial, publishing and official ‘gatekeepers’ who hitherto had controlled access to publication media could be bypassed. The result was an explosive growth in online publication across a very wide spectrum ranging from innocuous public information and news through commercial, industrial, governmental and civil society publications to pornographic and criminal publications of various kinds. The British Sunday newspaper the News of the World used to boast that ‘all human life is here’. But only the Web can truly make that claim.

Depending on one's point of view, the astonishing volume and diversity of the Web could be a matter for celebration or a cause for concern. Either way, it represented a sea-change in society's communications environment. And it posed a threat to the established order. Societies that had been accustomed to controlling certain kinds of publications found that the prohibited or undesirable material could suddenly be accessed on the Web by their citizens. News organisations that had hitherto catered only for local or national audiences found themselves competing with foreign and international providers that had previously been unavailable to their audiences. Companies that engaged in polluting activities in one part of the world found that their environmental transgressions were now being revealed to consumers in their ‘home’ markets. US automobile dealerships that had been accustomed to charging what the local market would bear found themselves confronted by customers who were aware of the prices being charged by dealers in other regions. Parents discovered that if their children typed some innocent word into a search engine the results might include some nasty pornographic websites. And so it went on: everywhere one looked, the Web appeared to be undermining older certainties, assumptions and ways of doing things.

For a brief period (1994 to 1996) the established order appeared to falter, unsure about how to tackle this new threat. There were some voices at the time arguing that the Web was uncontrollable, echoing the internet pioneer John Gilmore's famous dictum that ‘the Net interprets censorship as damage and routes around it.’ But this turned out to be a naive view. The Web is indeed an unruly and vigorous medium, but it is not entirely uncontrollable. The next subsection is about some of the ways in which it has been – or might be – controlled.

7.4 Curbing the Web: control

7.4.1 The weakest link

The key insight for anyone interested in regulating or controlling the Web is that anyone wishing to use the internet has to go through an internet service provider (ISP). This is an organisation, usually a company, that operates servers which have a permanent broadband connection to the Net and that arranges internet access for its customers/subscribers. Up to now, ISPs have traditionally also been the organisations that provided Web-hosting services, i.e. that enabled customers to publish websites. These sites are held on the ISP's servers because most users' PCs have not had the persistent connections and fixed internet addresses required to serve Web pages.

So ISPs form a critical link in the chain between would-be Web publishers and their audiences. It follows, therefore, that if one seeks to control what people do on or with the Web, the logical tactic is to seek to control or influence ISPs. And this is exactly what has been happening since 1997. Here I will examine how this has been accomplished by governments and corporations.

7.4.2 Government strategies

One control strategy favoured by authoritarian regimes (for example in some Asian and Middle Eastern countries) is to require all citizens to access the Net via ISPs that are owned or controlled by the government. This gives the authorities the ability to monitor what users do on the Net (in terms of emails sent and received, websites accessed and materials downloaded). It also gives them the ability to block access to selected sites, including those hosted in other countries. If citizens wish to circumvent these controls, they can – at least in principle – subscribe to an ISP in another country. But this involves the expense and inconvenience of making long-distance telephone calls, transferring money for subscription fees, etc., and so is probably beyond the reach of most users. The implication is that it is possible, in practice if not in principle, to exercise considerable control on how citizens use the Web.

Democratic regimes tend to avoid such direct methods of control, for political or constitutional reasons. But they have developed subtler methods of regulation based on special legislation and aimed again at ISPs. An example is the UK Regulation of Investigatory Powers Act of 2000, which you came across in Section 2. This gives the security authorities sweeping powers for surveillance of online activity. The Act enables the Home Secretary, for example, to compel an ISP to install special monitoring equipment which forwards a copy of every data packet passing through the ISP's servers to a special monitoring centre in MI5 headquarters in London. The Act also provides powers (under warrant) for intercepting and reading an individual's email messages, and the power (without warrant) to monitor any individual's ‘clickstream’ – i.e. the record of websites and pages accessed.

Finally, governments can compel institutions, like libraries, that provide public internet access to install filtering software which blocks access to certain kinds of site. This is the approach adopted by the US government in relation to federally funded libraries.

7.4.3 Corporate control strategies

Most corporate efforts are directed at controlling or eliminating websites that they feel are detrimental to their prosperity, reputation or security. Again, the main focus for attack is the ISP. ISPs are peculiarly vulnerable to this type of strategy mainly because they are concerned about being held liable for publishing defamatory materials which, though not authored by them, nevertheless appear on a website hosted by them.

At least in the UK, ISPs have good reason to be nervous on this score because of a legal precedent established some years ago when a British academic complained of being defamed in a discussion group hosted by Demon Internet (then a major ISP). For various (apparently plausible) reasons Demon did not take the offending group offline, and were then sued by the academic – and lost. Inexplicably, Demon did not appeal – with the result that an important precedent was set. This says that an ISP may be held liable if it refuses to take a publication offline after a complaint about defamatory content has been made.

The Demon precedent has provided the corporate world with a powerful tool for subjugating awkward websites. In essence, what happens is that a company's lawyers will write to the ISP hosting an allegedly offensive site and demand that it be taken offline, otherwise proceedings for defamation may ensue. Most ISPs, faced with such a letter, will probably comply. This is because they see themselves as commercial companies providing a service, not as publishers with an interest in protecting the right to free expression.

The implication is that while the Web may in principle offer unparalleled opportunities for free expression, in practice it is much easier to censor than people have supposed. And until legislation is passed granting ISPs the same kind of ‘common carrier’ immunity currently enjoyed by telecom companies, they are likely to continue to be cautious, and freedom of speech on the Web is likely to suffer.

Give the Wookie what he wants.

(Said by Han Solo in the film Star Wars)

7.5 Lessons from the cases: controlling Napster and others

Chapter 11 extract

Read the following extract from Chapter 11 of The Future of Ideas: from the beginning of the section ‘Increasing control’ on page 180 to the end of the first paragraph on page 205, which ends with ‘And I have argued that we should be sceptical about just this sort of protectionism.’

Click 'view document' to open the extract from Chapter 11 of The Future of Ideas.

The cases dealt with in Chapter 11 are mostly concerned with copyright law and the effect that this is having in practice at the content layer of the Net (highlighted in the mindmap below). Remember that the perspective on the cases in Chapter 11 is that of the author, who has clearly declared himself against the expansion of copyright laws. The courts, at least, have taken a different view. The objections and questions that Lessig raises are not always issues that the courts have to deal with directly, however.

Figure 14: mindmap of Chapter 11

Click to view a larger version of the Chapter 11 mindmap

7.6 Napster case

Napster is a case that makes both sides in the copyright wars spit venom. So I will offer a link to a useful resource on the case. The FindLaw Napster webpage is not compulsory reading for this unit, just a pointer if you would like to find out more about the details of the case and view some of the original documents.

In Chapter 8 Lessig makes an eloquent argument for Napster as a celestial jukebox – a technology giving access to a wider range of music than ever before in human history – not just a tool to steal music. In Chapter 11 he turns to the legal issues. In the Sony Corp of America v Universal City Studios Inc. case in 1984, the US Supreme Court decided that a technology that has the ‘potential’ for a ‘substantial non-infringing use’ – the video cassette recorder – could not be banned.

Some uses for Napster that could be substantial and non-infringing are:

  • listening to music freely released on the Net;

  • listening to music in the public domain not subject to copyright restrictions;

  • listening to other kinds of authorised works, e.g. recordings of lectures;

  • sampling, whereby people check out music to see whether they like it before buying;

  • ‘space shifting’, the US practice of downloading MP3 files of music that you already own on CD onto your computer for personal use. Note: making extra copies for personal use is not allowed in the UK.

These uses are now no more than theoretical. The original Napster wasn't slowly turned off, as Lessig predicted, but was switched off by the company itself until it could get a legally acceptable subscription service running. Napster were dealt a further blow in March 2002, when the Appeal Court rejected their appeal. Possibly more significantly, they suffered a serious commercial setback in July 2002 when Thomas Middelhoff resigned as Chief Executive of Bertelsmann AG, Napster's biggest investor; Middelhoff had been Napster's biggest supporter at Bertelsmann. In September 2002 a bankruptcy judge in Delaware blocked the formal sale of Napster to Bertelsmann. By the end of November 2002, Napster's assets had been bought by a company called Roxio, which produces CD-burning technology. Roxio launched a new commercial fee-paying version of Napster in October 2003.

Apple had already been running their iTunes music downloading service for several months before that, and numerous competitors have appeared since. Music industry-backed initiatives such as PressPlay and MusicNet had been ongoing since about 2001 but had run into numerous problems, including intellectual property issues and an antitrust probe.

7.7 MP3.com case

The Recording Industry Association of America (RIAA) and the court considered MP3.com's service to be blatant copyright violation. Lessig accepts that they had a point if the law is interpreted literally, but argues that, if the real issue was preventing the piracy of music, then:

  • My.MP3 did not facilitate theft – users had to demonstrate that they had the CD;

  • My.MP3 theoretically increased the value of a CD because users could play their music anywhere;

  • the closure of MP3.com could lead to the production of more copies of the same CDs as individuals resort to copying their own.

MP3.com wanted to be allowed to ‘space shift’ content just as individuals could. This would allow people to get access to their music wherever they had access to the internet, without having to make their own copies of their CD collections. Sony had been allowed by the Supreme Court to ‘time shift’ content with their video cassette recorders (VCRs), in this context taping TV programmes and replaying them later. Lessig believes the court should have given more leeway to MP3.com's innovation.

7.8 DeCSS case

Type ‘DeCSS’ into Google and you will get about 800 000 hits. The four pages (pp. 187–190) in The Future of Ideas on this very complex story, which has raised the hackles of people on both sides of the copyright divide, provide one of the clearest short summaries of the case to be found anywhere.

The US Digital Millennium Copyright Act (DMCA) states:

No person shall circumvent a technological measure that effectively controls access to a work protected under this title.

This means that, in this case, no one is allowed to crack CSS (content scramble system), a code that limits the range of computers on which a DVD can be played.

The DMCA also states:

No person shall manufacture, import, offer to the public, provide, or otherwise traffic in any technology, product, service, device, component, or part thereof, that –

  • (A) is primarily designed or produced for the purpose of circumventing a technological measure that effectively controls access to a work protected under this title;

  • (B) has only limited commercially significant purpose or use other than to circumvent a technological measure that effectively controls access to a work protected under this title; or

  • (C) is marketed by that person or another acting in concert with that person with that person's knowledge for use in circumventing a technological measure that effectively controls access to a work protected under this title.

So, no one is allowed to create a tool (DeCSS) to crack CSS, or provide web pages or links to web pages which have DeCSS.

Lessig's concern is that the DMCA interfered with both fair use and free speech in the case. The district court judge, Kaplan, however, decided that the DMCA regulates code, not fair use; also that copyright owners had the right to employ technological self-help measures to prevent piracy. He inferred that the primary intent of the editor of the magazine Hacker 2600 was to help people crack CSS, so that there were no real free speech issues at stake in preventing him linking to sites containing DeCSS.

Lessig's other concern is that the movie studios have now demonstrated that they are prepared to resort to law when their control is threatened, and that this will have a ‘chilling effect’, making people wary of exercising fair use and free speech rights for fear of the consequences.

In November 2001, three judges in the Court of Appeals agreed with Judge Kaplan's decision. The EFF and Hacker 2600 have decided not to appeal further to the US Supreme Court.

The Californian Supreme Court made a ruling on its DeCSS case in September 2003. The court sent the case back to the lower Appeal Court for further consideration. The key part of the decision, though, was that ordering someone to remove DeCSS from their website ‘does not violate the free speech clauses of the United States and California constitution’ if publication of the code breaches a trade secret. The lower court was spared the need to decide if CSS was still a trade secret despite wide publication of DeCSS when the DVD Content Control Association dropped the case in January 2004. Nevertheless, in February 2004, when the court formally lifted the injunction against the publication of DeCSS, they also said that there was no evidence that CSS was still a trade secret even when the case was originally brought in 1999.

The focus of The Future of Ideas is on the DeCSS cases in the US. Jon Johansen, the teenager who originally posted the DeCSS code on the internet, has also been the subject of a criminal prosecution in his native Norway. It is a crime in Norway, punishable by up to two years in prison, to bypass technological controls to access data one is not entitled to access. On 7 January 2003, three judges unanimously acquitted Johansen of all charges. According to the judges, someone who buys a DVD which has been legally produced has legal access to the film. In essence, they said that someone cannot be punished for breaking into their own property. Johansen had used DeCSS to bypass the technological controls on a DVD he owned, so that he could watch it on his computer. The prosecution appealed against the decision and, on 28 February 2003, the Court of Appeals ordered a full retrial. The retrial took place in December 2003 and Johansen was again acquitted. On 5 January 2004 the Norwegian Economic Crime Unit (Økokrim) announced that they would not be appealing the case further. Johansen is now suing Økokrim for damages and legal costs.

Further reading: If you are interested in looking at further details of the DeCSS case, the OpenLaw: Open DVD forum at the Berkman Center for Internet and Society provides a huge range of material on this case and the DeCSS case in California, which was decided in favour of the defendants.

If you'd rather not get into all the detail covered at OpenLaw, here is some further information on the case that was originally prepared for another course, which presents the basic story and arguments. These links have been provided only because this is a case that is widely misunderstood and one about which many people get hot under the collar.

7.9 Eldred case

This case is outlined on pages 196–199 of The Future of Ideas. There have been a number of developments since the book was published, the most significant being that the US Supreme Court heard the case in the autumn of 2002 and made a decision in January 2003. Eldred's arguments were rejected by a majority of 7:2.

Lessig and Eldred are now calling for an ‘Eldred Act’, which would require US copyright holders to pay a nominal fee (e.g. $1) 50 years after publication of the copyrighted works to retain the copyright. Any works on which the fee was not paid would fall into the public domain. Details can be seen in the relatively short online petition at http://www.petitiononline.com/eldred/petition.html.

Meanwhile, some of Lessig's colleagues at the Stanford Center for Internet and Society are pursuing another Copyright Term Extension Act (CTEA) case. In Golan v Gonzales they are, so far successfully, specifically challenging the restoration of copyright status to works that had already passed into the public domain. In Kahle v Gonzales they are claiming that the CTEA, combined with the US implementation of the international Berne Convention, effectively make the term of copyright in the US unlimited. This would be in breach of the US Constitution.

7.10 Sony v Owen (UK anti-circumvention case)

This case came to court in January 2002, after The Future of Ideas was published. It involved the importation into the UK of a so-called ‘Messiah’ chip which could be used to circumvent Sony's PlayStation 2 copy protection codes. The copy protection system allows the PlayStation 2 to search CDs or DVDs for embedded codes identifying them as authorised games which the console will play. If it can't find the codes, it will not play the CD or DVD.

The defendants argued that the ‘Messiah’ chip could be used to facilitate making back-up copies – not just for infringing copyright on Sony's games. The judge ruled in favour of Sony and issued an injunction banning the importation of the chips, as well as awarding damages to Sony.

The European Union passed a copyright directive in the spring of 2001 requiring member states to implement laws similar to the DMCA by December 2002. However, the UK already had anti-circumvention provisions written into the Copyright, Designs and Patents Act 1988. Section 296 of this Act was the basis for the judge's decision in this case.

A transcript of the Sony v Owen UK anti-circumvention case was posted on the ukcrypto mailing list in May 2002.

The PlayStation, the Xbox and the mod chips

There have been other so-called mod-chip developments in Hong Kong, Australia and the US. Kabushiki Kaisha Sony Computer Entertainment v Stevens in Australia was similar to the UK Sony v Owen case. Sony lost the first round in the Stevens case, in July 2002, with the court ruling that PlayStation 2 mod chips could be imported and sold without breaching Australian law. The judge decided that the combination of the game CD access code and the chip in the PlayStation 2 machine that identifies these approved CDs did not constitute a ‘technological protection measure’.

By a majority decision, in July 2003 the Court of Appeal overruled that decision, ordered Stevens to stop selling the mod chips and referred the case back to the lower court to assess what damages Stevens should pay Sony. Stevens appealed and the case was heard by the Australian High Court in early February 2005. The court ruled in favour of Stevens in October 2005.

In April 2003, a US court jailed David Rocci for importing (from the UK) and selling ‘Enigmah’ mod chips for the Microsoft Xbox games console. Rocci is one of the first people to be jailed for violating the DMCA. His sentence was five months and a fine of $28,500 (he had been facing up to five years and half a million dollars in fines).

7.11 Copyright Bots, OLGA and the power to monitor and police content on the Net

The basic lesson here is that copyright owners' power to monitor and police their content on the internet is vastly greater than it was in the pre-internet ‘dark ages’ Lessig describes. The Recording Industry Association of America (RIAA) has demonstrated this quite starkly in its recent campaign to identify individual peer-to-peer file sharers, through their ISPs, and then sue these people for copyright infringement.

7.11.1 RIAA v individual P2P music file sharers

When Napster came along it took the music industry by surprise. Napster very quickly developed a massive subscriber base, with some estimates suggesting that it had 60 million users at its height. Then the music industry sued and Napster was shut down. In some ways the case was easy because Napster relied on a central database that facilitated the copyright-infringing file sharing. Meanwhile, however, a number of other P2P file-sharing services with more distributed architectures than Napster have established themselves, and music swapping continues on a massive scale.

The RIAA are keen to curb the amount of file sharing going on, as they believe it is having a direct impact on the market for music CDs. Although the internet provides the facility for file sharing, it also provides the RIAA with the means to track file sharers, using the very software that facilitates the copying as well as other established digital forensic techniques.
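As a small illustration of why an IP address on its own points to an ISP rather than a person, the sketch below (Python, purely illustrative) performs a reverse DNS lookup on an address. The answer, where one exists, is typically a host name inside the ISP's domain; linking that address at a particular time to a named customer requires the ISP's own connection records, which is why the RIAA needed the cooperation of the ISPs, or a subpoena, to identify individuals.

# A minimal sketch: reverse DNS lookup on an IP address. The result usually
# names a machine in an ISP's network, not a person; only the ISP's records
# can tie the address, at a given time, to a customer account.
import socket

def reverse_lookup(ip_address):
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip_address)
        return hostname
    except socket.herror:
        return "no reverse DNS record"

# '131.111.163.59' is the example address used in Activity 5 above.
print(reverse_lookup("131.111.163.59"))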

However, although the RIAA can track the IP addresses of the computers that contain the apparent infringing copies of music files, the individual users are still difficult to identify given the current architecture of the internet. So the RIAA need to get the ISPs, which provide the P2P users with their internet access, to identify these people. With this in mind the RIAA, in August 2002, took out a test case against a large ISP called Verizon. The RIAA requested that the court issue an order compelling the company to identify a Verizon customer who had allegedly swapped songs illegally using Kazaa. Verizon fought, because they were concerned about:

  1. the privacy of their customers;

  2. the possibility that they, and other ISPs, might become legally liable, in practice, for the online conduct of their customers, and by default be required to undertake the responsibility and costs of policing the Net;

  3. the substantial operational costs of responding to large numbers of requests from the music industry to identify alleged copyright infringers;

  4. their customers holding them responsible and suing for any damaging consequences arising out of the disclosure of personal details.

Verizon lost its appeal in June 2003 and was obliged to hand over the customer details. In effect, Section 512(h) of the DMCA gave copyright holders the power to get subpoenas issued by court clerks, without judicial oversight, in order to identify alleged copyright infringers through ISPs. Verizon appealed again and this appeal came to court in mid-September 2003. In December 2003 the Appeal Court sided with Verizon. The Appeal Court judges said that if Verizon was not actually storing copyright-infringing material on their servers (but only acting ‘as a mere conduit for the transmission of information sent by others’), they did not have to hand over customer identities in response to Section 512(h) subpoenas. The RIAA are contesting the decision.

There is some evidence to suggest that the RIAA's campaign of pursuing individuals through their ISPs has led to a reduction in the overall volume of music swapping on P2P networks. The Appeal Court ruling, however, meant that they had further hurdles to overcome before being able to identify specific legal targets. Two other ISPs – SBC Communications (Pacific Bell) and Charter Communications – and the American Civil Liberties Union (ACLU) had, by this time, also decided to fight the RIAA subpoenas in court. The ACLU, in the interests of protecting personal privacy, took up the case of an individual student accused of copyright infringement, and numerous other civil liberties groups, like the EFF, had been actively involved in supporting Verizon and Charter on similar grounds. The US Department of Justice and the Copyright Office, on the other hand, are supporting the RIAA, and in January 2004 they filed an amicus brief in the Charter Communications case, effectively saying that the appeal court judges got the Verizon decision wrong. Charter prevailed in the Appeal Court in January 2005.

By August 2004 the RIAA had sued thousands of individuals who, on average, allegedly had more than 1000 songs on their computer hard drives, and they settled with about 600 of those, out of court, for an average of about $3,000. They also had thousands of subpoenas issued, requiring ISPs to hand over customer details.

There has been a flurry of complex legal activity surrounding the RIAA P2P cases, and the situation continues to evolve. Neither is it limited to the US. Music industry trade associations in Australia and Canada have also been suing ISPs. File-sharing sites have been shut down in Spain and Taiwan, for example. The Canadian Copyright Board declared the downloading of copyright-protected music legal under Canadian law in December 2003, although they said that uploading copyrighted songs to file-sharing networks was illegal.

The situation in the EU is fluid, with the gradual implementation throughout the member states of the 2001 copyright directive (2001/29/EC) and the approval of a new intellectual property rights enforcement directive in March 2004. Arguably, the implementation of the 2001 directive into UK law in the autumn of 2003 could be interpreted as meaning that P2P song swappers could be liable to a maximum of two years in jail. However, a government minister from the Department of Trade and Industry, Stephen Timms, said in Parliament in February 2004 that:

In the UK, the British Phonographic Industry (BPI) launched a campaign in March 2004 against music sharing on the Net, and are also threatening court action. Also in March, recording industry associations in Denmark, Canada, Germany and Italy affiliated to the International Federation of the Phonographic Industry (IFPI) brought legal actions against 247 individuals. We are likely to see more of these cases.

Most of the people being sued by the music industry are alleged to be swapping music on a large scale and can be presented as blatant violators. But a 12-year-old New York girl was one of the defendants, and that did not go down too well from a public relations perspective. The girl's mother settled out of court within 24 hours of the case being widely reported, by apologising and paying $2,000. A case against a pensioner who didn't own a computer was one of a number, later dropped by the RIAA, in which individuals had been wrongly accused.

By the end of 2008 the RIAA had sued, threatened or settled with more than 30,000 individuals. There have also been a variety of attempts to get the law changed to tackle copyright infringement via peer-to-peer networks. The music industry worldwide was also lobbying for ‘3 strikes’ legal regimes whereby ISPs would be asked to send warning letters to suspected file sharers and then cut off their internet connection if the apparent file sharing continued.

Further reading: There is a huge amount of material available on the Web on this story. Just a few sites worth checking out are:

  • EFF

  • RIAA

  • BBC

  • IFPI

Update: In the autumn of 2007 the music industry won damages of $222,000 from a woman proven to have shared 24 songs via a P2P network. That's $9,250 per song, and she will also be obliged to pay the industry's legal costs. She appealed the decision, technically on constitutional ‘due process’ grounds but essentially claiming the award is disproportionate to the damage caused by her (now) admitted infringing activity. In September 2008 the judge declared a mistrial in the case on the grounds that he had inadvertently misdirected the jury. The RIAA lost its appeal against that decision in December 2008. In June 2009 the jury in the retrial increased the damages to $1.92 million ($80,000 per song).

The OLGA story referred to in Chapter 11 of The Future of Ideas alludes to the power of the ‘cease and desist’ letter. The next subsection contains a bit more information on this common form of legal threat.

7.12 The cease and desist game

US District Judge Stewart Dalzell has called the internet ‘the most participatory form of mass speech yet developed’. Lessig thinks the architecture is evolving to change that, although he hopes First Amendment – free speech – values can be protected in the process. (We have seen in The Future of Ideas his pessimism about the potential of the Net to survive as an innovation commons.) In the meantime, how does someone deal with material posted on the internet that they may not like?

The most common approach, for those who can afford it, is to engage a lawyer to send a ‘cease and desist’ letter. The Net is viewed by many as a phenomenal ‘liberating’ force, but one of the lessons of the last few years is that law or the threat of legal action is extremely powerful. Most people want to obey the law, if only because they don't want to suffer the consequences of not doing so. This applies to online as well as offline behaviour.

The most common targets of these letters are website owners and ISPs. A letter will typically accuse the recipient of copyright or trademark infringement or of publishing defamatory/obscene/hate material and demand that they ‘cease and desist’ from such action immediately (i.e. remove the offending material or the entire website). Most people comply with the request because they fear the consequences of not doing so and don't have the resources to get involved in a court case. Some comply because they had not realised their site was illegal. Most of the cases don't get reported. Here are some illustrations that were.

  • HP and the security researchers

  • Harry Potter fans fall out with Warner Brothers

  • Did you know ‘bake off’ is a trademark?

  • Barney's cease and desist lawyers

  • Detailed EFF response to Barney's lawyer's threats

Laurence Godfrey has been vilified by cyber-libertarians for taking legal action for defamation against Demon. He pursued a legal remedy because defamatory material about him was distributed on the Net. In the UK the law was on his side: Demon were liable because he had formally asked them to remove the offending material and they failed to do so.

The Godfrey case starkly illustrates how important the role of ISPs will be, as regulators try to get to grips with controlling the internet. ISPs, as the gateways to the Net for most people, could be as important as applications architecture in regulating the Net.

The following Observer article illustrates the issue in relation to speech:

  • The site offends? We'll pluck it out

The strategy of targeting ISPs to protect intellectual property also extends to questions of privacy, obscene or hate material, and efforts to track criminals on the Net. Not only can litigators target ISPs; so too can regulators. Regulators could conscript ISPs to help implement, for example, the EU's Copyright Directive 2001. The FBI already install DCS1000 (Carnivore) devices at ISPs to monitor emails. The Regulation of Investigatory Powers Act 2000 allows UK law enforcement authorities to do likewise, although the practice is not, at the time of writing, as advanced as in the US.

Businesses have a legal duty to their shareholders to make money and they don't do that by running up unnecessary legal fees. So if an ISP is threatened with a lawsuit unless it closes down a website which is alleged to be violating copyright, the chances are it will shut the site down. It is cheaper than spending money on lawyers defending the case, even if it is likely to win. The law can be a powerful force in controlling the internet. ISPs, which provide the entry route for most people, could be one of the most effective groups for facilitating that control.

Further reading: Harvard's Berkman Center for Internet and Society, the EFF, the Boalt Hall Samuelson Center and Stanford's Center for Internet and Society run a chilling effects clearinghouse, which collects what they consider to be pernicious examples of cease and desist letters.

The latest large-scale cease and desist story is the case of DirecTV. The company has sent out hundreds of thousands of cease and desist letters and filed about 9000 lawsuits against people they allege to be intercepting its satellite signal. The EFF and Stanford Center for Internet and Society have set up a website about it at www.directvdefense.org.

7.13 CPHack case

Mattel obtained a worldwide injunction against the distribution of the CPHack tool, which revealed the list of websites blocked by Mattel's Cyber Patrol internet content blocking software. Lessig has made some critical comments about the outcome of the case:

  • How can a US court assert worldwide jurisdiction?

  • Is it right to allow a company to use the law to prevent people finding out what gets censored or what criteria are used to censor web pages? Lessig, as a US constitutional scholar, worries about things like free speech (i.e. the First Amendment).

  • The law was effectively used as a tool to prevent people criticising Mattel about their specific censorship practices.

  • Is it right to allow a company to use contract law to circumvent copyright law? The Cyber Patrol licence disallows reverse engineering a product to find out how it works, something that is allowed under copyright law.

  • Were the facts of the case appropriately decided? Lessig alleges that CPHack was released under a General Public License (GPL), which would mean Mattel could not control its distribution.

There have been a number of controversies surrounding filtering or blocking software, some of which are discussed in the next subsection.

7.14 Filter software

The Children's Internet Protection Act (CIPA or CHIPA) was signed into law in December 2000 by then US President Bill Clinton. It is the most recent attempt, following the Communications Decency Act and Child Online Protection Act (COPA 1998), to protect children from materials on the internet that are ‘harmful to minors’. It required libraries and schools that get federal funding to install filtering software (‘blocking technology measures’) on their computers. A legal challenge to the Act was launched by the American Library Association (ALA) and others in March 2001.

In May 2002, a panel of three judges in Pennsylvania decided that the law was unconstitutional. They held that, under the First Amendment, public libraries may not filter access to the Net for adults but did not decide whether schools or public libraries could filter access for children. Included in the decision was a comment about the quality of filtering software:

We find that, given the crudeness of filtering technology, any technology protection measure mandated by CIPA will necessarily block access to a substantial amount of speech whose suppression serves no legitimate government interest.

The CIPA case reached the US Supreme Court in March 2003, and in June 2003 the Supreme Court upheld the Act by a 6:3 majority. The decision means that all libraries in the US are required to install filter software as a condition of receiving federal funding. In the wake of the CIPA decision the US government decided to try to revive COPA. In August 2003 it appealed to the US Supreme Court again to have the law reinstated. A federal Appeals Court has twice ruled the law unconstitutional on the grounds that it restricts free speech, and in 2002 the US Supreme Court refused to overturn the ruling. The ALA maintain a web page on the case.

The 1998 COPA mentioned above has never come into force, having been challenged and repeatedly found unconstitutional. The case was heard by the US Supreme Court for the second time in March 2004. In June 2004 the Supreme Court found the law to be unconstitutional. (Update: Then in January 2009 the Supreme Court finally killed off COPA for good by declining to review it further.)

On 20 October 1999, IDT, a New Jersey-based ISP, blocked all email from the UK because some of its customers had received a large number of offensive unsolicited emails, apparently from a UK address. The junk emailer (spammer) had actually exploited a security hole in a UK university's system (University of Leeds). This made it appear as if the bulk emails were originating from there. The university claimed that IDT did not contact them before they took their action. Even if the emails had come from the university, it was a bit drastic to cut off an entire country in response.

It does demonstrate how crude filtering can be, though. This lack of precision goes to the heart of most of the objections raised to filtering. Artificial intelligence cannot yet come anywhere close to providing reasoning powers similar to those of the human mind. Given that people have great difficulty agreeing on what is acceptable, and that this varies with individual and cultural values, it is asking a lot to expect software to make such complex decisions.
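A toy example makes the point about crudeness concrete. The sketch below (Python, purely illustrative, with an invented blocklist) filters text on banned keywords, which is roughly how the simplest blocking products worked. Every one of the innocuous examples is blocked, which is the kind of over-blocking the Pennsylvania judges had in mind when they spoke of suppressing ‘a substantial amount of speech’.

# A minimal sketch of keyword-based filtering, to show how crude the approach
# is: it cannot distinguish a health or education page from pornography.
BLOCKED_WORDS = {"breast", "sex", "drugs"}   # invented, deliberately simplistic list

def is_blocked(page_text):
    words = {w.strip(".,;:!?()").lower() for w in page_text.split()}
    return bool(words & BLOCKED_WORDS)       # block if any banned word appears

pages = [
    "Advice on breast cancer screening for the over-fifties",
    "GCSE biology revision: sex determination in humans",
    "Government guidance on prescription drugs",
]

for page in pages:
    print(is_blocked(page), "-", page)       # all three innocent pages are blocked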

Another main objection concerns the lack of transparency. Firstly, most of the commercially available filter software is not transparent. We saw in the Cyber Patrol case the lengths that Mattel went to in order to protect their list of blocked sites. The user does not know the filtering criteria used by the software – why will it let one site pass and not another? Secondly, there is a temptation to install the software invisibly, for example at an ISP, so that users are not even aware they are being filtered. The latter problem is more acute from a constitutional or regulatory perspective. With commercial products, the user still has the choice of whether or not to use them.

Finally, installing filter software can instil a false sense of security. It is easy to believe the problem of children getting access to inappropriate material is solved once the software is installed. Yet it has been demonstrated repeatedly that these filters do allow pornography, for example, to get through.

Further reading: Given the controversial issue surrounding filtering material on the internet, and many people's concern that they would like to know more about this, I offer the following as a small sample of materials you might like to dip into. This cannot be considered comprehensive. Neither can inclusion of these references or links be considered to represent an endorsement of the views expressed there. Many, you will note immediately, take a strong stance for or against filtering and/or other strategies for protecting young people on the internet.

(Websites accessed 17 September 2008)

  1. Filters & Freedom: Free Speech Perspectives on Internet Content Controls (1999) by the Electronic Privacy Information Center.

  2. Filters & Freedom: Free Speech Perspectives on internet Content Controls 2.0 (2001) by the Electronic Privacy Information Center.

  3. An interesting FutureTense radio interview with Ben Edelman regarding a study he and Jonathan Zittrain did on blocking of the internet in Saudi Arabia. They have done another study recently looking at internet filtering in China. In February 2004, Edelman and Zittrain at Harvard's Berkman Center, in partnership with the Citizen Lab at the Munk Centre for International Studies, University of Toronto and the Advanced Network Research Group at the Centre for Security in International Society at Cambridge University, launched the OpenNet Initiative to document filtering and surveillance practices worldwide.

  4. The Internet Under Surveillance: Obstacles to the free flow of information online.

  5. Internet Blocking in Public Schools LINK BROKEN.

  6. ‘Battling censorware’ is an article that Lawrence Lessig wrote for The Industry Standard in April 2000 about the CPHack case.

  7. ‘Fahrenheit 451.2: Is cyberspace burning?’ (ACLU).

  8. ‘Censorship in a Box: why blocking software is wrong for public libraries’ (ACLU).

  9. ‘Judicial Monitoring: the bureaucrat blinks, privacy wins’ (The Censorware Project).

  10. The Censorware Project.

  11. ‘Who watches the watchmen: internet content rating systems, and privatised censorship’ (Cyber-Rights & CyberLiberties report).

  12. ‘Who watches the watchmen: Part II’ (Cyber-Rights & CyberLiberties report).

  13. Filtering FAQ (Computer Professionals for Social Responsibility).

  14. ‘Mandated Mediocrity: blocking software gets a failing grade’.

  15. The Court Challenge to the Child Online Protection Act (COPA).

  16. Global Internet Liberty Campaign statement against ‘stealth blocking’.

  17. Joint Statement Opposing Legislative Requirements (the Children's Internet Protection Act – ‘CIPA’ or ‘CHIPA’) for School and Library Internet Blocking Technologies LINK BROKEN

  18. ‘FilterGate, or knowing what we're walling in or walling out’.

  19. ‘Massachusetts internet filtering technology company says mandatory filtering laws aren't needed’.

  20. American Library Association's CIPA website.

  21. ‘Positioning the public library in the modern state’ (First Monday article on CIPA).

If you are concerned about your children finding inappropriate material on the Web, there are a huge number of websites and books offering advice on safe surfing. For example:

  • Consumers for Internet Safety Awareness

  • Child safety on the information highway

  • ALA guide to safety on the Net

  • The Parent's Guide to Protecting Your Children in Cyberspace (2000) by Parry Aftab, published by McGraw-Hill

7.15 Edelman v N2H2

The combination of filtering software and the DMCA has allegedly created particular problems for Harvard University student and researcher Ben Edelman. His concerns about personal liability relate directly to the issues in the CPHack and DeCSS cases we have covered. In July 2002 Edelman sued one of the largest filter software companies, N2H2, asking the court that he be allowed to conduct and publish his research on their software without fear of civil or criminal liability under the DMCA.

In April 2003, a US District Court judge in Boston dismissed Edelman's case, saying ‘… there is no plausibly protected constitutional interest that [plaintiff] Edelman can assert that outweighs N2H2's right to protect its copyrighted property from an invasive and destructive trespass …’ (Edelman and his backers, the American Civil Liberties Union, let the appeal deadline pass without lodging an appeal, so the case is now closed). That one sentence in the judge's decision has raised some controversy. Trespassing on copyright was not a previously recognised offence.

The OpenNet Initiative, launched in February 2004, in which Edelman is involved, is now operating as ‘a “clearinghouse” for circumvention technologies which assesses and evaluates systems intended to let users bypass filtering and surveillance’ and also actively developing ‘circumvention technologies in-house as a means to explore the limitations of filtration and counter-filtration practices.’

Edelman maintains a web page about the Edelman v N2H2 case at Harvard.

7.15.1 iCraveTV

Jurisdictional issues provide an incentive for commerce and governments to find technological ways to ‘zone’ the internet (e.g. to make sure Canadians don't leak free TV into the US).

7.16 P2P cases: controlling P2P

We have seen how the entertainment industries are pursuing thousands of individual users of file sharing networks and their ISPs through the courts. If they can ultimately control the ISPs it will be more effective than suing multitudes of individuals. What is happening with the P2P companies themselves?

7.17 P2P cases: KaZaA in the Dutch Supreme Court

The Supreme Court in the Netherlands was the first national high court to consider the question of whether providers of P2P technologies were breaking the law. The Dutch music rights society Buma/Stemra had brought the case against Niklas Zennström and Janus Friis, the original creators and distributors of the KaZaA (now simply ‘Kazaa’) software, then owned by their company KaZaA BV.

Having lost in the Court of First Instance, Zennström and Friis sold KaZaA BV to Sharman Networks in Australia. Ironically, about a month later, in March 2003, the Amsterdam Appeal Court overturned the original decision.

Buma/Stemra appealed to the Dutch Supreme Court, which made its ruling on 19 December 2003. The Supreme Court agreed with the Amsterdam Court of Appeal that it was legal to make P2P file-sharing software available. The decision did not deal with the question of copyright infringement by individual file sharers, so it left open the possibility of suing individuals, as the IFPI (International Federation of the Phonographic Industry) have done in Denmark, Germany, Italy and many other countries.

The music industry in Australia sued the current owners of Kazaa, Sharman Networks. The trial was in November 2004, and Judge Wilcox ruled against Sharman in September 2005. There were further appeals and negotiations, and the parties eventually settled out of court in July 2006.

Although the Dutch Supreme Court was the first to consider P2P technologies directly, it has been widely argued that the US and Canadian Supreme Courts have also effectively laid down clear precedents that cover P2P technologies. In the Sony v Universal case, decided in 1984, the US Supreme Court ruled that the makers of a technology capable of ‘substantial non-infringing uses’ could not be held liable for infringements committed by its users.

More recently, in March 2004, the Canadian Supreme Court decided, in Law Society of Upper Canada v CCH, that ‘a person does not authorise copyright infringement by authorising the mere use of equipment (such as photocopiers) that could be used to infringe copyright. In fact, courts should presume that a person who authorises an activity does so only so far as it is in accordance with the law.’ Also, ‘even if there were evidence of the photocopiers having been used to infringe copyright, the Law Society lacks sufficient control over the Great Library's patrons to permit the conclusion that it sanctioned, approved or countenanced the infringement.’ Extending this to P2P software, the question becomes whether the P2P company can control the behaviour of users of its software.

When Grokster got to the US Supreme Court in 2005, the issue of ‘substantial non-infringing uses’ was side-stepped and the ruling on 27 June 2005 went against the P2P company.

7.18 P2P cases: MGM v Grokster

Review the information in Section 4.9 on the Grokster case (‘Some P2P legal developments’) before continuing with this section.

In August 2004, the Appeal Court ruled in favour of Grokster and Streamcast. The court basically applied the Sony v Universal (1984 case) precedent and said the main question was:

whether a technology is merely capable of a substantial noninfringing use, not the proportion of noninfringing to infringing uses.

To be liable, the P2P company would have to have:

  1. knowledge of specific infringements

  2. at a time when it could do something about those infringements.

Note that the question of control over users' actions was key here again – Grokster couldn't be held liable for the actions of its users if it had no control over those actions.

The court also said that the P2P companies could not be required to redesign their technologies to a specification acceptable to the entertainment industry and force customers to update to the approved version. Ultimately the judges came down quite hard on the copyright owners' interpretation of the law:

As to the issue at hand, the district court's grant of partial summary judgment … is clearly dictated by applicable precedent. The Copyright Owners urge a re-examination of the law in light of what they believed to be proper public policy, expanding exponentially the reach of the doctrines of contributory and vicarious copyright infringement. Not only would such a renovation conflict with binding precedent, it would be unwise. Doubtless, taking that step would satisfy the Copyright Owners' immediate economic aims. However, it would also alter general copyright law in profound ways with unknown ultimate consequences outside the present context.

Further, as we have observed, we live in a quicksilver technological environment with courts ill-suited to fix the flow of internet innovation. The introduction of new technology is always disruptive to old markets, and particularly to those copyright owners whose works are sold through well-established distribution mechanisms. Yet, history has shown that time and market forces often provide equilibrium in balancing interests, whether the new technology be a player piano, a copier, a tape recorder, a video recorder, a personal computer, a karaoke machine, or an MP3 player. Thus, it is prudent for courts to exercise caution before restructuring liability theories for the purpose of addressing specific market abuses, despite their apparent present magnitude.

Indeed, the Supreme Court has admonished us to leave such matters to Congress.

The US Supreme Court heard the case at the end of March 2005 and decided in favour of the music industry in June 2005. The US Copyright Office has a page devoted to the Grokster case.

On the day of the decision I wrote:

A clear win for MGM and the studios and a clear loss for the P2P companies. No doubt there is significant gnashing of teeth about this amongst the anti-copyright expansion lobby, suggesting it undermines or kills the Sony v Universal "substantial non-infringing use" test, but it is not as bad for them as it might at first appear.

First of all, the court was ruling on whether the Court of Appeals for the 9th Circuit was right to give a summary judgement against MGM, saying they couldn't sue the P2P companies for damages. On that, the Supreme Court decided the 9th Circuit was wrong to stop MGM from getting a substantive hearing. So the case now goes back down the chain to look at MGM's claims for damages in detail.

Secondly, the Supreme Court focused heavily on the intentions of Grokster and Streamcast in relation to the "staggering" scope of copyright infringement on their networks. They concluded that there was a lot of evidence to demonstrate that these two particular P2P companies not only distributed technologies "capable of substantial non infringing uses" (which I thought would have been OK under the Sony test, though the concurring justices split 3–3 on this point, with the remaining three not commenting), but that they also acted to heavily promote copyright-infringing activity amongst the users of their software. The CTO of Streamcast, for example, is on record with the statement "[t]he goal is to get in trouble with the law and get sued. It's the best way to get in the new[s]."

And this becomes the key to the Court crafting a new inducement rule, based on a similar rule in patent law, which will probably become known as the rule in MGM v Grokster. On page 19 of the decision, Justice Souter says the patent law inducement rule:

… is a sensible one for copyright. We adopt it here, holding that one who distributes a device with the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement, is liable for the resulting acts of infringement by third parties …

… mere knowledge of infringing potential or of actual infringing uses would not be enough here to subject a distributor to liability. Nor would ordinary acts incident to product distribution, such as offering customers technical support or product updates, support liability in themselves. The inducement rule, instead, premises liability on purposeful, culpable expression and conduct, and thus does nothing to compromise legitimate commerce or discourage innovation having a lawful purpose.

There are a lot of talking points in the decision but that's the key one. It adds up to a really bad decision for Grokster and Streamcast (more so for the latter, because Grokster at least sent emails to users warning them about infringing content), but not necessarily bad for other P2P companies.

The court clearly wanted to avoid scaring potential technology innovators and more or less struck that balance. Of course, the justices have no control over how the decision will be presented, which will be crucial.

7.19 IPR enforcement, PIRATE and INDUCE

The passing of the European Union intellectual property rights (IPR) enforcement directive in March 2004 could lead to raids on alleged filesharers' homes and increasing costs for telecommunications companies associated with P2P litigation and P2P technologies.

Following the MGM v Grokster case, we saw increased though not entirely fruitful lobbying efforts to get the US Congress to change the law to target P2P technologies. (Pamela Samuelson, of the University of California at Berkeley, had some interesting views at the time of the decision as to how the ‘inducing infringement’ test introduced by the court deprived the industry of ‘its strongest argument for legislation to put P2P and other disruptive technology developers out of business.’)

There were (and are) ongoing proposals for laws in the US to tackle copyright infringement on P2P file-sharing networks. Bills such as ACCOPS (the Author, Consumer and Computer Owner Protection and Security Act of 2003), ART (the Artists' Rights and Theft Prevention Act of 2004) and PIRATE (the Protecting Intellectual Rights Against Theft and Expropriation Act of 2004) were introduced; they could have led to the jailing of P2P file swappers, to the Justice Department pursuing civil legal action on behalf of the entertainment industry, and to the facilitation of wire tapping for civil copyright infringement.

Also in March 2004, California Attorney General Bill Lockyer (also President of the National Association of Attorneys General) circulated a letter to fellow state attorneys general calling P2P software a ‘dangerous product’ that facilitates crime and copyright infringement. The letter appears to have been drafted by a senior vice president in the MPAA (Motion Picture Association of America), however. In August 2004, an updated version of this letter was sent to P2P United, the trade body for P2P companies, urging the companies to ‘take concrete and meaningful steps to address the serious risks posed to the consumers of our States by your company's peer-to-peer (“P2P”) file-sharing technology.’

The movie industry has been shocked by the development of P2P software such as BitTorrent, which facilitates significantly faster distribution of large files than had previously been possible. The scale of the P2P file-sharing problem facing the film industry could potentially be as big as that faced by their counterparts in the music industry, sooner than they had expected. Hence the lobbying of officials like Lockyer. The MPAA have also begun their own campaign to sue individuals.

In June 2004 US lawmakers introduced the INDUCE (Inducement Devolves into Unlawful Child Exploitation) bill. By August 2004, the name of this proposed law was changed to the Inducing Infringement of Copyrights Act of 2004. While the intentions behind the INDUCE act were to regulate P2P copyright infringement, critics said that it could be used to sue libraries or any computer or electronics manufacturer for selling PCs, tape recorders, CD burners, MP3 players or mobile phones.

Lessig would portray all these developments – the court cases and attempts to strengthen the copyright owners' position in law – as examples of his counter-revolution. The Grokster case, even though the company lost in the Supreme Court, suggests that, when it comes to P2P technologies, the battles in the counter-revolution are not all one-sided, particularly if we agree with Pamela Samuelson that it was a very narrow and not entirely welcome victory for the music industry. The counter-revolution (if we agree with Lessig that there is a counter-revolution) by the established industries is a complex story and one that will continue to evolve.

Further reading:

  • IFPI response to Netherlands Supreme Court judgement on Kazaa

  • EDRi-gram on the passing of the EU IPR Enforcement Directive

  • Foundation for Information Policy Research on EU IPR Enforcement Directive

7.20 Consequences of control

The Net potentially increases the exposure of copyrighted material to copyright infringement. This is a serious issue that copyright owners have a legitimate concern about. The Net also potentially:

  • makes it easier to monitor and control how content is used in some circumstances;

  • threatens existing media by providing alternative production and distribution channels.

In most of the cases chosen the courts appear to be protecting copyright owners against internet innovation. Lessig goes further and accuses the courts of giving established business a veto over innovation and new competition. Despite the opportunities for existing industries provided by the Net, old business models are getting the protection of the courts.

Yet it is precisely because we do not know what the future holds that this is not a good idea – we should keep our options open and allow innovation to flourish, rather than allowing large businesses or industries with an interest in maintaining the status quo (at least insofar as it protects their interests) a veto over the future:

In the name of protecting original copyright holders against the loss of income they never expected, we have established a regime where the future will be as the copyright industry permits.

If the music industry, for example:

… has the absolute right to veto distribution it can't control, then it can strike deals with companies offering distribution that won't threaten the labels' power.

Lessig believes that there are, however, choices that can be made by ordinary people as consumers and voters, and by courts and governments as policymakers, which can influence the control that the old exerts over the new. One example is the choice that was made by the US government in the cases of the ‘player piano’ and cable television (the ‘first Napster’) – the ‘compulsory licence’. Under the compulsory licence Napster, MP3.com and others would get the right to use copyrighted material in innovative ways but they would have to pay the copyright owners a rate set by the licence. As long as the rate is reasonable, for both copyright owners and innovators, the copyright owners get compensated. But they get ‘compensation without control’.

"Overprotecting intellectual property is as harmful as underprotecting it. Creativity is impossible without a rich public domain."

(Judge Alex Kozinski in Vanna White v Samsung Elecs (1993))

Further reading: Some very smart people have been thinking about alternative compensation systems for copyright holders, including William Fisher at the Berkman Center at Harvard. A draft chapter of his book, Promises to Keep: Technology, Law and the Future of Entertainment, published in 2004, is available online.

7.21 Controlling innovation with patents

To date we have been focusing almost exclusively on the copyright element of intellectual property law. Patents are also very important. This section briefly examines some of the issues.

According to the UK Intellectual Property Office:

A patent is an exclusive right granted by government to an inventor, for a limited period, to stop others from making, using or selling the invention without the permission of the inventor.

In the UK we are theoretically prevented from patenting ‘a scheme or method for doing business’. However, business process patents have been granted in the US and Europe, of which the most widely known is Amazon's ‘one-click’ patent (US Patent No. 5,960,411 ‘Method and system for placing a purchase order via a communications network’). In 1998 the US Patent and Trademark Office issued 125 business process patents for ways of doing business on the internet. In 1999 there were 2600 applications for ‘computer-related business method’ patents and the numbers have been increasing year by year since then. Online ordering with a credit card can only be done in a limited number of practical ways, so granting someone a monopoly right over such a process could create a problem for commerce.

In the UK, theoretically, a computer program alone cannot be patented. There are some complex exceptions to this and software patents are the subject of much controversy both in Europe and the US. Software patents are allowed in the US, despite a 1972 Supreme Court decision which stated that a computer program was not patentable (Gottschalk, Acting Commissioner of Patents v Benson et al., 409 US 63 (1972)). They have also been granted in the EU. Article 10 of the 1994 international TRIPS (Trade-Related Aspects of Intellectual Property Rights) agreement declares that computer programs ‘shall be protected as literary works under the Berne Convention’; this suggests that software is protected by copyright, not patents.

Lessig covers patents briefly in Chapter 11 and makes a number of observations about them:

  • Some patents are good and some are harmful.

  • The US Patent Office, without appropriate authority, expanded the scope of patents, i.e. the kinds of thing that patents can cover.

  • The US Patent Office is under-resourced and individual patent examiners over-worked.

  • The 1998 State Street Bank appeal court case opened the floodgates for ‘business process’ patents.

  • Patents are not the only incentive for innovation – being first to market is one good example of an alternative.

  • The patent system is supposed to encourage inventors to reveal their invention to the public, yet software writers do not have to reveal the source code to patented software.

  • Patent litigation is expensive, so only those with deep pockets can use the system.

  • The patent system inhibits innovators because too many people can block their ideas – it is a sort of anti-commons where everyone has the right to block the use of a resource by others.

Lessig's main problem with the current state of the patent system in the US is that the scope of patents has become too broad. The result is that rather than covering the practical application of an idea, patents are covering ideas themselves. It is equivalent to saying that someone can patent the idea of fishing and then demand royalties from anyone who goes fishing, regardless of the method they use to fish.

7.22 Gene patents

The debates surrounding patents become even more controversial when we look at the area of biotechnology and genetics. How are gene patents relevant to a course on law, the internet and society? Genetic research has turned animals and plants into information maps. As James Boyle so eloquently puts it in his book Shamans, Software and Spleens, we can now deal with DNA as ‘a language to be spoken, not an object to be contemplated.’ Genetic material, in other words, is now treated primarily as information. It is therefore subject to processing via the internet and to the vagaries of intellectual property laws – laws governing copyrights, patents, design, trademarks and trade secrets. A 1980 US Supreme Court case, Diamond v Chakrabarty, 447 US 303 (1980), established that living, genetically engineered organisms could be patented, opening the way for patents on human DNA sequences. Could we be required in future to pay someone a licence fee just for using our own DNA? This is unlikely given the prevailing values in society but we do not know what the future holds.

In February 2000, a patent on a human gene was issued to Human Genome Sciences in Maryland. The patent means that no one else can do research on the gene without permission. The company says it makes the gene available to commercial companies at a price and academic researchers at no cost. The gene makes a protein that acts as a ‘receptor’ for a virus, i.e. the virus grabs this protein to enter a cell and begin infecting the host.

Some people repeatedly exposed to the HIV virus that causes AIDS do not develop the disease. In 1995, researchers discovered that the protein (called CCR5) that acts as the virus grabber was ‘defective’ or different in these people and would not allow the virus to take hold. In isolating the gene they also discovered that if the gene was defective it produced the defective protein. Human Genome Sciences had applied for a patent on the gene in the mid-1990s, although at the time they did not know what the gene could do. They had therefore isolated and identified the gene before it was known to have a role in the progression of an AIDS infection. Once the patent was granted, if anyone wanted to do research with this gene, in order to develop an AIDS treatment, they would have to pay a licence fee for the privilege. It is possible that more than one licence will be required, because there have been further applications for patents on processes to produce HIV-inhibiting drugs, based on the function of this CCR5 protein.

Let us suppose that a company has developed a drug which is considered by doctors to be the only treatment available to save the life of a particular AIDS patient. What would happen to the patient if the holders of the CCR5 or a related patent obtained a court order to suspend the sale of the drug because they believed it infringed their patent? This is a worst-case scenario, but a similar situation was the subject of litigation in the US in relation to a cancer treatment (Johns Hopkins University et al. v CellPro, which ran through various courts from 1995 to 1998). In a more recent case a US biotech company, Myriad Genetics, threatened to sue the province of Ontario in Canada. The company holds patents on genes that can indicate a predisposition to develop breast cancer. The Organic Consumers Association has a web page with more information. There is a lot of money involved in these disputes, but, as you can see, the issues at stake go well beyond money.

Note: The development of AIDS is much more complex than I have outlined above and the CCR5 receptor is just one of the ways in which HIV is thought to infect its victims. The human body is a hugely complex system, and attributing a direct cause-and-effect (e.g. susceptibility to HIV) process to a single gene is a vast oversimplification.

"Increases in intellectual property rights are likely to lead, over time, to concentration of a greater portion of the information production function in the hands of large commercial organizations …"

(Yochai Benkler (1998))

Further reading: Legal jargon warning – the following links are to legal documents outlining court decisions in important internet-related patent cases.

  • State Street Bank v Signature Financial Group

  • Amazon v Barnes & Noble (one-click patent) (Note: in the autumn of 2007, the US Patent and Trademark Office decided to throw out Amazon's one-click patent following a challenge by Peter Calveley.)

  • BT v Prodigy (hypertext linking patent)

  • Rambus v Infineon

  • MercExchange v eBay – A judge in August 2003 ordered eBay to pay $29.5 million in damages for patent infringement. The case was appealed and heard by the US Supreme Court in 2006, which ruled against MercExchange's request for an injunction preventing eBay using, for example, the electronic button at the heart of part of the dispute. It went back to a lower court, which also refused the injunction in July 2007.

  • In August 2003, Eolas Technologies won a patent infringement case against Microsoft. A jury awarded Eolas and joint plaintiff the University of California $520 million. Microsoft appealed through the courts and the Patent Office. After various developments in the Patent Office and bouncing up and down through the courts, in 2007 a US Supreme Court decision weakened the Eolas case somewhat and the two parties then settled out of court by August 2007, with confidentiality agreements keeping the details hidden. The patent in this dispute goes to the heart of how the Web operates and it is as yet unclear what the longer-term implications of this case might be. The articles ‘The True Believer’ and ‘Will browser verdict snare others?’ provide more details.

7.23 Commerce and architectures of identification

Chapter 10 of The Future of Ideas deals with the counter-revolution through architecture. However, the argument in the chapter is difficult to follow in places, and you do not need to read it to complete this unit. Instead a short summary of the story is provided below.

7.23.1 Chapter 10 summary

Lessig says, in Chapter 10, that as the internet increasingly migrates to broadband (high-speed) use, via cable networks for example, it is moving to networks with more controlled architectures. On cable, content flows faster downstream than upstream (i.e. information flows faster from the cable company to the consumer than in the other direction). Broadband cable owners are not subject to the same legal restrictions as narrowband telephone network owners to keep their networks open. They have no obligation or incentive to operate open e2e architectures. As companies like Cisco develop technologies of control – such as policy-based routers – network owners will deploy them, enabling an evolution of the internet to a much more controlled architecture at the code layer.
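To make the idea of a policy-based router a little more concrete, here is a toy Python sketch of the kind of decision such a device makes: different classes of traffic get different treatment, which is precisely what an end-to-end network refuses to do. It is not a description of any real Cisco product; the traffic categories and bandwidth figures are invented for the illustration.

  # Toy sketch of 'policy-based' traffic handling: the network owner, not the
  # end points, decides how each kind of traffic is treated.
  # The categories and figures below are invented for the example.
  def policy_for(flow):
      """Return a (priority, bandwidth_cap_kbps) decision for one traffic flow."""
      if flow["application"] == "own_video_service":
          return ("high", None)      # the owner's own service: priority, no cap
      if flow["application"] == "p2p_file_sharing":
          return ("low", 64)         # competing or unwelcome traffic: throttled
      return ("normal", 512)         # everything else: a default allowance

  flows = [
      {"application": "own_video_service", "subscriber": "A"},
      {"application": "p2p_file_sharing", "subscriber": "B"},
      {"application": "web_browsing", "subscriber": "C"},
  ]

  for flow in flows:
      print(flow["application"], "->", policy_for(flow))

The point is simply that once the network can identify and discriminate between kinds of traffic, the owner rather than the end points decides what runs well.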

As network owners await the development of these technologies of control, they are attempting to neutralise the competitive effect of the internet's code layer by controlling access to, and use of, their networks. One example is employing limited numbers of approved ISPs through which internet users can access their networks. The hope is that by providing controlled access points, or ‘chokepoints’, they can control the traffic on their networks.

This rational business behaviour may have a negative effect on the internet as an innovation commons.

7.24 General commercial incentives to change internet architecture

So far we have been focusing on the cable and telecommunications companies and the entertainment industry in our look at Lessig's counter-revolution. Now we need to think about commerce more generally. What kind of architecture or code layer does any business need in order to be able to do business reliably via the internet?

Well, in order to generate confidence and trust in business transactions and to encourage the growth of internet commerce, such an architecture would need to include the following (a small illustration of one of these properties follows the list):

  • authentication – you are who you say you are;

  • authorisation – you have the authority to spend £x or $y;

  • privacy – communications cannot be read by unintended third parties;

  • integrity – transmission not altered en route (someone receiving the message should be able to check whether it has been interfered with);

  • non-repudiation – you can't deny it was you who committed to the deal.
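As a rough illustration of just one of these properties, integrity, the short Python sketch below uses a keyed hash (HMAC) so that the recipient of a message can detect tampering. It is a sketch only, not a description of how any particular e-commerce system works, and the key and messages are invented for the example.

  # Minimal sketch of message integrity checking with a keyed hash (HMAC).
  # The shared key and the messages are invented for the example.
  import hashlib
  import hmac

  def make_tag(key: bytes, message: bytes) -> str:
      """Compute an HMAC-SHA256 tag to accompany the message."""
      return hmac.new(key, message, hashlib.sha256).hexdigest()

  def verify(key: bytes, message: bytes, tag: str) -> bool:
      """Recompute the tag and compare it in constant time."""
      return hmac.compare_digest(make_tag(key, message), tag)

  shared_key = b"secret shared between buyer and seller"       # hypothetical key
  order = b"Pay 20 pounds to account 12345678"                 # hypothetical message

  tag = make_tag(shared_key, order)
  print(verify(shared_key, order, tag))                           # True: message unaltered
  print(verify(shared_key, b"Pay 900 pounds to account 1", tag))  # False: message tampered with

Note that a shared-key check like this gives integrity, and authentication between the two parties holding the key, but not non-repudiation: either of them could have produced the tag. Non-repudiation requires public-key digital signatures, which is where the digital certificates discussed below come in.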

Essentially, business needs architectures of identification. In the real world we can tell something about people through looking at them and knowing them or their reputation as part of a community. We can, for example, tell whether or not someone is a child, with a reasonable degree of confidence, just by looking at them. Hence we can make a judgement about whether to sell cigarettes to someone, having made a reasonable assessment about whether that person is legally old enough to buy them. What we don't know needs to be verified through documents like driving licences and passports, as well as trusting to some degree what people tell us.

On the Net, things are different. There is no equivalent to sizing someone up when we see them; and on the basic network there are no universally deployed driving licences or passports to help us learn about the users. This is changing. AOL (with e-wallet and related authentication services), Microsoft (with ‘Passport’ and ‘Next Generation Secure Computing Base’, previously ‘Palladium’) and the Intel-led Trusted Computing Group (previously called the Trusted Computing Platform Alliance) are among those attempting to drive the change.

If doing business depends on trust, which itself depends on identity, certification of identity and a relationship built up over time, then cyberspace gives commerce a problem. Therefore, ordinary businesses – not just the big entertainment, cable or software companies – have incentives to do something about the identity problems thrown up by the Net. They have incentives to support industry initiatives which will lead to a more business-friendly architecture or code layer.

There are three architectures of identification with which you may be familiar (a small sketch of the second follows the list):

  1. Password and account name. You need a password and account name to get into your online banking system, for example.

  2. Cookies – small pieces of data that a website asks your browser to store on your computer and send back on later visits to that site.

  3. Digital certificates – an online passport that verifies that you are who you claim to be and contains lots of certified facts about you. Digital certificates depend on cryptography.
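As promised above, here is a small sketch of the second of these, using only Python's standard library, showing how a site's ‘Set-Cookie’ header becomes data that your browser stores and quietly returns on every later visit. The cookie name and value are invented for the example.

  # Minimal sketch of how a cookie lets a site recognise a returning visitor.
  # The cookie name and value are invented for the example.
  from http.cookies import SimpleCookie

  # 1. On a first visit, the site's response carries a Set-Cookie header.
  response_cookie = SimpleCookie()
  response_cookie["visitor_id"] = "abc123"      # hypothetical identifier chosen by the site
  response_cookie["visitor_id"]["path"] = "/"
  print(response_cookie.output())               # Set-Cookie: visitor_id=abc123; Path=/

  # 2. On every later visit, the browser sends the stored value back,
  #    so the site can link today's browsing to earlier visits.
  returned = SimpleCookie()
  returned.load("visitor_id=abc123")            # the Cookie header the browser sends
  print(returned["visitor_id"].value)           # abc123

It is exactly this ability to recognise a returning browser that makes cookies so valuable to advertising networks such as DoubleClick, discussed later in this section.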

"The internet isn't free. It just has an economy that makes no sense to capitalism."

(Brad Shapcott)

Further reading:

  • Cambridge University security expert Ross Anderson has compiled a comprehensive set of frequently asked questions on Palladium and TCPA, which also touches on digital rights management (DRM).

  • Seth Schoen's trusted computing. Note: you'll need to scroll down the page a little to get to the relevant bit.

  • Edward Felten's armoured car analogy on DRM.

  • There is a useful website, CookieCentral.com, that explains everything you need to know about cookies.

7.25 The implications for privacy of changes in architecture

A US privacy advocate, Jeffrey Rosen, believes privacy is important because it:

… protects us from being misdefined and judged out of context in a world of short attention spans, a world in which information can be confused with knowledge.

Others don't place so much importance on privacy and see it as an outdated concept.

7.25.1 Privacy and why the Net changes things

A couple of hundred years ago, if we wanted to have a private conversation with someone, we could walk out into the middle of a field with them, have a look around to see there was no one near, and securely chat away in private. Things have changed a bit since then. We've had the telegraph, the telephone, satellites, wiretapping, James Bond-type audio bugs and video cameras that let us track, hear and see everything the bad guys are plotting and doing; radio and television and Oprah Winfrey broadcasting intimate details of people's personal lives; and the computer and the internet, among other things.

Technology will not necessarily discriminate and it will also let the bad guys monitor the good guys. So, is privacy good or bad? It is very hard to say, because ‘good’ and ‘bad’ are value judgments. ‘Privacy’ is almost as hard to define as ‘good’ and ‘bad’. In his first book, Code and Other Laws of Cyberspace, Lessig talks about privacy as the power to control what others can come to know about us.

Justice Brandeis, in a famous US Supreme Court case Olmstead v US (1928), said privacy is ‘the right to be left alone – the most comprehensive of rights, and the right most valued by a free people.’

On the surface it might seem to be inherently ‘good’ that we should be able to control what others can come to know about us. The notion of privacy as a right or something good is contested, however. There is a long tradition of prohibiting links between certain people. For example, a Catholic priest is not allowed to get married. Would it be alright, therefore, for a priest to get married and keep that information private or secret? Is it OK for people with deviant behavioural tendencies (e.g. criminals) to get together in private and control what others (e.g. the police) can come to know about them? Would it be easier if no one had any privacy, so that the people with something to hide, like criminals, would be exposed and easier to deal with?

What constitutes privacy is itself a value judgement, and whether it is good is also a value judgement and depends on the context. You have to decide for yourself whether it is a value that is important to you.

Why does the Net change things?

Monitoring, recording, processing power, computers, the internet and very powerful database filtering tools make it possible to find out all sorts of things about people. It is relatively difficult to search and correlate paper data. The power of computers to do ‘clever’ things with data is phenomenal, and the internet allows data to be searched remotely and merged with data from other databases. Credit card companies have a huge amount of information about people on their databases – what we buy, where, when and what we eat, our choice of entertainment and holidays. They can use this to predict what we might be likely to spend money on in the future. For example, the online retailer Amazon provides personalised ‘instant recommendations’ based on the items you have previously purchased from them.
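To see how simple the underlying mechanics of such profiling can be, here is a toy Python sketch that merges two invented data sets – an online purchase history and an offline catalogue record – on a shared key, in this case an email address. Real systems are vastly more sophisticated, but the principle of joining previously separate records is the same.

  # Toy illustration of merging two data sets on a shared key.
  # All names and records below are invented for the example.
  online_purchases = {
      "alice@example.com": ["jazz CD", "walking boots"],
      "bob@example.com": ["thriller novel"],
  }
  offline_catalogue_orders = {
      "alice@example.com": {"postcode": "MK7 6AA", "spend_last_year": 420},
  }

  # Merge the two sources into a single profile per person, keyed on email address.
  profiles = {}
  for email, items in online_purchases.items():
      profiles[email] = {"bought_online": items}
      profiles[email].update(offline_catalogue_orders.get(email, {}))

  print(profiles["alice@example.com"])
  # {'bought_online': ['jazz CD', 'walking boots'], 'postcode': 'MK7 6AA', 'spend_last_year': 420}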

There are no easy jurisdictional or centralised constraints because the data flows don't recognise jurisdictional borders. Data can be collected, processed and used on a scale not previously imagined. What's more, it is cheap to do it and getting cheaper.

As mentioned earlier in Section 7.3, Chapter 2 of the Regulation of Investigatory Powers Act 2000 (RIPA), in the UK, makes it possible for a public authority to obtain details of someone's clickstream (the series of links we click on and web pages we go to when using the internet) without a judicial warrant. This facilitates the observation and (possibly limited) control of someone's behaviour when using the Net. If we know we are being observed we are more careful about what we say and do. Technically the security services need a warrant to read the content of specific communications but there is no such restriction on monitoring the pattern of someone's internet travels – a good picture of someone's activities can be built up by tracking who they are corresponding with and when, how long the messages are, what kind of websites they visit, etc. This is one example of law enabling the use of privacy-invading technologies, albeit with the intention of aiding law enforcement agencies with their work.
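The following Python sketch hints at why traffic data alone, with no content read at all, can be so revealing. The log entries are entirely invented; the point is simply that timings and destinations build up a picture of a person's life.

  from collections import Counter

  # Invented log of (timestamp, site visited); no content is inspected.
  clickstream = [
      ("2008-03-01 23:10", "jobsearch.example.org"),
      ("2008-03-01 23:25", "jobsearch.example.org"),
      ("2008-03-02 01:05", "onlinecasino.example.com"),
      ("2008-03-02 01:40", "onlinecasino.example.com"),
      ("2008-03-02 01:55", "debt-advice.example.net"),
  ]

  sites = Counter(site for _, site in clickstream)
  late_night = sum(1 for ts, _ in clickstream
                   if ts.split()[1] >= "23:00" or ts.split()[1] < "06:00")

  print(sites.most_common())                  # which sites are visited, and how often
  print(late_night, "of", len(clickstream), "visits were late at night")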

So the Net changes things because it fundamentally changes the boundaries of what it is possible to do with surveillance – the scale and the speed at which data can be collected and processed, the intelligence behind that processing and the apparent absence of constraints on all this.

We saw in Section 7.24 that there are incentives for businesses to support the development of architectures of identification. Commerce and government would both like to see architectures of identification at the code layer. Such a code layer potentially makes privacy invasion easier. It seems that there are careful policy choices to be made about changes to internet architecture to accommodate business while weighing up the implications for personal privacy.

7.26 Privacy suits: DoubleClick and Toysmart

Some examples should help to illustrate some of the problems associated with privacy and the internet. Privacy suits taken out against DoubleClick and Toysmart are important illustrative cases with respect to the influence of commerce on privacy. They raise questions about the kind of architecture we should be striving for, somewhere along the continuum from open-anonymous to closed-identifiable.

Late in 1999, the largest internet advertising company, DoubleClick, effectively changed its policy on tracking websurfers anonymously. In the summer of 1999 they bought out the direct marketing (junkmail and catalogue) company Abacus Direct, which reportedly had a database profiling the spending habits of 90 million US households. DoubleClick planned to cross-match the Abacus database with its own, building profiles of online and offline consumer habits. Privacy groups were angered by the move, as DoubleClick had maintained that their technology allowed internet users to remain anonymous. The company won Privacy International's ‘Big Brother Award’ in April 2000, in the Greatest Corporate Invader category ‘for monitoring the surfing of 50 million Net users.’ Some consumers and the State Attorneys General of New York and Michigan sued the company. ZDNet published a report in March 2000 on how the DoubleClick cases could change US law. It did not change the law – the New York Times reported in April 2001 on DoubleClick's victory in the case. (Note: the New York Times requires you to register and accept cookies to view their articles.)

The Federal Trade Commission (FTC) also investigated DoubleClick's data collection practices after a formal complaint from the Electronic Privacy Information Center (EPIC). With the FTC review and the court cases, DoubleClick shelved the plan to merge the online and offline profiling information until ‘government and industry privacy standards’ could be developed. The company was also responding to the huge drop in its share price which may have been due to the negative publicity over privacy (Wired ran a story on this at the time). The FTC investigation was closed in early 2001. The FTC wrote to DoubleClick saying they believed that the company abided by its own privacy policy:

Doubleclick never used or disclosed consumers' PII [personal identifying information] for purposes other than those disclosed in its privacy policy.

They also said that the closing of the investigation:

… is not to be construed as a determination that a violation may not have occurred, just as the pendency of an investigation should not be construed as a determination that a violation has occurred. The Commission reserves the right to take such further action as the public interest may require.

Legalese often gets tied up in ‘nots’ like this, making it confusing – it just means that the FTC are still sitting on the fence, i.e. they are not siding with DoubleClick or the company's critics, such as EPIC. If you are interested you can read the FTC letter to DoubleClick's lawyer. By the summer of 2001 DoubleClick unveiled a new privacy policy and asked for comments on it ‘to make sure that our policy provides awareness of our services and technologies in a way that is easy to understand.’

By the summer of 2007 DoubleClick was in line to be taken over by Google. Since Google's chief executive, Eric Schmidt, had declared that one of the company's key goals was to collect as much personal information about individual users as possible, this proposed takeover became the object of much concern among civil rights activists. The takeover was examined by US and EU regulators, with the US Federal Trade Commission approving the deal in December 2007 and the European Commission following suit in March 2008.

The Federal Trade Commission sued Toysmart in the summer of 2000. In May of that year, the company had filed for bankruptcy and planned to sell confidential customer information. This was in violation of its own privacy policy. You can see the FTC's reasoning for taking the action in their press release at the time. The two sides came to an agreement that Toysmart could sell the information if the buyer agreed to uphold the original privacy guarantees provided by the company. Then the bankruptcy court judge overturned the deal. Next, Buena Vista Internet Group, a subsidiary of the Walt Disney Co. and the majority shareholder in Toysmart, offered to pay the company about $50 000 to destroy the information. The bankruptcy judge partly agreed to the plan early in 2001. However, rather than having the information immediately destroyed, she required that the company's lawyers retain it until all claims against the company had been settled. The $50 000 would then be divided among the company's creditors, with the lawyers having to provide a legal document (an affidavit) declaring how the information was destroyed.

What each of these cases and the preceding discussion again illustrate is the need to make policy choices – about how commerce is handled via the internet and how far the need to facilitate e-commerce can be allowed to erode traditional notions of personal privacy.

I have hardly scratched the surface of the issues related to privacy and the internet. There have been a huge number of developments in the area since Lessig published his book, the most important, arguably, being those developments arising out of the tragedies of September 11, 2001. These include a whole host of anti-terrorism legislation (such as the USA/PATRIOT Act and the Anti-Terrorism Crime and Security Act in the UK), the national identity card proposed by the UK government, and the ‘Secure Flight’ passenger screening system (which replaced the Computer Assisted Passenger Pre-screening System (CAPPSII)), which aims to improve airline security.

The cost of manufacturing microscopic radio frequency identification (RFID) tags has also come down to the extent that it now makes economic sense for commerce to deploy them on a large scale. The Guardian reported (Tesco tests spy chip technology) on a trial run with these chips at Tesco in Cambridge.

In a CNET News report, a Californian senator worried, ‘How would you like it if, for instance, one day you realised your underwear was reporting on your whereabouts?’

Further reading: the debates in relation to privacy also tend to get quite polarised, but there are quite a few useful information websites and books in the area should you decide to look into the matter further. For example:

  • Privacy International

  • EPIC

  • Center for Democracy and Technology

  • The Control Revolution, by Andrew Shapiro

  • The Internet, Law and Society, edited by Yaman Akdeniz, Clive Walker and David S. Wall

  • The Unwanted Gaze: The Destruction of Privacy in America, by Jeffrey Rosen

  • Database Nation, by Simson Garfinkel

  • Secrets and Lies, by Bruce Schneier

  • Beyond Fear: Thinking Sensibly About Security in an Uncertain World, by Bruce Schneier

Look out also for stories in the media about the handling and transfer of EU airline passenger data to US authorities and the use of the so-called CAPPS or CAPPSII database in the US. The CAPPSII program was declared dead by the US Secretary for Homeland Security in July 2004 but it was replaced by a similar scheme operating under a different name, ‘Secure Flight’. By 2006 Secure Flight had been subjected to several government reviews (including reviews by the Department for Homeland Security and the General Accounting Office), all of which basically found it unfit for purpose. Though it was temporarily suspended in 2006, Secure Flight was re-introduced and, as of the autumn of 2007, the program was still being used, with budget plans to spend a further $38 million on it in the next financial year. US Customs have also for some years been operating an ‘Automated Targeting System’ which assigns a score to all travellers entering or leaving the US indicating what level of terrorist threat they pose. There is very little public information about how the programme operates, but the data collected can be shared with a range of federal or local authorities and even private organisations. It seems that the only people who can't get the information are those travellers to whom it relates. The data is also subject to retention for 40 years.

7.27 Activity 6

This activity aims to demonstrate how much information we give out when surfing the internet.

Activity 6: Do your own privacy test

Click on the following links and make a note of the information they provide about you and your computer:

  • Leader's Smart Guide

  • Junkbusters alert on web privacy

The first will give a list of the following collected information – reported remote internet protocol (IP) address, your browser, your operating system (e.g. Windows XP), the referrer (the web address you linked from – in this case, the page containing the link), whether a proxy was used, the nearest proxy, and your ISP or your employer's client IP address and mail server. In other words, they know the kind of software you are using and probably your location.

Junkbusters does a similar analysis and provides a link to the type of information about people that gets processed without their knowledge. It should be noted that Junkbusters is a campaigning organisation.
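If you want a feel for where those sites get their information, the short Python sketch below lists the kind of request headers a typical browser sends with every page view, together with the connection detail a server can log. All the values are invented examples rather than output from the sites mentioned above.

  # Invented examples of what a browser sends and what a server can log.
  request_headers = {
      "Host": "www.example.com",                       # the site being asked for a page
      "User-Agent": "Mozilla/5.0 (Windows NT 5.1) Gecko/20070101 Firefox/2.0",  # browser and OS
      "Referer": "http://www.example.org/links.html",  # the page the visitor clicked the link on
      "Accept-Language": "en-GB,en",                   # preferred languages
      "Cookie": "visitor_id=abc123",                   # any identifier the site set on an earlier visit
  }
  connection_details = {
      "remote_ip": "192.0.2.17",                       # usually traceable to an ISP or employer
  }

  for name, value in {**request_headers, **connection_details}.items():
      print(name + ": " + value)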

Now read Chapter 1 of the Center for Democracy and Technology's (CDT) Guide to Online Privacy. Note: Whereas consumers' associations are sometimes accused of being ‘anti-business’, the same cannot be said of the CDT, which is very much in favour of facilitating an environment where business can thrive. They are also funded by many large organisations, such as AOL, Microsoft, Ford, IBM and the RIAA, among others.

Now read Fourteen Ways to Protect Your Privacy Online.

The object of this activity is for you to act on the advice given by the CDT. See how many of the 14 tips you can act on in one hour (the final tip, to ‘use common sense’, is probably the most important of the lot).

For example, under item 3, ‘Get a separate account for your personal e-mail’, you could sign up for a free web-based email account at Cyber Rights.

Try out your new-found privacy awareness at no more than four of the following websites. Focus on:

  • the information the site already knows about you;

  • the information it is trying to collect;

  • whether it is intrusive.

Record your findings in your Learning Journal.

  • Electronic Privacy Information Center

  • Foundation for Information Policy Research

  • Powell's Books

  • Amazon

  • NASDAQ

  • Google

  • eBay

  • Arts and Letters Daily

  • The Irish Times

  • New York Times

  • The Times

  • Atlantic Monthly

  • Electronic Telegraph

  • BBC News

  • RTE News

  • MP3.com

  • Hacker 2600 Magazine

  • Slashdot

  • Linux Journal

  • Clay Shirky's writings about the internet

  • Walt Mossberg's Personal Technology Column from the Wall Street Journal

  • Recording Industry Association of America (RIAA)

  • IETF

  • Bodleian Library, Oxford

  • Project Gutenberg

  • The Science Museum, London

  • GILC

  • The Ethical Spectacle

  • Encyclopedia Britannica online

  • Electronic Libraries online encyclopedia

  • The Stanford Encyclopedia of Philosophy

  • Free Online Dictionary of Computing

  • World Wide Words – Exploring the English language

  • Project Diana

  • ICANNWatch

"Privacy is like oxygen. We really appreciate it only when it is gone."

(Charles Sykes)

7.28 Summary and SAQs

This section was largely about how established industries are using law and architecture to counteract threats created by innovations on the internet. Lessig sees this counter-revolution succeeding at the expense of innovation. He suggests compulsory licensing as a possible way of balancing the needs of existing businesses with those of internet innovators, allowing ‘compensation without control’.

As the internet increasingly migrates to broadband (high-speed) use, via cable networks for example, it is moving to networks with more controlled architectures – remember, on cable content flows faster downstream than upstream. Broadband cable owners have no obligation or incentive to operate open e2e architectures.

We have also looked at how these developments might affect an area of public policy like privacy. There are general incentives for commerce to support the development of a business-friendly architecture for the internet. Such an architecture leads to increased use and processing of personal data, which has a direct impact on privacy.

These changes may have a negative effect on the internet as an innovation commons.

7.28.1 Self-assessment Questions

1. Why are commercial enterprises protecting their property described as ‘dinosaurs’?

Answer

Primarily as a rhetorical tactic to paint them as old and as candidates for extinction, because Lessig sees them as killing the potential of the internet to continue to act as a source of innovation.

2. Did Napster have any real value other than as a tool for stealing music?

Answer

Some uses for Napster other than stealing music:

  • listening to music freely released on the Net;

  • listening to music in the public domain not subject to copyright restrictions;

  • listening to other kinds of authorised works, e.g. recordings of lectures (recall, the network administrators at Stanford locked Lessig out of the university's network when they discovered he was using peer-to-peer file sharing; ironically he was using it to distribute notes to his students);

  • sampling – whereby people check out music to see whether they like it before buying;

  • space shifting – in the US – downloading MP3 files of music that you already own on CD onto your computer for personal use (note: making extra copies for personal use is not allowed in the UK);

  • potentially the celestial jukebox described in Chapter 8 of The Future of Ideas.

3. How can laws regulating architecture control behaviour?

Answer

A law on disability discrimination mandating that buildings be accessible to people in wheelchairs is one example. Another is the UK Regulation of Investigatory Powers Act 2000, under which internet service providers can be forced to install equipment to facilitate surveillance by law enforcement authorities.

4. What are the consequences of control described in The Future of Ideas?

Answer

As the internet comes under the control of large organisations through law and architecture, Lessig sees its potential as an innovation commons diminishing. The result he predicts is that technological progress will slow down and potentially beneficial disruptive innovation will be greatly reduced.

5. How can the internet code layer affect privacy?

Answer

Commerce needs a code layer (or architecture) which is business friendly. This means it must facilitate identification, authentication, authorisation, integrity of transmission and non-repudiation. This in turn leads to increasing amounts of personal information being transmitted, processed, manipulated and stored, which has implications for personal privacy. Campaigning organisations like the Electronic Privacy Information Center provide access to lots of resources indicating that this should concern us. Organisations like the Direct Marketing Association defend the benefits of better-informed businesses being much better able to meet individual consumer needs.

"The right to be left alone – the most comprehensive of rights, and the right most valued by a free people."

(Justice Louis Brandeis, defining privacy in the Olmstead v US Supreme Court case in 1928)

"How many of you have broken no laws this month? That's the kind of society I want to build. I want a guarantee – with physics and mathematics, not with laws – that we can give ourselves real privacy of personal communications."

(John Gilmore)

Make sure you can answer this question relating to Section 7 before you continue:

  • How have established industries responded to the innovation enabled by the Net?

Record your thoughts on this in your Learning Journal.

7.29 Study guide

On completing this part of the unit, you should have:

  • read Section 7;

  • read Chapter 11 from page 180 to 205 of The Future of Ideas;

  • completed Activity 6;

  • answered the self-assessment questions;

  • recorded your notes in your Learning Journal.

8 The architecture of the internet

8.1 Balance and control

We have examined the story of innovation as related to the internet, using the perspective of Lawrence Lessig's book, The Future of Ideas, as a guide. In summary:

  • the internet led to an explosion of innovation;

  • this innovation was the result of the internet's innovation commons: e2e architecture (code layer); shared resources; laws keeping the phone networks open for use by internet innovators;

  • this innovation constituted a revolution which was, and continues to be, perceived as a threat by established commercial interests;

  • those interests acted to deal with the effects that this innovation had on their markets (Lessig's counter-revolution);

  • the main tools they used were law and architecture.

The law that we have been primarily concerned with is intellectual property law. Intellectual property rights are inherently paradoxical. Intellectual property theoretically produces an incentive to produce (create/innovate) new information, the distribution of which is in the public interest. Yet it generates this incentive theoretically by restricting access to the new information produced. So it is important for a balance to be struck between the rights of the creators and the rights of the public to access their creations. Economists would describe this as striking a balance between the ‘incentive function’ and the ‘distributive function’ of intellectual property. So if a small number of powerful actors have excessive influence over the shaping of intellectual property laws, as Lessig suggests, then this will lead to an imbalance.

Architectural effects can also lead to an imbalance. Robert Moses' bridges demonstrated how architecture can even interfere with individual freedoms. Lessig is concerned that a small number of powerful actors will come to control the architecture of the internet. This concentrated control would lead to an imbalance of power between the controllers and the remaining majority of the internet users, undermining the value of the internet as an innovation commons.

Whether and to what extent Lessig's predictions come true remains to be seen. Since the original publication of the book, Lessig claims that the signs of evolution towards the kind of control he described in The Future of Ideas are even stronger. As part of his fight against this control, Lessig convinced the publishers of his 2004 book, Free Culture, to make it openly available online.

Likewise, Lessig's 2008 book, Remix: Making Art and Commerce Thrive in the Hybrid Economy, is also openly available online.

8.2 The physical layer

At the physical layer, Cisco sell intelligent routers, and Intel and others continue to work on ‘trusted computing’ platforms; their impact is as yet difficult to gauge. The continuing migration to broadband networks is arguably undermining standard access to the old end-to-end internet architecture. Apple launched the iPod in 2002.

8.3 The code layer

At the code layer, Microsoft has released Windows 7 (and Windows Vista prior to that) and is working on the next generation of its operating system. The Federal Communications Commission (FCC) in the US has ruled that cable companies are not telecommunications companies and are therefore not required to provide open access to their networks in the way that the phone companies in the US are obliged to do. The FCC has also declared that all consumer electronic devices capable of receiving digital TV signals must have ‘broadcast flag’ copy protection built in by the summer of 2005. A coalition of groups led by the American Library Association challenged this decision through the courts and in May 2005 the US Court of Appeals for the District of Columbia circuit unanimously ruled that the FCC had no authority to mandate a broadcast flag. There have been various efforts since then to introduce the requirement for a broadcast flag into law, for example as part of the Communications, Consumer's Choice, and Broadband Deployment bill of 2006, though the bill was not passed in the end. Companies like Microsoft and IBM have been working on new internet protocols through standards bodies like the World Wide Web Consortium, W3C. In Europe, a hugely complex set of developments on internet surveillance and data retention to facilitate law enforcement and the combatting of terrorism is ongoing. The US Department of Justice, the FBI and the Drug Enforcement Administration have also requested and been granted (in August 2004) new internet surveillance powers. Following the attacks of 11 September 2001, the Bush administration more generally facilitated widespread telephone and internet surveillance without warrants. President Obama may change some of those activities.

8.4 The content layer

It is probably at the content layer that the developments are most visible, though. Increasing numbers of legal threats and lawsuits are being brought alleging intellectual property infringement and breach of anti-circumvention provisions of law. Companies such as Grokster, Streamcast and Sharman Networks, which owns a number of P2P file-sharing services, have been targeted by the entertainment industry's representative bodies, the Motion Picture Association of America and the Recording Industry Association of America. In June 2005 the P2P companies lost the MGM v Grokster case in the US Supreme Court, though the Dutch Supreme Court (the highest European court so far to hear a P2P case) had ruled in December 2003 that the developers of the Kazaa P2P file-sharing software, Niklas Zennström and Janus Friis, could not be held liable for the actions of the users of the software.

The RIAA and the International Federation of the Phonographic Industry (IFPI) are now suing tens of thousands of individual P2P users. In the autumn of 2007 record companies represented by the RIAA were awarded $222,000 in damages against a woman found by a jury to have shared 24 songs on a P2P network. In the autumn of 2008, however, the judge accepted that he had inadvertently misdirected the jury and declared a mistrial. The RIAA failed to get this decision overturned in December 2008, and at the retrial in June 2009 a new jury increased the damages to $1.92 million. Towards the end of 2008 the RIAA also stated that it intended to change its strategy somewhat, focussing less on suing individuals and more on co-opting ISPs into efforts to combat illicit file sharing.

The IFPI has also sued file sharers in other parts of Europe. The British Phonographic Industry (BPI) did likewise in the UK, with the first 23 people settling out of court for sums of between £2,000 and £4,500 in early March 2005. Both the IFPI and the BPI are now concentrating considerable efforts on getting ISPs involved in their internet copyright war. They are also keen to have a ‘3-strikes’ regime incorporated into law, whereby ISPs would send two warnings to suspected file sharers and then cut off their internet access should the suspicious activity continue. There has been considerable concern about this approach from legal and civil rights experts, and it would arguably contravene the European Convention on Human Rights. (See http://b2fxxx.blogspot.com/2008/03/3-strikes-copyright.html)
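
A minimal sketch of the proposed ‘graduated response’ logic may help make the 3-strikes idea concrete. The Python below is purely illustrative: the class and names are invented for this example and do not describe any real ISP system or any particular legislative proposal.

    # Illustrative sketch only: a toy model of a '3-strikes' (graduated
    # response) regime. All names are hypothetical.
    from collections import defaultdict

    class GraduatedResponse:
        """Track infringement allegations; two warnings, then disconnection."""

        def __init__(self):
            self.allegations = defaultdict(int)  # subscriber id -> allegation count
            self.disconnected = set()

        def report_allegation(self, subscriber_id):
            """Record an allegation from a rights holder and return the action taken."""
            if subscriber_id in self.disconnected:
                return 'already disconnected'
            self.allegations[subscriber_id] += 1
            count = self.allegations[subscriber_id]
            if count <= 2:
                return 'warning %d sent' % count
            self.disconnected.add(subscriber_id)
            return 'internet access cut off'

    # Example: three allegations against the same subscriber
    isp = GraduatedResponse()
    for _ in range(3):
        print(isp.report_allegation('subscriber-42'))
    # warning 1 sent, warning 2 sent, internet access cut off

Note that the logic acts on allegations of infringement rather than on findings by a court, which is one of the features that most concerns the legal and civil rights experts mentioned above.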

In one case a printer company, Lexmark International, sued a competitor, Static Control Components, for allegedly breaching the anti-circumvention provisions of the Digital Millennium Copyright Act 1998 (DMCA). Lexmark alleged that its competitor, in supplying replacement cartridges for some Lexmark printers, bypassed copy control technology built into Lexmark cartridges to enable the printers to identify them as legitimate replacements. In March 2003 a court ordered Static Control to stop making the replacement cartridges. In October 2003 the US Register of Copyrights reiterated the point that reverse engineering copy protection technologies to allow interoperability is permitted under the DMCA. She mentioned the Lexmark case in this context but was very careful to say that ‘wholesale copying of a copyrightable computer program is likely to be an infringing use’, which is partly what Lexmark were alleging. At least one state, North Carolina, was also supporting legislation to combat what they saw as Lexmark's abuse of the DMCA. In October 2004 Lexmark lost their case in the Appeal Court, and the same court denied the company's request for a re-hearing in February 2005.
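
The kind of ‘copy control technology’ at issue is easier to picture with a toy example. The Python sketch below is invented for illustration: the checksum and names bear no relation to Lexmark's actual mechanism, but it shows how a printer can be made to reject cartridges that do not give an expected response, and how a competitor achieves interoperability by reproducing that response.

    # Illustrative sketch only: a toy 'authentication' handshake of the kind
    # disputed in Lexmark v Static Control. The scheme here is invented.
    def cartridge_response(serial):
        """The value a cartridge must return for the printer to accept it."""
        return sum(ord(c) for c in serial) % 251  # toy checksum, not the real one

    def printer_accepts(serial, response):
        """A printer that only works with cartridges giving the expected response."""
        return response == cartridge_response(serial)

    # A third-party cartridge achieves interoperability by computing the same
    # response -- the behaviour Lexmark argued was unlawful circumvention.
    serial = 'CART-0001'
    print(printer_accepts(serial, cartridge_response(serial)))  # True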

There have been moves in the US Congress to introduce bills that would water down what advocates like Lessig see as the draconian provisions of the DMCA, but there is no indication yet that these have any real likelihood of becoming law. At the opposite end of the scale are bills like the CCCBDA (Communications, Consumer's Choice, and Broadband Deployment Act of 2006), ACCOPS (Author, Consumer and Computer Owner Protection and Security Act of 2003), the PIRATE Act (Protecting Intellectual Rights Against Theft and Expropriation Act of 2004) and the INDUCE Act (Inducing Infringement of Copyright Act of 2004), which between them could lead to the jailing of P2P file swappers, the Justice Department pursuing civil legal action on behalf of the entertainment industry, and the facilitation of wire tapping for civil copyright infringement.

Only two European countries, Denmark and Greece, succeeded in implementing the European Copyright Directive (the EU equivalent of the DMCA) into their national laws by the required deadline of December 2002. Austria, Italy and Germany (mostly) followed suit in 2003. In the UK, the implementation was partly delayed by concerns identified through the public consultation exercise initiated by the UK Patent Office. The directive has now been implemented in the UK and took effect at the end of October 2003. By 2005 the Commission were suing France, Spain, Finland and the Czech Republic for failing to implement the directive. In August 2006 France was the final member state to introduce the directive provisions into their own national law. In February 2007 the Commission published a Study on the Implementation and Effect in Member States of Directive 2001/29/EC on the Harmonisation of Certain Aspects of Copyright and Related Rights in the Information Society, concluding:

In sum, it is fair to conclude that the Directive has at best only partly achieved its main goal of promoting growth and innovation in online content services. As our benchmark test has revealed, the Directive deserves particularly low marks for its (lack of) harmonising effect and its (lack of) legal certainty. While the harmonised right of communication to the public is a model of technology-neutral regulation, the Directive's convoluted rules on TPM's have little more to offer to the Member States and its market players than confusion, legal uncertainty and disharmonisation. While the Directive's regime reflects the EC's faith in a future where DRM and contract rule, it is ironical to observe that the main stakeholders themselves seem to have lost their belief that the answer to the machine actually lies in the machine.

In March 2004 an ‘Enforcement of intellectual property rights’ directive was approved by the EU Parliament and the Council of Ministers in order ‘to bolster the fight against piracy and counterfeiting’. Essentially the idea is to apply the various enforcement mechanisms available in individual member states to every country in the EU and also to broaden the criminal sanctions. Proposals to allow the patenting of software have also been debated at European level.

There have been a multitude of developments on intellectual property at EU level since 2005, including proposals to extend the copyright term in sound recordings and to introduce an EU-wide 3-strikes regime. Likewise, in the US there have been endless proposals and several changes in the law. The whole intellectual property landscape is in a state of almost constant upheaval.

8.5 Update on patents

In May 2003 eBay were ordered by a judge to pay $29.5 million in damages for infringing two patents held by MercExchange. MercExchange also sought an injunction that would have forced eBay to remove its infringing ‘Buy it now’ fixed-price feature from its website. The case was appealed and eventually heard by the US Supreme Court in 2006, which rejected the idea that an injunction should automatically follow a finding of patent infringement and sent the dispute back to a lower court; that court refused the injunction in July 2007.

More importantly, from a web architecture perspective, in July 2003 Microsoft were ordered to pay Eolas Technologies and the University of California $520 million for patent infringement related to the use of plug-in programs that allow websites to have fancy features like animations. James Grimmelmann at Yale University's Lawmeme has an amusing perspective on this. An Appeal Court effectively ordered a retrial of the case in March 2005. After a lot of to-ing and fro-ing between the courts (including the US Supreme Court in 2007) and the patent office, the companies eventually settled out of court in August 2007. Confidentiality agreements kept the details under wraps and the long-term fallout from the case remains difficult to determine with any degree of confidence.

8.6 Update on Microsoft

Following on from the anti-trust settlement in the US (though by October 2007 several states were asking for the term of the agreement to be extended), the EU completed its investigation of the company's alleged anti-competitive practices and decided to impose a €497.2 million fine, as well as some further remedies relating to interoperability with competing software. Microsoft appealed and lost in the European Court of First Instance in 2007. Subsequently the EU Competition Commissioner announced that the Commission had come to an agreement with Microsoft, leaving open the possibility for the company to charge a flat fee of €10,000 for the interoperability information, which means it will not be accessible to most open-source developers. The Commission has also left open the possibility of Microsoft offering patent licences. From the FAQ on the deal:

Can open source software developers implement patented interoperability information?

Open source software developers use various ‘open source’ licences to distribute their software. Some of these licences are incompatible with the patent licence offered by Microsoft. It is up to the commercial open source distributors to ensure that their software products do not infringe upon Microsoft's patents. If they consider that one or more of Microsoft's patents would apply to their software product, they can either design around these patents, challenge their validity or take a patent licence from Microsoft.

That could prove to be a significant get-out clause for the company. Even when Microsoft are apparently having to comply with difficult rulings, they are always looking for ways to outwit the authorities, and often succeed. In 2008 the EU antitrust authorities launched a new investigation into Microsoft's potentially anti-competitive practice of bundling its web browser, Internet Explorer, with its Windows Vista operating system.

IBM and Novell were two other big commercial enterprises facing difficult court action in this area, initiated by SCO, which claimed that they were infringing SCO's intellectual property rights through their use of the open-source Linux operating system. SCO also extended its litigation and legal threats to users of Linux, including DaimlerChrysler and two institutions funded by the US Department of Energy, the Lawrence Livermore National Laboratory and the National Energy Research Scientific Computing Center. Groklaw (linked in the Further reading section) is the best site on the Web dealing with the SCO litigation. SCO effectively lost the whole case in August 2007 when the judge ruled that Novell owned the copyrights at the heart of the dispute.

8.7 Update on Apple

On the commercial front, Apple's iTunes service, through which tens of millions of songs have been sold, is now seeing increasing competition from other music downloading services. In July 2004 Apple were ‘stunned that RealNetworks has adopted the tactics and ethics of a hacker to break into the iPod’. Real had developed a program to let consumers buy songs from its Real Music Store and play them on their Apple iPods. By December 2004 Apple had upgraded their iPod software so that it would no longer play songs purchased from RealNetworks. There were similar shenanigans relating to the Apple iPhone in 2007. By the end of 2008, however, Apple had declared their intention to drop DRM (‘digital rights management’) technologies.

8.8 Update on commons

On the notion of the commons, the BBC is partly opening its archive to the public, the Open University has launched the OpenLearn project, and Lessig's and others' Creative Commons initiative has seen the active participation of millions of creators.

‘Greg Dyke, when he was director general of the BBC, announced plans to give the public full access to all the corporation's programme archives,’ according to the Guardian. ‘The service, the BBC Creative Archive, would be free and available to everyone, as long as they were not intending to use the material for commercial purposes, Mr Dyke added.’ Following Mr Dyke's demise in the wake of the Hutton Report, there was some concern that the initiative would be shelved. The opening of the archive has gone ahead, though possibly to a more limited degree than open content advocates would have liked. Just under a hundred extracts from programmes were released. Lawrence Lessig was one of the people advising the BBC on the matter.

The Creative Commons folk have launched a new offshoot called Science Commons. In the summer of 2007 ccLearn, ‘dedicated to realising the full potential of the internet to support open learning and open educational resources’, became their latest big project. It will be interesting to see whether these can become as successful as the parent organisation.

8.9 Summary

What does all this mean in the context of Lessig's concerns? The jury is still out. Arguably the worst excesses feared by both Lessig and his opponents have not come to pass, though by 2007 the music industry were reporting that CD sales had dropped dramatically. With the advent of new P2P services like BitTorrent, which allow much faster sharing of large files, the movie industry is now facing similar challenges to its colleagues in the music industry. We don't know what the future holds, other than to expect that the shape and form of these industries will be different, with an increasing proportion of their sales taking place via the networks. Tim O'Reilly, the owner of a specialist publishing business, has some interesting advice for his fellow publishers and the other content industries in relation to their concerns about the impact of the internet on commerce.

9 Unit summary

The course on which this unit is based was the Open University's first release of archived course material into the commons in April 2005. It was then withdrawn with the intention of releasing it again once the OpenLearn project was established. I can only apologise for the delay in getting it out again. Thanks for your patience, especially those of you who have emailed me regularly from various corners of the world wondering when the course would be up again. Now it is over to you in the spirit of OpenLearn and ccLearn to use, share, re-use, adapt and share again.

We will not be updating the unit regularly as we did when it formed an active part of our undergraduate curriculum, but John and I regularly write about updates to cases, for example, at our respective blogs, http://memex.naughtons.org/ and http://b2fxxx.blogspot.com/. Nevertheless, should it prove useful and fruitful to do so, we may review the decision not to update the unit here, and there may be some mileage in exploring a Wiki format for the course material.

"The thing that troubles us about the industrial economy is that it tends to destroy what it does not comprehend, and that it is dependent upon much that it does not understand."

(Wendell Berry)

Now you have completed this unit, you should be able to:

  • understand the value of ‘commons’ or ‘free’ resources in facilitating innovation and creativity;

  • understand the four main influences or constraints on behaviour – law, social norms, architecture and market forces;

  • understand the particular importance of law and architecture to the future of the internet – developments in internet architecture backed up by powerful laws have serious implications for the future of ideas;

  • understand the three-layers model of the internet;

  • appreciate that the internet facilitated a lot of innovation;

  • appreciate some of the different factors which make the internet a catalyst for innovation, in particular:

    • the end-to-end architecture;
    • the regulation of the owners of the networks on which the Net runs (the phone companies);
    • the open code and protocols on which the Net is built;
    • the capacity of individuals to innovate/create;
  • appreciate the influence that the internet has had on commerce;

  • explain some legal concepts, such as ‘copyright’, in basic terms;

  • critically evaluate a range of perspectives on how the internet should be allowed to develop or be constrained.

This unit also set out to develop the practical, critical and analytical skills needed to participate confidently in debates about changes in law and technology and their wider implications for society. You should be able to:

  • critically analyse the material you read;

  • communicate about the social impact of, and policy making in relation to, the Net;

  • find, analyse and evaluate information on the World Wide Web;

  • apply theoretical concepts in the course to real examples.

10 Further reading

10.1 Web pages, blogs, law and news sources

  • Aaron Swartz

  • A copyfighter's musings

  • Alex Salkever's Security Net

  • American Prospect

  • Ananova

  • Atlantic Monthly

  • Bag and baggage

  • Balkinization

  • BBC

  • Berkeley IP Blog

  • Berkman Center for internet and Society

  • beSpacific

  • Bitlaw

  • Blogs at Harvard

  • BNA net news

  • Boingboing

  • Center For Democracy and Technology

  • Chilling Effects Clearinghouse

  • Chronicle of Higher Education

  • CNet News

  • CNN

  • Consensus at Lawyerpoint

  • Copyfight

  • Copyright Colloquium

  • Copyright Readings blog

  • Cornell's Legal Information Institute

  • Creative Commons

  • Cryptogram

  • Current bytes in brief

  • CyberRights and Cyber Liberties UK

  • Dan Gillmor

  • disLEXia

  • Doc Searls

  • Don't link to us!

  • Drew Clark

  • Economist

  • Electronic Frontier Foundation

  • Electronic Privacy Information Center

  • Electronic Telegraph

  • Ernie the Attorney

  • EUR-Lex index

  • Euractiv news

  • Europa

  • European Commission Pressroom

  • Europemedia

  • First Monday

  • Financial Times

  • FindLaw

  • Foundation for Information Policy Research

  • Freedom to Tinker

  • Froomkin

  • Furdlog – Frank Field

  • GigaLaw

  • GrepLaw at Harvard

  • Groklaw

  • Harvard JOLT

  • ICANN Watch

  • Infolaw gateway to UK legal web

  • InstaPundit

  • International Herald Tribune

  • internet Legal Resource Guide

  • internet ScamBusters

  • IP Matters

  • ITN

  • James Boyle

  • Jennifer Granick

  • Jessica Litman's new developments in cyberlaw

  • John Naughton's Footnotes

  • Journal of Information Law and Technology (JILT)

  • Jurist

  • Justice Talking

  • Kuro5hin

  • Law Society Gazette

  • Law.com

  • LawMeme at Yale

  • Lessig weblog

  • Linux Journal

  • Mercury News

  • Memex

  • Mindjack

  • MIT Technology Review

  • MSNBC

  • Newsforge

  • Nolo Law Center

  • New York Times

  • NTK

  • On Lisa Rein's Radar

  • OneWorld

  • Online Journalism Review

  • O'Reilly

  • Politech

  • Privacy Journal

  • Privacy Law and Policy Reporter

  • Public Knowledge

  • QuickLinks

  • Ray Corrigan

  • Reason

  • Red Herring

  • Regulation of Investigatory Powers archive

  • Roger Clarke

  • Ross Anderson

  • Salon

  • Samuelson's cyberlinks

  • Sarah Carter's lawlinks

  • ScadPlus – Activities of the EU

  • SCOTUS blog

  • Scripting News

  • Shifted Librarian

  • Shirky

  • Siva Vaidhyanathan

  • Security Focus

  • Seth Finkelstein

  • Silicon Valley.com

  • Slashdot

  • Slate

  • Snopes Urban Legends

  • Stanford Technology Law Review

  • Tapped – American Prospect Weblog

  • Tech Law Journal

  • The CATO Institute

  • The Green Bag

  • The Guardian

  • The Industry Standard

  • The New Republic

  • The Register

  • The RISKS Digest

  • The Times

  • The Trademark Blog

  • The Village Voice

  • The Volokh Conspiracy

  • Townhall.com

  • UCLA Cyberspace Law and Policy Center

  • Urban Legends

  • VUNet

  • Walt Mossberg in the Wall Street Journal

  • ZDNet

10.2 Books

  • Digital Decision Making: Back to the Future by Ray Corrigan, published by Springer-Verlag, 2007

  • A Brief History of the Future: the Origins of the internet by John Naughton, published by Weidenfeld & Nicolson, 1999

  • Code and Other Laws of Cyberspace by Lawrence Lessig, published by Basic Books, 1999

  • Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity by Lawrence Lessig, published by Penguin Books, 2004

  • Shamans, Software and Spleens: Law and the Construction of the Information Society by James Boyle, published by Harvard University Press, 1996

  • Copyrights and Copywrongs: The Rise of Intellectual Property and How it Threatens Creativity (Fast Track Books) by Siva Vaidhyanathan, published by New York University Press, 2001

  • Cyber Rights: Defending Free Speech in the Digital Age, by Mike Godwin, published by Random House/Times Books, 1998

  • Digital Copyright by Jessica Litman, published by Prometheus Books, 2001

  • The Illustrated History of Copyright by Edward Samuels, published by Thomas Dunne Books, 2000

  • The Control Revolution: How the internet is Putting Individuals in Charge and Changing the World We Know by Andrew Shapiro, published by PublicAffairs/Perseus Books Group, 1999

  • Copyfights: the Future of Intellectual Property in the Information Age edited by Adam Thierer and Clyde Wayne Crews Jr, published by the CATO Institute, 2002

  • The internet, Law and Society edited by Yaman Akdeniz, Clive Walker and David Wall, published by Pearson Education Ltd, 2000

  • Law and the internet edited by Lilian Edwards and Charlotte Waelde, published by Hart Publishing, 2000

  • Internet Law: Text and Materials by Christopher Reed, published by Butterworths, 2000

  • Information Technology Law – 3rd edition by Ian J. Lloyd, published by Butterworths, 2000

  • The Digital Person: Technology and Privacy in the Digital Age by Daniel J. Solove published by New York University Press, 2004

  • The Future of Reputation: Gossip, Rumor, and Privacy on the internet by Daniel J. Solove, published by Yale University Press, 2007

  • No Place to Hide by Robert O'Harrow, Jr., published by the Free Press, 2005

  • Privacy and Human Rights 2003: An International Survey of Privacy Laws and Developments, published by The Electronic Privacy Information Center and Privacy International, 2003

  • The Unwanted Gaze: The Destruction of Privacy in America by Jeffrey Rosen, published by Random House, 2000

  • Database Nation: The Death of Privacy in the 21st Century by Simson Garfinkel, published by O'Reilly and Associates, 2000.

  • Secrets and Lies: Digital Security in a Networked World by Bruce Schneier, published by John Wiley and Sons, 2000

  • Beyond Fear: Thinking Sensibly About Security in an Uncertain World by Bruce Schneier, published by Copernicus Books, 2003

  • Information Feudalism: Who Owns the Knowledge Economy? by Peter Drahos with John Braithwaite, published by Earthscan, 2002.

  • The Anarchist in the Library: How the Clash between Freedom and Control is Hacking the Real World and Crashing the System by Siva Vaidhyanathan, published by Basic Books, 2004.

  • Digital Code of Life: How Bioinformatics is Revolutionizing Science, Medicine and Business by Glynn Moody, published by John Wiley and Sons, 2004

  • The Biotech Century: How Genetic Information will Change the World by Jeremy Rifkin, published by Phoenix (Orion Books), 1998

  • Hit Men by Fredric Dannen, published by Vintage Books (Random House), 1991

  • Sonic Boom: Napster, P2P and the Battle for the Future of Music by John Alderman, published by Fourth Estate, 2001

  • We the Media: Grassroots Journalism by the People for the People by Dan Gillmor, published by O'Reilly Media, 2004

  • Empire of the Air: The men who made radio by Tom Lewis, published by HarperCollins, 1991

  • Promises to Keep: Technology, Law and the Future of Entertainment by William W. Fisher III, published by Stanford University Press, 2004

  • Innovation and its Discontents by Adam B. Jaffe and Josh Lerner, published by Princeton University Press, 2004

  • Brand Name Bullies: the quest to own and control culture by David Bollier, published by John Wiley & Sons, 2005

  • Information Rules by Carl Shapiro and Hal R. Varian, published by Harvard Business School Press, 1999

  • Copyright in Historical Perspective by Lyman Ray Patterson, published by Vanderbilt University Press, 1968

  • Steal This Idea: Intellectual Property Rights and the Corporate Confiscation of Creativity by Michael Perelman, published by Palgrave, 2002

  • Remix - Making Art and Commerce Thrive in the Hybrid Economy by Lawrence Lessig, Bloomsbury Academic, 2008

  • The Public Domain: Enclosing the Commons of the Mind by James Boyle, Yale University Press, 2008

  • Content: Selected Essays on Technology, Creativity, Copyright and the Future of the Future by Cory Doctorow, Tachyon Publications, 2008

  • Everything Is Miscellaneous: The Power of the New Digital Disorder by David Weinberger, published by Henry Holt, 2008.

  • Born Digital: Understanding the First Generation of Digital Natives by John Palfrey and Urs Gasser, Basic Books, 2008

  • The Future of the Internet and How to Stop it by Jonathan Zittrain, Allen Lane, 2008

Acknowledgements

The content acknowledged below is Proprietary (see terms and conditions) and is used under licence.