
IT: Information

Printable page generated Thursday, 28 March 2024, 9:39 AM
Unless otherwise stated, copyright © 2024 The Open University, all rights reserved.


Introduction

This course looks at the technologies used to acquire information about the world. A particular focus is the technology used by television businesses in gathering news reports. The course draws upon the expertise of two individuals who have worked in senior positions in the UK television industry.

This OpenLearn course provides a sample of level 1 study in Computing & IT

Learning outcomes

After studying this course, you should be able to:

  • demonstrate a knowledge of the different types of storage media for digital data

  • understand the basic concepts of electrical voltage and resistance, and the parameters used to specify batteries

  • give an overview of the historical development of IT in video recording, newsgathering and news dissemination

  • compare the merits of different media as sources of news

  • discuss issues of trust and authenticity in information sources.

1 Bringing the news on the back of a horse

We seem to be surrounded by 'news' these days, but it was not always like that. In Shakespeare's Henry IV Part 2, Falstaff hears the news that his former friend and drinking partner, Prince Hal, is now King Henry V, following the death of Henry IV. It is a comic scene set in Gloucestershire, 200 km from the royal court in London, and it is clear that before the messenger (called Pistol) arrived on horseback Falstaff did not even know that Henry IV had died.

It would not be like that now. Maybe Falstaff would have got a text message on his mobile phone: Hnry 4 ded. Hal 2 b Kng Hnry 5.

And perhaps he would have then dashed home to watch BBC News 24, listen to BBC Radio Five Live or log on to the official website of the British Monarchy.

I do not want to dwell for long on the times before the Industrial Revolution, but before we move on I would like to draw out some of the themes, issues and concepts that help to provide a framework for discussing newsgathering and dissemination of news.

Activity 1

Think for a moment about the following aspects of news dissemination during the time of Henry IV and V (15th century), or any time before the Industrial Revolution.

  1. What determined how fast the news could get from one place to another?

  2. What determined how much information you could get about an event?

  3. What determined how many people could find out about an event?

  4. How far could news travel?

Discussion

For the most part, news would have travelled with people, so the answer to all of the questions about the spread of news is linked with individuals travelling. There were a few other methods that didn't require people to travel, such as beacons, semaphore and carrier pigeons, but these had rather specialised applications.

  1. The fastest means of transport on land would have been a galloping horse, so we can think of news travelling at up to a few tens of kilometres an hour.

  2. If a messenger is bringing the news, then perhaps the amount of information they can carry is determined by how good their memory is. They might also, or instead, have something in writing (exploiting technology) and the text could supplement their memory. Either way, a messenger can bring quite a lot of information.

  3. If the spread of news is relying on word of mouth, then we can imagine news spreading in the way of the 'office grapevine' today, where one person tells two or three others who each tell another two or three people and so on. The total number of people who know the news rises rapidly in this way. Alternatively, if the news is written down, the written text can be passed around and read by more than one person (assuming widespread literacy). Even better, once printing has been invented, large numbers of copies can be produced and many people can read it at the same time.

  4. In principle there is no limit to how far news can travel – it just might take a long time, since the speed at which it travels is limited to that of a galloping horse. In practice, only the most important news items are likely to get very far.
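The 'office grapevine' spread described in point 3 can be put in rough numbers. The sketch below assumes each informed person passes the news on to three new people in every round; the figure three is purely illustrative, not historical data:

```python
# Word-of-mouth spread: each already-informed person tells a fixed number
# of new people in every round (the figure 3 is illustrative, not data).
def people_informed(rounds, told_per_round=3):
    informed = 1          # the original messenger
    newly_informed = 1    # people who heard the news in the latest round
    for _ in range(rounds):
        newly_informed *= told_per_round
        informed += newly_informed
    return informed

for rounds in (1, 3, 7):
    print(rounds, people_informed(rounds))  # grows rapidly: 4, 40, 3280
```

Even with these modest assumptions, the number of people who know the news grows geometrically, which is why word of mouth could spread important news surprisingly widely despite the slow speed of travel.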

2 From newsreels to real news

2.1 Communication technologies

With the Industrial Revolution the idea of 'news' developed rapidly, and these days most people in the UK and other developed countries have a concept of 'the news'. We expect to be kept up to date with the news through various sources, and to satisfy this expectation we have the businesses of newsgathering and dissemination of news.

In this section you will be learning about the development of the technologies used for newsgathering and dissemination by reading extracts from a paper written by one of the leading experts in the field.

In 1995 the IEE (Institution of Electrical Engineers – a UK-based association) held a colloquium entitled Capturing the Action: Changes in Newsgathering, which brought together technical experts working in the business of newsgathering in order to review developments. (A colloquium is a meeting at which specialists give talks on a topic or on related topics and then lead a discussion about them.) The introductory talk at the colloquium was presented by E.V. Taylor, who was at that time Head of Technology at ITN. His talk, 'From newsreels to real news', reviewed developments in news technology from the Industrial Revolution to 1995. As is usual with a colloquium, a 'Colloquium Digest' was produced which contained technical papers associated with each of the talks. I shall be using Taylor's paper to look at the role of information technology (IT) in the news business.

There are two factors which you should bear in mind as you read the paper.

  1. The audience was made up of people working in the field or with specific interest in newsgathering. Taylor was able to assume, therefore, that his audience was already familiar with many of the concepts and the specialist language ('jargon') that he used.

  2. Although the colloquium digest is available as an IEE publication, its primary distribution was to delegates who attended the colloquium (it would have been given to them when they arrived on the day). It would be used as a reminder of the talk and to fill in some of the factual details that the audience might have missed. It was not produced as a stand-alone document in the form you would find in a journal. There are no section headings, for example, and it is written using language similar to the language Taylor would have used when speaking.

Despite these shortcomings as a stand-alone printed text, Taylor's paper provides an excellent overview of the role of technology in the broadcast news industry, written by an expert who was living through the changes that were taking place. In the text which follows, I have broken down the original paper into a few paragraphs at a time and removed some text not needed for my purposes. I have also added a commentary to explain some of the terms and to discuss some other important topics in IT.

2.2 The role of technology in the broadcast news industry

Taylor's introductory comments

Taylor starts with some introductory comments. Notice the informal style he uses because this is essentially a script for a talk to a colloquium. Notice also the other issue that I raised earlier, that Taylor is assuming that his listeners are familiar with terms such as ITN, ENG and video servers. I shall explain terms like these as we go through the paper. I have highlighted in bold terms which I explain or discuss further in following notes.

From Newsreels To Real News

E.V. Taylor, 1995, Institution of Electrical Engineers

Prompted partly by the fact that ITN has just celebrated its 40th anniversary I would like to start by briefly reviewing some of the historical landmarks in news broadcasting that were driven by past technological development.

With this review as a reference I hope I can convince you of the enormous importance of the new technological developments you will hear about today. Make no mistake; for the TV news business these developments are going to create a revolution even greater than that caused by ENG in the late 70s/early 80s – and those of us who lived through that particular revolution still bear some of the scars. So when, very shortly, we have to face the realities of video servers, digital compression and digital tapeless integrated newsrooms – make sure your seat belt is pulled nice and tight! We will all need to keep our nerve as I suspect it will be a bumpy ride.

(Taylor, 1995)

ITN stands for Independent Television News. To quote from the ITN website (ITN, 2005), 'ITN is one of the largest news organisations in the world, producing news and factual programmes for television, radio and new media platforms, both in Britain and overseas. ITN was founded in 1955, as an independent organisation owned by ITV companies producing news programmes for national broadcast on ITV.'

ENG is an abbreviation of Electronic News Gathering. It is the process of recording sound and images electronically, originally as analogue signals on magnetic tapes (video and audio tapes), and conveying them back to the newsrooms in an electronic format; this could be done by physically transporting the tapes or sending the electronic signals over a communications network. ENG is here contrasted with the previous use of film.

Video servers, digital compression and digital tapeless integrated newsrooms. Video servers are computers with large storage capacity (large hard disks or sets of hard disks) used to store and retrieve compressed digital video files. News editors in the 'digital tapeless newsroom' will be working on computers that interface with the server.

2.3 Newsgathering and newspapers

Newspapers

Taylor now discusses some early information and communication technologies and the extent to which they had an impact upon newspapers.

So let me start by looking at what was often optimistically called 'news' in the early years of the 20th Century.

Before the establishment of regular radio services in the early 1920s the public were entirely reliant upon the newspaper industry for information about what was going on locally, nationally and internationally.

Despite the development of the telegraph by William Cooke in 1837, and later the telephone by Alexander Graham Bell in 1876, it was really not until the 20th Century that lines infrastructures were developed sufficiently for newspapers to be in a position to report the remoter national events in the same week that they occurred. Reporting of events abroad was often many weeks or even months behind the occurrence. Photography had been invented in the 1830s but even by the 1900s newspaper photographs were a rarity and stories were frequently illustrated by sketches, diagrams and cartoons (interesting to note that in the UK at least artists' sketches are still the way we illustrate what is happening within a courtroom where cameras are prohibited). Newspaper photographs did exist of course but had to be hand carried back to the newspaper offices by train, ship or road – a time consuming business in those days.

This situation improved dramatically for the newspaper industry by the development of the wire picture by Reuters in the early part of this century. It is worth noting that the Reuters system was a very early example of digital coding and even incorporated data compression with a form of what we now call 'run length coding' – not much is really new, is it? This development enabled pictures to accompany the telegraphed or telephoned reports from many major cities in Europe and the US.

As a consequence, the newspapers prospered and fortunes were made by the now infamous press barons whose influence on both the public and governments was considerable.

(Taylor, 1995)

The lines infrastructure refers to the network of wires connecting different places together ('line' as in 'telephone line' or 'transmission line').

The idea of a wire picture is that an image is coded in a method that allows it to be transmitted over 'a wire' – i.e. sent along a telegraph link.

Taylor highlights the development of the lines infrastructure and invention of the wire picture as being developments that enabled telephony and telegraphy to be exploited by the news industry.

These are two themes you will find coming up all the time in discussions of IT systems: networking issues – specifically 'network reach' – and coding.
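Taylor's aside about run-length coding can be illustrated with a minimal sketch. The idea is to replace long runs of identical values – such as the black and white stretches of a scanned picture line – with (value, count) pairs. The actual Reuters scheme is not documented here, so the encoding below is only an assumed, simplified form:

```python
def run_length_encode(pixels):
    """Compress a sequence into (value, run-length) pairs."""
    if not pixels:
        return []
    encoded = []
    current, count = pixels[0], 1
    for p in pixels[1:]:
        if p == current:
            count += 1
        else:
            encoded.append((current, count))
            current, count = p, 1
    encoded.append((current, count))
    return encoded

# A scanned line of mostly white ('W') with a short black ('B') run
line = ['W'] * 10 + ['B'] * 3 + ['W'] * 7
print(run_length_encode(line))  # → [('W', 10), ('B', 3), ('W', 7)]
```

Twenty symbols collapse to three pairs; the longer the runs, the greater the compression, which is why the technique suited early line-by-line picture transmission.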

2.4 Comparing early sources of news

Radio and newsreels

Taylor compares the merits of radio and newsreels, as sources of news, with those of newspapers.

The value of radio as a communications medium had proved itself during the 1914–18 conflict following its development around the turn of the century by Marconi, Hertz, Popov and others.

The passing of the First World War soon saw the establishment of many national radio broadcasting organisations, the BBC being formed in 1922.

It was not long before regular news bulletins were being broadcast and despite the development of the technique of going over live to quote 'our reporter on the spot', which considerably enhanced the impact of the report, radio was of course not able to illustrate the news.

Despite some imaginative painting of pictures with words, the newspaper industry did not regard radio as a threat but more as a useful advertising medium to alert the public to the fact that something interesting or dramatic had happened causing them to dash out to buy a newspaper to get the details and all important pictures to fill in the gaps in the radio report.

During the 1930s the ability to add live action sound onto film caused the cinema industry to explode onto the mass entertainment scene. The visual power of cinema as a news medium was quickly recognised and organisations such as British Movietone News and Pathe News soon established themselves with 'newsreels' which were a compilation of the week's best visual stories shot and made on high quality 35 mm film.

Despite the likelihood of the cinema goers already being aware of the newsreel stories, it was the combination of well-shot pictures with well-written, punchy commentaries which made these newsreels very popular, especially during the Second World War where the visual impact was sometimes quite shocking with audiences unused to the realities of life frequently reduced to tears.

So it had taken around 100 years to develop a means and organisation from the original enabling telegraphic and photographic technologies to deliver moving news images to the public albeit somewhat later than the actual event.

In the meantime, the newspapers continued to be the primary source of news for the public.

(Taylor, 1995)

I was struck here by Taylor's comment that the public would 'dash out to buy a newspaper to get the details and all-important pictures to fill in the gaps in the radio report', because I will still buy a newspaper to get more details about a topic – even if I've seen pictures on TV. This raises the question of how much information you can get from different media (I believe I get more from the newspaper than from a TV report). There are also perhaps differences in the nature of the information that you can get from different media, with more comment and analysis in newspapers.

'[T]he ability to add live action sound onto film' mentioned by Taylor is significant because there was no sound on early film (as in 'silent movies'), and mechanisms for recording sound onto film alongside the images came later (in the 1930s). The methods used were analogue.

The reference to 35 mm is to the width of the film. The wider the film, the bigger the picture and the higher the quality of the projected image. However, as Taylor discusses later in the paper, the equipment needed for filming in a narrower width (16 mm was used) was lighter and more portable.

Activity 2

From your reading of this material and your understanding of the media, list the merits and limitations of each of: newspaper, radio and newsreels, as a source of news during the 1920s and 1930s. How does today's television news compare?

Discussion

Newspaper. Merits: (still) pictures, details of news stories, and you can choose when and at what pace to read. Limitations: delay (not live), no moving images, no sound.

Radio. Merits: live reports, sound. Limitations: no images.

Newsreels. Merits: moving images, sound. Limitations: delay (not live).

Today's television has the merits of radio and newsreels – live reports with sound and moving images – but often not as much depth and analysis as you can get in newspapers, nor does it have the time flexibility of newspapers.

2.5 News and television

Early television

In the next part of the paper Taylor discusses the early days of television.

This status quo of print/radio/cinema newsreels existed until the early 1950s when the BBC restarted TV services […].

It was natural that news bulletins would form part of this reborn visual communications medium. But resources were in short supply and early BBC TV news programmes were little more than radio news read into camera by a newsreader with a posh voice wearing a dinner jacket. What picture they did have was like the cinema newsreels shot on 35 mm film.

There was nothing there to threaten the newspaper proprietors.

But in 1955 came Commercial Television and the creation of Independent Television News.

From the start, ITN wanted to produce a newscast which would be different from the BBC's by showing much more moving picture.

Here technology came to the aid of ITN. Recent improvements had been made to the quality of film stock and ITN took the then controversial decision to adopt 16 mm film.

Compared with 35 mm equipment, the 16 mm camera was very light, more manageable and much more affordable to purchase and operate. It was therefore more suitable for volume newsgathering.

ITN judged, quite correctly as it turned out, that illustrating more stories with moving pictures would appeal to the public – even if the picture quality was somewhat worse than they were used to in the cinema or on the BBC.

The BBC soon followed with its own 16 mm equipped film crews.

[…]

By the end of the 1960s, with the introduction of colour, bulletins were taking on a style not too far removed from present day news bulletins – remember the first broadcast of News At Ten, the first half-hour news on British TV, which was in 1967.

Something else happened in the latter half of the 1960s which was to have a major impact on the immediacy of TV news – the development of the communications satellite.

Telstar in 1962 had shown the way ahead and was rapidly followed by the geostationary Early Bird satellites over the Atlantic Ocean.

TV news companies were now able to include live reports from the USA in their news bulletins.

By the early 1970s the satellite networks had become global and TV news companies were regularly including illustrated stories from around the world into their evening news programmes despite the very high price tag of $2,000 for a 10 minute slot.

The newspaper industry was now beginning to worry – television news was able to include stories in late evening bulletins which the dailies did not have for their next day editions.

(Taylor, 1995)

Restarted TV … reborn. The BBC started a TV service in 1936, but it was suspended at the outbreak of the Second World War in 1939. It was restarted after the end of the war.

The then controversial decision to adopt 16 mm film. Taylor here explains how the lower quality of 16 mm film compared with 35 mm film was more than offset by other advantages of using the smaller format.

There are parallels here in other IT contexts, where the balance between quality and 'being able to do it at all' leans towards the 'being able to do it'. For example, people seem to be willing to accept inferior sound quality from mobile telephones compared with fixed-line phones.

To put it another way, 'good enough' wins over 'the best', and in general, digital techniques allow you to make trade-offs so that the quality is pitched at 'good enough' for any given service. In digital audio broadcasting (DAB, digital radio) a trade-off is possible between the number of channels available and the sound quality. Currently, DAB uses a much lower quality than is technically possible in order to allow for a large number of channels.
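This channels-versus-quality trade-off can be sketched arithmetically. The multiplex capacity and per-channel bit rates below are hypothetical round numbers chosen for illustration, not actual DAB parameters:

```python
# Trade-off between audio quality (bit rate per channel) and channel count
# for a fixed total capacity. Figures are illustrative, not real DAB values.
multiplex_capacity_kbps = 1152   # hypothetical total payload, kbit/s

for channel_rate in (64, 128, 192, 256):   # kbit/s per audio channel
    channels = multiplex_capacity_kbps // channel_rate
    print(f"{channel_rate:3d} kbit/s per channel -> {channels} channels")
```

Halving the per-channel bit rate doubles the number of channels that fit, which is exactly the lever a broadcaster pulls when pitching quality at 'good enough'.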

Communications satellites. These orbit the Earth and allow communications by microwave links between terrestrial locations that are a long way apart (Figure 1). They can be used for communications between fixed locations on the Earth or else to provide wide coverage for mobile users. The original use was only for fixed locations, because the ground stations (the transmitting and receiving aerials and associated equipment on the Earth) needed for the users on the Earth were too large to be mobile.

Communications satellite
Figure 1 Communications satellite

Satellites have to be orbiting in order to stay at a fixed height above the Earth, and the speed at which they orbit is related to their height. The higher they are, the longer they take to go once around the Earth. At one particular height – about 36,000 km above the ground – the orbit time is 24 hours. If a satellite's orbit is a ring directly above the equator at this height, the satellite will remain over the same spot on Earth. Such an orbit is known as a geostationary orbit, and a satellite in a geostationary orbit is called a geostationary satellite.

Telstar I was the first communications satellite that allowed television signals to be sent across the Atlantic, in 1962. Telstar I was not in geostationary orbit and as a consequence was only in the right position to allow transatlantic communications for 30 minutes at a time, three or four times a day.
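The 'one particular height' of about 36,000 km can be checked from the relation the text describes between orbit height and orbit time, using Kepler's third law and standard physical constants:

```python
import math

# Geostationary altitude from the orbital-period relation: the higher the
# orbit, the longer the period; at one height the period matches one
# rotation of the Earth (one sidereal day).
GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
EARTH_RADIUS = 6.371e6   # mean Earth radius, m
T = 86164.1              # sidereal day, s

# Kepler's third law: T^2 = 4*pi^2 * r^3 / GM, solved for orbit radius r
orbit_radius = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (orbit_radius - EARTH_RADIUS) / 1000
print(round(altitude_km), "km")  # roughly 36,000 km, as the text states
```

Any lower and the satellite laps the Earth (as Telstar I did); any higher and it falls behind, so only at this altitude does it appear fixed in the sky.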

Activity 3

What are the advantages of using 16 mm film that offset its poorer quality compared with 35 mm?

Discussion

From what Taylor says, it seems that the equipment needed for the 16 mm film was cheaper, lighter and more manageable than that needed for the 35 mm. Using 16 mm film therefore allowed ITN to have moving pictures with more of their stories.

2.6 New media

From film to videotape

Taylor now describes the era when film was replaced with analogue electrical video.

However, despite all these electronic advances, our newsgathering was still all film based – but the next development was not too far away.

In 1976 RCA demonstrated the […] 'Hawkeye' combined camera and recorder as an all electronic concept to replace newsfilm.

This development sparked the imagination of news broadcasters who quickly recognised the benefits of getting away from film with all its processing delays and bulky, expensive telecine equipment and of course its inability to provide live coverage.

By 1979 the ENG revolution was gathering momentum and it soon became unstoppable.

In 1980 ITN became the first UK broadcaster to introduce large scale ENG operations. By 1982 film as a newsgathering medium was dead.

TV news was now moving into the position of being the public's primary source of news, with newspapers accepting that they had lost the battle.

It had taken TV news just 30 years from its inception to reach this dominant position. The news reels of the 1930s, 1940s and 1950s were long gone and the demand for real up to the minute news was growing even stronger.

Well, there were still large areas of the world which did not have wideband cable infrastructure or possess large expensive satellite ground stations and these became the next technological battleground.

These communication dead spots provided the challenge to fire the development of transportable ground stations.

In 1985 ITN formed an alliance with the IBA and McMichael Electronics to develop the world's first SNG uplinks – the Newshawk – which we first used in 1986.

[…]

(Taylor, 1995)

RCA (which originally stood for Radio Corporation of America) is the company that manufactured the equipment it called 'Hawkeye', which was one of the first examples of what we now call a 'camcorder' – combined video camera and recorder.

A telecine is a device that converts a film to an electrical video format. When film was used for TV newsgathering, a telecine was then needed to convert from the film to an electrical format for TV broadcast.

A wideband cable is a cable capable of conveying a wide bandwidth signal. For digital signals, that would mean a high data rate – lots of bits per second. However, here Taylor is talking about an analogue signal and he means a cable capable of carrying a wide range of frequencies. In both cases – analogue and digital – a wideband cable is the sort of cable needed to carry video signals and high-quality audio. It is in contrast to narrowband, which would be capable only of carrying less demanding signals, such as telephone-quality audio.
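The gap between wideband and narrowband can be put in approximate figures. The sketch below assumes uncompressed standard-definition video (720 × 576 pixels, 25 frames per second, 16 bits per pixel) against telephone-quality audio (8 kHz sampling at 8 bits per sample, as in G.711 telephony); these are rough illustrative parameters rather than a precise broadcast specification:

```python
# Rough data-rate comparison: uncompressed standard-definition video
# versus telephone-quality audio (illustrative assumed figures).
video_bps = 720 * 576 * 25 * 16   # pixels/frame * frames/s * bits/pixel
phone_bps = 8000 * 8              # samples/s * bits/sample (G.711-style)

print(f"video ~ {video_bps / 1e6:.0f} Mbit/s, "
      f"telephone audio ~ {phone_bps / 1e3:.0f} kbit/s")
```

Raw video comes out thousands of times more demanding than telephone audio, which is why video needs a wideband channel (and, in the digital era, heavy compression).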

As noted earlier, at first satellite communications could be used only for communication between fixed locations on the Earth, but with further advances in technology, satellite ground stations could be mobile. Initially they could be mounted on a vehicle, but now they can even be small enough to be carried. Satellites allow communication to remote regions of the Earth where the infrastructure does not exist for any other means of communication (communication dead spots, as Taylor refers to them), which allowed for satellite newsgathering (SNG). The (satellite) uplink is the communication path from the ground to the satellite. The other direction is the downlink.

The IBA is the Independent Broadcasting Authority. This was the body set up to regulate commercial television and radio. It became the ITC (Independent Television Commission) in 1990, which in turn ceased to exist in December 2003 when its function was taken over by OFCOM, the Office of Communications.

The Newshawk is a portable satellite ground station, which can be connected to a video camera, used for satellite newsgathering.

Activity 4

What does Taylor say are the disadvantages of film which were overcome by moving to analogue electronic video gathered using the Hawkeye?

Discussion

Taylor says of the development of the Hawkeye camcorder:

This development sparked the imagination of news broadcasters who quickly recognised the benefits of getting away from film with all its processing delays and bulky, expensive telecine equipment and of course its inability to provide live coverage.

Taylor (1995)

In other words, the disadvantages of film were that:

  • there were processing delays

  • the equipment needed to convert from film to electronic video for broadcasting (the telecine equipment) was expensive and bulky

  • it could not be used for live coverage.

2.7 Digitisation of the news

Into the digital era

Remember that this paper was written in 1995, at which time digital techniques were just beginning to take over in electronic newsgathering. Taylor therefore concludes his paper with comments on the nature and impact of changing to digital techniques.

By the mid-1990s therefore one could be forgiven for thinking that news providers had all the technology they needed to deliver real news from more or less anywhere in the world.

And indeed we have.

So what is it that now attracts news companies to an all digital solution for newsgathering, post production and transmission?

Well, the traditional broadcast industry view is of course well known: digital brings consistent high quality pictures and sound from reliable, stable equipment which requires minimal or in some cases zero routine alignment.

Digital techniques together with advances in VLSI and [other developments in electronics], have opened the way for video signal processing to be carried out on economically priced standard computer platforms, with appropriate software, to give broadcasters much greater choice and ever more function per pound spent – great news when wrestling with hard-pressed capital budgets.

These benefits are attractive to news companies too, but the real bonus which drives our interest is the potential for digital computer based solutions to deliver high quality news programmes free from the multigeneration limitations of analogue VTRs and the editorial inflexibility of tape based production where stories cannot easily be altered or updated and at the same time achieve substantial operational cost savings.

[…]

[L]et me conclude by highlighting what I believe to be the digital newsgathering promise:

  • More efficient newsgathering.

  • More options for getting the story back.

  • Faster post production.

  • Greater editorial freedom.

  • Broad multiskilling opportunities.

  • Easier automation.

  • Improved technical quality.

  • Lower operating costs.

I make no excuse for emphasising the cost saving elements that digital news operations should achieve.

News companies are having to compete in an ever more cost conscious broadcasting industry. Our greatest asset is our staff but regrettably it is also our most expensive cost.

Yes, all the quality and other operational benefits of this technology are highly desirable and contribute toward a news provider's competitive edge, but at the end of the day it is the potential cost savings primarily achieved through multiskilling and job elimination which are the main attraction to news providers.

It is for this reason that I stated at the start of my presentation that the impact of this particular technology revolution is going to be bigger than anything we have experienced in the past.

[…]

(Taylor, 1995)

Alignment. Analogue electronic equipment often requires adjustments in order for it to perform at its best. On magnetic tapes, for example, it might be necessary to adjust the position of the record and playback heads on the tape, and the record and playback levels might have to be adjusted up or down. Generally speaking, it is much easier to make digital systems independent of alignments. This is because the exact details of the symbol do not matter, provided you can tell whether a symbol is representing a 1 or a 0.

VLSI is very-large-scale integration. Originally, electronic circuits were built from 'discrete components' (such as resistors, capacitors, inductors and transistors – but you do not need to know what these all are). A single component would do one job, and a complicated electronic circuit would be built up from many components. Later, integrated circuits (ICs) came along, which combined many components on a single 'silicon chip'. As the technology advanced, more and more components could be fitted onto a single integrated circuit so that the single device (single chip) could do more and more advanced functions. Integrating large numbers of components onto a single chip was called 'large-scale integration', then getting even more onto a chip was 'very-large-scale integration'. I am not sure if anyone has precisely defined 'large' and 'very large' in this context.

Video signal processing. When you have an image encoded electronically (the video signal), you can manipulate it to do things like change colours or remove unnecessary detail. Manipulating a signal in this way, for whatever purpose, is known as signal processing.

Multigenerational limitations. Each time a copy is made from an analogue signal – video or audio – there is inevitably a degradation in quality, because copying can never be perfect. Thus, there is a limit to the number of 'generations' of copies that can be made. The maximum number of generations before the quality becomes unacceptable depends upon the copying process and the application, but typically we are talking in terms of three or four. With digital signals this effect can be virtually eliminated, because digital signals can be regenerated and then they are 'as good as new'.
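The multigeneration effect can be simulated in a minimal sketch: each analogue copy adds a little random error that accumulates, while a digital copy re-decides every bit and so regenerates the signal perfectly (assuming the noise stays below the decision threshold). The noise level here is an arbitrary illustrative figure:

```python
import random

random.seed(1)  # fixed seed so the simulation is repeatable

def analogue_copy(signal, noise=0.05):
    # Each analogue copy adds a little random error, which accumulates.
    return [s + random.uniform(-noise, noise) for s in signal]

def digital_copy(signal):
    # A digital regenerator re-decides each symbol, so errors do not build
    # up (provided the noise never pushes a value across the threshold).
    return [1 if s > 0.5 else 0 for s in analogue_copy(signal)]

original = [1, 0, 1, 1, 0, 0, 1, 0]
a, d = original, original
for _ in range(10):          # ten generations of copying
    a, d = analogue_copy(a), digital_copy(d)

print(d == original)         # digital survives all generations: True
print(max(abs(x - y) for x, y in zip(a, original)))  # analogue error has grown
```

After ten generations the digital chain is still bit-for-bit identical to the original, while the analogue chain has drifted measurably – the 'as good as new' regeneration the text describes.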

VTR stands for video tape recorder.

Stories cannot easily be altered. As I shall be discussing later, the fact that stories can be easily altered with digital systems is not always beneficial!

The term post production refers to the collection of processes that are done to a sequence of video or audio after the filming or recording. The processes include editing and signal processing.

Activity 5

Taylor's paper highlights a number of important dates in the development of newsgathering and news broadcasts. A good way of getting a picture of historical developments is to plot events on a time line. Figure 2a begins this for some of the events mentioned by Taylor. Go back over Taylor's paper and extract more dates to add to a copy of the time line. (I found another eleven events to add.) Note that the use of 'c', an abbreviation for 'circa' meaning 'about', indicates an approximate date, as in c.1970 for 'about 1970'.

Figure 2a Start of a time line for Activity 5
Discussion

My answer is shown in Figure 2b.

Figure 2b Time line for the answer to Activity 5

Activity 6

When considering the relationship between technology and society it is often helpful to consider influences in two directions:

  1. Technology push. Newly developed technology creates a need that wasn't there before.

  2. Demand (or market) pull. Users have a need, and technology is deliberately developed to satisfy that need.

Thinking in terms of the user as the news industry (journalists, and newspaper, radio or TV businesses in general), there is evidence of both technology push and demand pull in Taylor's description.

Identify one example of each, and support your answer with quotes from Taylor's paper.

Discussion

The example which I thought indicated technology push was the use of cinema for newsreels:

The visual power of cinema as a news medium was quickly recognised and organisations such as British Movietone News and Pathe News soon established themselves with 'newsreels' which were a compilation of the week's best visual stories shot and made on high-quality 35 mm film.

Taylor (1995)

I thought that the clearest example of demand pull was the development of the Newshawk for satellite newsgathering:

Well, there were still large areas of the world which did not have wideband cable infrastructure or possess large expensive satellite ground stations and these became the next technological battleground.

These communication dead spots provided the challenge to fire the development of transportable ground stations.

In 1985 ITN formed an alliance with the IBA and McMichael Electronics to develop the world's first SNG uplinks – the Newshawk – which we first used in 1986.

Taylor (1995)

3 Newsgathering now

3.1 Introduction to SNG and ENG microwave

Taylor's paper, From Newsreels To Real News, provided a historical overview of newsgathering up to the time the paper was written in 1995. It provides a good background but is out of date as I write this in 2005 (ten years is a very long time in the recent history of IT). Taylor wrote an updating paper, Real News Meets IT. I shall be drawing on Real News Meets IT in later sections of this course. In this section, I have reproduced an extract from a book (Higgins, 2004) which introduces the principal elements used by a TV station to get a report for broadcast.

Higgins says that his book was written to offer 'beginning professionals in satellite and electronic newsgathering an introduction to the technologies and processes involved in covering an event'. Like Taylor, Higgins worked in the news industry for many years and so has the authority of an 'insider'.

The extract here is from the beginning of the book and you should be able to follow most of the content from what you already know. As you read, try to make links with the issues raised in Taylor's paper as well as your developing understanding of IT generally. I also found it helpful to think about news reports that I have seen on television to set the paper into a context that I could recognise.

Introduction to SNG and ENG Microwave

J. Higgins, 2004, Elsevier Focal Press

Basic Overview of the Role of ENG/SNG

Television newsgathering is the process by which materials, i.e. pictures and sound, that help tell a story about a particular event are acquired and sent back to the studio. On arrival, they may be either relayed directly live to the viewer, or edited (packaged) for later transmission.

The process of newsgathering is a complex one, typically involving a cameraman and a reporter, a means of delivering the story back to the studio, and, for live coverage, voice communication from the studio back to the reporter at the scene of the story.

Coverage of a sports event involves essentially the same elements but on a much greater scale. Instead of a single reporter you would have a number of commentators, and instead of a single cameraman, you might have up to thirty or forty cameras covering a major international golf tournament.

Whether it is a news or a sports event, the pictures and sound have to be sent back. This could be done by simply recording the coverage onto tape, and then taking it back to the studio. However, because of the need for immediacy, it is far more usual to send the coverage back by using a satellite or terrestrial microwave link, or via a fibre optic connection provided by, say the telephone company.

[As I shall discuss in a later section in this course, tape is likely to be replaced by other storage media over the next few years. The story described here would be essentially the same, though, using, say, flash memory.]

[…]

Principal Elements in Covering an Event

Let us just look at the principal elements of covering a news story from where it happens on location to its transmission from the TV studio.

We will pick a type of story that is of local and possibly national interest. Just suppose the story is the shooting of a police officer during a car chase following an armed robbery.

Camera and sound

The shooting happened around 2.30 pm, and the TV station newsroom was tipped off shortly after by a phone call from a member of the public at the scene.

Having checked the truth of the story with the police press office, by 3.00 pm the newsroom at the TV station despatched a cameraman (generically applied to both male and female camera operators) and a reporter to the location.

Generally these days, the cameraman is responsible for both shooting the pictures and recording the sound. The reporter finds out all the information on the circumstances of the armed robbery, the car chase and the shooting of the police officer. The cameraman may be shooting 'GVs' – general views of the scene and its surroundings onto tape – or interviews between the reporter, police spokesmen and eyewitnesses.

The reporter then will typically record a piece-to-camera (PTC) … which is where the reporter stands at a strategic point against a background which sets the scene for the story – perhaps the location where the officer was shot, the police station, or the hospital where the officer has been taken – and recounts the events, speaking and looking directly into the camera.

So by 5.00 pm the cameraman has several tapes (termed 'rushes'), showing the scene, interviews and the reporter's PTC. Now, will this material be edited on site to present the story, or will the rushes be sent directly back to the station to be edited ready for the studio to use in the 6.00 pm bulletin?

Editing

The 'cutting together' of the pictures and sound to form a 'cut-piece' or 'package' used to be carried out mostly back at the studio.

Mobile edit vehicles were usually only deployed on the 'big stories', or where there was editorial pressure to produce a cut-piece actually in the field. In the latter part of the 1990s, with the increasing use of the compact digital tape formats, the major manufacturers introduced laptop editors.

The laptop editor has both a tape player and a recorder integrated into one unit, with two small TV screens and a control panel. These units, which are slightly larger and heavier than a laptop computer, can be used either by a picture editor or, more commonly nowadays, by the cameraman.

During the 1990s, the pressure on TV organisations to reduce costs led to the introduction of multi-skilling, where technicians, operators and journalists are trained in at least one (and often two) other crafts apart from their primary core skill.

However, the production of a news story is rarely a contiguous serial process – more commonly, several tasks need to be carried out in parallel. For instance, editing of the main package may need to begin while the cameraman has to go off and shoot some extra material.

The combination of skills can be quite intriguing, so we can have a cameraman who can record sound and edit tape; a reporter who can also edit tape and/or shoot video and record sound (often referred to as a video journalist or VJ); or a microwave technician who can operate a camera and edit.

So it is often a juggling act to make sure that the right number of people with the right combination of skills are available on location at the same time.

Getting the story back

There are now three options as to how we get the story back to the studio for transmission on the 6 o'clock news bulletin – it can be:

  • taken back in person by the reporter and/or the cameraman

  • sent back via motorbike despatch rider

  • transmitted from an ENG microwave or SNG microwave truck.

The first two are obvious and so need not concern us further. The third option is of course what we are focused on – and in any case, it is the norm nowadays for sending material in this type of situation from location back to the studio.

As it turns out, the newsdesk – realizing the scale of the story once the reporter was on the scene – had despatched a microwave truck down to the location at 4.00 pm. The ENG microwave or SNG truck (for our purpose here it does not matter which) finds a suitable position, and establishes a link back to the studio, with both programme and technical communications in place.

By just gone 5.00 pm, the tape material (rushes or edited package) is replayed from the VTR in the truck back to the studio.

Going live

The reporter may actually have to do a 'live' report back to the studio during the news bulletin, and this is accomplished by connecting the camera to the microwave link truck (along with sound signal from the reporter's microphone) either via a cable, a fibre optic connection or using a short-range microwave link […].

From the studio, a 'feed' of the studio presenter's microphone is radioed back to the truck, and fed into an earpiece in the reporter's ear, so that the studio presenter can ask the reporter questions about the latest on the situation. The reporter will also be provided with a small picture monitor (out of camera shot) so that they can see an 'off-air' feed of the bulletin.

This is commonly known as a 'live two-way', and what the viewer sees is a presentation of the story, switching between the studio and the location. […]

Typical transmission chain

We now have all the elements that form the transmission chain between the location and the studio, enabling either taped or live material to be transmitted.

The camera and microphone capture the pictures and sound. The material is then perhaps edited on site, and then the pictures and sound – whether rushes, edited or 'live' – are sent back via the truck (ENG microwave or SNG) to the TV station.

The processes that occur at either end of the chain are the same no matter whether the signals are sent back via terrestrial microwave or via satellite.

[…]

Activity 7

Earlier we saw that Taylor suggested eight areas in which digital methods promised improvements:

  1. More efficient newsgathering.

  2. More options for getting the story back.

  3. Faster post production.

  4. Greater editorial freedom.

  5. Broad multiskilling opportunities.

  6. Easier automation.

  7. Improved technical quality.

  8. Lower operating costs.

Can you see any of these appearing in Higgins's description?

Discussion

Of Taylor's suggestions for where digital methods promised improvements, the area that I particularly noticed was number 5, multiskilling, because Higgins explicitly discusses this, and gives examples of:

a cameraman who can record sound and edit tape; a reporter who can also edit tape and/or shoot video and record sound … or a microwave technician who can operate a camera and edit.

Higgins (2004)

Higgins says that the multiskilling was motivated by the need to reduce costs. Assuming the desired outcomes were realised, this is evidence of number 8, lower operating costs.

Several of the others are implied by Higgins, although maybe not made explicit. For example, he describes the use of laptop editors to edit in the field. This is likely to deliver 'faster post-production' and 'greater editorial freedom'.

3.2 IT processes in newsgathering

The generic diagram of a communication system, as discussed previously, is shown in Figure 3. If we think of newsgathering as communication from the reporter in the field (User 1) to news editors in the studio (User 2), then we can relate some of the processes described by Higgins to the processes in the boxes. Note in particular Higgins's summary of the 'typical transmission chain':

The camera and microphone capture the pictures and sound. The material is then perhaps edited on site, and then the pictures and sound – whether rushes, edited or 'live' – are sent back via the truck (ENG microwave or SNG) to the TV station.

Higgins (2004)
Figure 3 Generic diagram of a communication system

So at the transmitter we have:

  • Receives from User 1. This is done by the camera and microphone, which convert the image and sound to electrical signals.

  • Manipulates. Editing on site is an example of manipulation.

  • Send. The transmitter on the truck sends the signals via microwave.

  • Stores/retrieves. Higgins describes the material being recorded to tape (remember that the unedited recorded tapes are referred to as 'rushes'), and then retrieved (either the rushes or edited) to send back to the studio.

The only network activity described here is 'conveying'. The main focus of Higgins's description involves conveying via microwave, but he also makes reference to taking the tapes back in person or using a motorbike despatch rider.

Equipment at the TV studio, including video servers and the computers used by the editors, constitutes the receiver. Here, the microwave signal (or the tapes) is received, and manipulated if any further editing is required. If the item is not being broadcast live then it will be stored and subsequently retrieved at the time of broadcast – presumably it is in any case stored for archive purposes.

I shall now look in more detail at some of the technology used in the field.

4 Anatomy of a digital camcorder

4.1 An introduction to the camcorder

The development of portable camcorders capable of recording long sequences of high-quality video has been important for newsgathering, but camcorders are also very popular consumer items. In this section I shall be looking in more detail at elements of a camcorder, using it as an example of an IT system which you are probably familiar with to some extent. Even if you have not used one yourself you will have seen them being used or seen them in shops. (If you get the opportunity, you may like to have a look at a modern digital camcorder in a high-street shop.)

'Camcorder' is a contraction of '(video) camera and recorder'. There is an implication in this context that a video camera alone might not record (store) the image. Whereas a film camera always stores an image (on film), a video camera might only convert the image to an electrical signal for display remotely on a TV monitor. For example, you can buy video cameras for domestic security which allow you to view outside your house by displaying the output of a camera on your television. If you connect one of these cameras to a video recorder you can record what you see, but the video camera does not itself contain storage. A camcorder is a combination of the two: the camera which converts the image to an electrical signal and a video recorder which records the electrical signal.

Figure 4 represents a camcorder based on the model introduced earlier in the course.

Figure 4 Model of camcorder

Activity 8

How does Figure 4 differ from the model of a stand-alone computer that you have seen before?

Discussion

The only difference in the diagram is that the model of the camcorder has light and sound as additional inputs.

You would not normally think of a camcorder as a computer, but for our purposes, in the context of IT, this is a useful starting point. A camcorder does indeed contain a computer which, because it is hidden from the user and takes inputs from other sources as well as the user, is called an embedded computer. In Figure 4 we draw attention to this feature of the camcorder – the embedded computer – neglecting other features that are for the present purposes irrelevant. Nothing about the shape, the construction materials or the power source appears in Figure 4, for example, although these are all important in other contexts. Also, Figure 4 does not include any details of the things that it shows.

The approach to understanding a device or system by focusing on particular aspects and neglecting details is a common tool of technology, and is known as abstraction. Typically, to analyse a device or system you start at a high level of abstraction, where you consider only very broad features, then move to a lower level of abstraction where you look at more details. This is what I shall do for the camcorder, and in the next section I look at some of the processes involved in receiving light and sound.

4.2 Sound and light input

Figure 5 shows a model of a camcorder at a lower level of abstraction than Figure 4, concentrating on the input of light and sound.

Figure 5 The input of light and sound to a camcorder

Activity 9

Although Figure 5 shows more details of the light and sound input than Figure 4, some other aspects of the camcorder are shown in less detail than in Figure 4. What things have gone, and can you suggest why I haven't shown them?

Discussion

In Figure 5 I have not shown the list of processes – receives, sends, stores/retrieves, manipulates – nor have I shown the oval labelled 'user' with the two input/output arrows. I have omitted these things because I am concerned specifically with the light and sound inputs, and the other features would be a distraction for the moment. This is an important feature of abstraction: selecting what is important.

I shall now go on to discuss the components shown in Figure 5 in more detail.

4.2.1 Microphone

A microphone converts sound in the air to an electrical signal. Sounds consist of pressure waves in (usually) the air, and to reproduce a particular sound it is necessary to reproduce the corresponding pattern of pressure waves. A microphone converts the pressure waves in the air to the same pattern of voltage variations on a wire. If this pattern of voltage variation is applied to a loudspeaker, the speaker converts the electrical signal back to pressure waves in the air, reproducing the sound.

4.2.2 Microphone input subsystem

We'll now move on to consider the microphone input subsystem.

Activity 10

Is the output from the microphone an analogue or a digital signal?

Discussion

The output from a microphone is an analogue electrical signal. Just as sound in the air involves continuous pressure variations over a continuous range, so the voltage from the microphone varies continuously over a continuous range.

Because I am considering here a digital camcorder, the analogue audio signal will be converted to a digital signal, and this is the main function of the microphone input subsystem. The digital audio signal might also be compressed, and put into a standard format.
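The main job of the subsystem – analogue-to-digital conversion – can be sketched in a few lines of Python. This is an illustrative model only: the 8-bit resolution and the 48 kHz sampling rate are typical example values, not details taken from any particular camcorder.

```python
import math

def quantise(voltage, v_min=-1.0, v_max=1.0, bits=8):
    """Map an analogue voltage onto one of 2**bits discrete levels."""
    levels = 2 ** bits
    v = min(max(voltage, v_min), v_max)        # clamp to the working range
    step = (v_max - v_min) / (levels - 1)      # size of one quantisation step
    return round((v - v_min) / step)           # integer level, 0 .. levels-1

# Sample one millisecond of a 1 kHz tone at 48,000 samples per second
sample_rate = 48_000
samples = [quantise(math.sin(2 * math.pi * 1000 * n / sample_rate))
           for n in range(48)]
print(samples[:6])   # the first few 8-bit sample values
```

A real camcorder would also compress and format the digital audio, but those stages sit on top of the same sample-and-quantise idea shown here.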

4.2.3 Lens system

The function of the lens system is to project an image onto the CCD light sensor. A well-designed system ensures that the image is sharp – in focus – and bright. To achieve this, what is needed is a good-quality glass lens (or rather a series of lenses) and accurate focusing. The brightness of the image depends upon the size of the lens. The bigger the lens, the brighter the image, but bigger lenses are more difficult to make and therefore more expensive. Also, of course, bigger lenses mean bigger and heavier cameras. As so often in technology there are trade-offs between several factors. In this case, brightness of the image (and therefore the ability of the camera to operate in low light levels) interacts with cost, size, weight and image quality. I shall discuss focusing in more detail later, after describing the light sensor.

4.2.4 CCD light sensor

The CCD light sensor is a transducer that converts light to an electrical signal. CCD stands for 'charge coupled device', and physically a CCD light sensor is an integrated circuit with a transparent cover. A photograph of one is shown in Figure 6. Under the cover is a rectangular array of light-sensitive electronic components called photosites. You do not need to know the mechanism involved, but each photosite provides an analogue electrical output that measures how bright the light is on that site. Each photosite can therefore contribute one pixel to the detected image. Important parameters of a CCD light sensor are the size of the light-sensitive area and the number of photosites – and hence the number of pixels in the image it can produce.

Figure 6 Picture of a CCD (source: NanoElectronics Japan)

The size of the device is usually expressed in terms of the length of a diagonal line from one corner of the rectangle to the other. As you often find in IT, advances in the technology lead to a reduction in the size, with the same or better performance and maybe lower cost. CCD sensors are an example of this as Taylor observes in his 2004 updating paper:

[There has been] the widespread adoption of small hand held mini-camcorders producing good quality images from small 1/3rd inch CCDs originally developed for the consumer market. An enormous amount of Japanese development effort has gone into producing high resolution, high sensitivity small CCDs to the point where current 1/3rd inch CCD produces better all round performance than a standard 2/3rd inch broadcast format camcorder of 5 years ago, but at around only 20% of the cost. Now there is a trend to move to 1/5th inch CCDs which will enable even cheaper and more compact camcorders. This lower cost has enabled news companies to put greater numbers of camcorders into the field and gather wider cross section of material.

Taylor (2004)

For high-resolution images large numbers of pixels are needed, and at the time of writing, camcorders can have up to several 'megapixels', where one megapixel is 1 million pixels (10⁶ pixels).

Activity 11

If a camcorder has a CCD with an array of 811 pixels horizontally by 508 pixels vertically, how many pixels is that in total? Give your answer to two significant figures using scientific notation.

Discussion

811 pixels horizontally by 508 pixels vertically gives a total of 811 × 508 = 411,988 pixels. To two significant figures that is 410,000, or 4.1 × 10⁵ in scientific notation.
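As a quick check, the arithmetic in this activity can be reproduced in a couple of lines of Python:

```python
pixels = 811 * 508
print(pixels)                          # 411988

# Two significant figures, expressed in scientific notation
mantissa = round(pixels / 10 ** 5, 1)
print(f"{mantissa} x 10^5 pixels")     # 4.1 x 10^5 pixels
```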

The output from one photosite on a CCD is a measure of how bright the light is at that site. It contains no information about the colour of the light. To get colour information, coloured filters are placed in front of the CCD so that separate photosites measure the brightness in each of the three primary colours of light: red, blue and green.

Some cameras use separate CCDs for each of the three colours whereas others use a single CCD with different coloured filters interleaved over individual sites, but the details of the configuration of the filters do not concern us here. If you want to know more you will be able to find information by searching on the Web.

Instead of using CCDs for light sensors, some cameras use CMOS (complementary metal oxide semiconductor) sensors. There are differences between CCD sensors and CMOS sensors – broadly speaking CCD sensors provide better image quality but cameras using CMOS sensors can be smaller – but at the level of the discussion here they essentially perform the same function.

I shall now return to the lens system of the camera, to explain how it focuses the light.

4.2.5 Focusing

Focusing is done by adjusting the size of the gap between the lens and the light sensor. To get distant objects in focus, the gap needs to be smaller than that required for close objects (see Figure 7 below).

Figure 7 Focusing light using a camera lens

In theory the exact gap size is determined by the exact distance to the object being filmed. In practice, one gap size will be adequate for a range of object distances, this range being called the 'depth of field'.

Focusing in a camcorder is invariably done with an electric motor moving a lens, and there will be a facility to focus automatically (autofocus). Autofocusing is either passive or active.

Passive autofocusing works by a computer embedded in the camera examining the image (from the light sensor) to determine whether it is in focus or not.

Activity 12

How do you decide whether an image is in focus? What do you think the camera's computer can look for to determine whether the image is in focus?

Discussion

You can tell whether an image is in focus by seeing how 'sharp' it is. The camera's computer looks for sharp edges – sudden changes in colour or brightness. These abrupt changes will be present only if the image is in focus.

Under control of the camera's computer, the motor will move the lens in and out to find the best focus, identified by the presence of sharp lines in the image.
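One way to picture what the embedded computer is doing is a simple 'sharpness score': add up the brightness changes between neighbouring pixels, and move the lens to the position that maximises the score. The Python sketch below uses a single row of invented pixel values to show that the same edges score lower once they are blurred:

```python
def sharpness(row):
    """Sum of the brightness changes between neighbouring pixels: a crude
    measure of how many sharp edges a line of the image contains."""
    return sum(abs(a - b) for a, b in zip(row, row[1:]))

in_focus     = [10, 10, 200, 200, 10, 10]   # a bright feature with abrupt edges
out_of_focus = [10, 70, 140, 140, 70, 10]   # the same feature, blurred

print(sharpness(in_focus))      # 380
print(sharpness(out_of_focus))  # 260
```

A real autofocus routine would compute something like this over many rows of the image while stepping the lens motor, then settle on the lens position that gave the highest score.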

Active autofocusing works by the camera measuring the distance to the object viewed, and using that to calculate the gap needed between lens and light sensor. It measures the distance by sending out pulses of infrared light towards the object being filmed, and measuring how long it takes the reflected light to get back to the camera. During this time, between sending the pulse and detecting the reflected pulse, the light does a round trip – it travels out and back again. The time the light takes to go one way is therefore half the measured interval. I'll call this time – half the time between sending and receiving the pulse – the transit time.

The speed of light is known, so the transit time can be used to calculate the distance between the camera and the object.

Specifically, the distance from the camera to the object is given by the transit time multiplied by the speed of light.

This can be written much more concisely using

  • d for the distance to the object

  • t for the transit time

  • c for the speed of light.

Strictly c is the speed of light in free space (a vacuum). In the atmosphere – through air – light travels at almost the same speed as in a vacuum, and we can neglect the difference. In glass, however, light travels substantially slower (about two-thirds the speed) and it is necessary to take account of this fact when considering light in optical fibre.

Then I can write:

d = t × c

When writing equations like this, it is a convention that you can miss out the multiplication symbol. Any numbers or letters written next to each other are taken to be multiplied together, so we can write:

d = tc

Equations relating three quantities in this particular way, where one (in this case d) is given by multiplying the other two (in this case t and c) together, are quite common, and you will be meeting other examples later in this course. It is useful to draw this type of equation in a 'formula triangle', as shown in Figure 8. The quantities that are multiplied together go in the bottom two corners (it doesn't matter which way around), and the thing they calculate goes in the top corner. I am not going to say any more about the formula triangle for the moment; it might seem a little mysterious now, but its purpose will become clear when we use it later in the course.

Figure 8 A formula triangle

I now want to put in a value for the speed of light, c, so that I will have a formula that allows me to calculate the distance d directly from the transit time t.

The speed of light in metres per second is 3 × 10⁸. That is to say, light travels 3 × 10⁸ = 300,000,000 metres every second. When I am doing calculations related to the focusing of the camera, however, I will find the times I am using will be much smaller than a second and the relevant distances will usually be of the order of a few metres – not three hundred million metres! What I am going to do, therefore, is express the speed of light in terms of how far it travels in one nanosecond (ns), which is one thousand-millionth (10⁻⁹) of a second.

Activity 13

How far will light travel in one nanosecond?

Discussion

Light travels 300,000,000 metres in one second, so in one nanosecond it travels:

300,000,000 ÷ 1,000,000,000 = 0.3 metres

So I can express the speed of light as 0.3 m/ns (0.3 metres per nanosecond). But this means that when I write the equation, t represents a time in nanoseconds and d represents a distance in metres. It is important that all the units match. Using a value of c = 0.3 m/ns, and provided t and d are in nanoseconds and metres respectively, I get:

d = t × 0.3

It doesn't matter which way round you write a multiplication (4 × 5 is the same as 5 × 4) and it is a convention always to put numbers before letters, so this would normally be written:

d = 0.3t

For example, if the transit time is 20 ns, the distance in metres is given by:

d = 0.3 × 20 = 6

So the distance is 6 m.

Activity 14

If the transit time is 14 ns, how far away is the object?

Discussion

The distance in metres is given by:

d = 0.3 × 14 = 4.2

So the distance is 4.2 m.
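The calculation used in the last two activities can be captured as a short Python function. This is a minimal sketch (the function names are my own); the halving of the measured interval follows the round-trip argument above.

```python
SPEED_OF_LIGHT = 0.3   # metres per nanosecond

def distance_from_transit_time(t_ns):
    """d = 0.3t: distance in metres for a one-way transit time in nanoseconds."""
    return SPEED_OF_LIGHT * t_ns

def distance_from_round_trip(interval_ns):
    """The infrared pulse travels out and back, so the one-way transit time
    is half the interval between sending the pulse and detecting its echo."""
    return distance_from_transit_time(interval_ns / 2)

print(distance_from_transit_time(20))   # 6.0 m (the worked example)
print(distance_from_transit_time(14))   # about 4.2 m (Activity 14)
print(distance_from_round_trip(40))     # a 40 ns round trip is a 20 ns transit
```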

Algebra and the use of symbols

The use of symbols to represent numerical values, such as 'd' for distance, 't' for time and 'c' for the speed of light, is the starting point for algebra. If you don't like maths this might be worrying, but I hope that when you get used to it – and simple familiarity goes a long way to demystifying algebra – you will see that at the very least it provides a useful shorthand.

If you know that c is being used to represent the value for the speed of light and t a time duration, then you will automatically read c×t as 'multiply the speed of light by the time duration'. It is helped – when you are used to it – by the fact that the speed of light is nearly always represented by c, and that t refers to time in lots of different contexts. Notice, incidentally, that there is a subtle difference in the use of c compared with d and t, because c is a fixed number, but d and t can change. We say that c is a 'constant' whereas d and t are 'variables'.

Besides the use of symbols being a shorthand, there is much that you can do by 'manipulating' them, but you will only be meeting this at a simple level in this course. There is more about algebra in The Sciences Good Study Guide, Maths Help, Section 9 (Northedge et al., 1997).

Remember that the reason for discussing this calculation was to show that when using active focusing the camcorder can measure the distance to an object by transmitting and detecting an infrared pulse. Active autofocusing therefore involves another output from and input to the camcorder (sending and receiving the infrared pulse). We can show this by adding it to the high-level diagram of a camcorder that was shown in Figure 4, to get that shown in Figure 9.

Model of camcorder with active autofocusing
Figure 9 Model of camcorder with active autofocusing

At a lower level of abstraction, the components of the active autofocusing are shown in Figure 10, together with other relevant components of the camcorder.

Active autofocusing
Figure 10 Active autofocusing

Figure 10 is similar to Figure 5, but I have removed the detail of the sound subsystem in order to concentrate on the light and focusing subsystems.

Activity 15

Write down what you think is done by each of the following boxes in Figure 10

  1. Infrared transmitter/receiver

  2. Motor output subsystem

  3. Focusing motor

Discussion
  1. The infrared transmitter/receiver receives a digital signal from the rest of the camcorder which instructs it to generate and transmit a pulse of infrared light. It also detects the reflected pulse of infrared light, which it reports back to the rest of the camcorder in the form of a digital signal.

  2. The motor output subsystem receives a digital signal from the rest of the camcorder which it converts to the appropriate analogue electrical signal that drives the focusing motor.

  3. The focusing motor changes the position of the lenses so that light is focused on the light sensor. It is controlled by the electrical signal it receives from the motor output subsystem.

4.3 Recorder

The descriptions of newsgathering in the extracts by Taylor and Higgins make reference to videotapes because, until recently, tape was the main storage medium used for video. Camcorders had a built-in VTR (videotape recorder) and in the first camcorders the video was recorded as an analogue signal. Camcorders used for ENG (electronic news gathering) now store the video as a digital signal, whatever medium is used for recording. Camcorders are now appearing which can store the digital video on DVDs or on memory cards.

Taylor (2004) compares DVD-based camcorders and memory-card camcorders in his updating paper. An extract from this follows.

[…]

[T]wo quite different solutions are competing for the TV news broadcaster's camcorder business:

  • DVD based camcorders

  • Solid state flash memory based camcorders

Ever since the development of the DVD, its potential as a rugged, compact, random access medium has made it attractive to TV news companies as an alternative to video tape for use in camcorders and also as an archive medium. It is not surprising therefore that at least one broadcast equipment manufacturer (Sony) is currently introducing camcorders and field players using the latest 'blue disk' DVD technology […] which offers approximately 27 GB recording capacity on a 120 mm disk. At 25 Mbps this provides about two and a half hours of recording. The hope is that the blue disks will soon be compatible with the DVD drives available in standard laptops thus facilitating convenient low cost field editing.

There are a few physical limitations with this solution due to the size of the disk, its susceptibility to vibration and unreliability of the disk burning process at low temperatures. However, careful design is minimising the impact of these limitations and a number of TV news organisations are changing over to DVD acquisition.

Even more exciting, last year [2003] Panasonic showed a prototype completely solid state camcorder (called P2 CAM) using four of the consumer SD [Secure Digital] flash memory cards embedded in a standard PCMCIA package called the P2 card.[PCMCIA (Personal Computer Memory Card International Association) is an industry trade association that creates standards for the memory cards that slot into notebook computers and other small portable devices.] Using 1 GB SD cards each P2 card supplies an initial 4 GB memory capacity. The camcorder holds five P2 cards to give it a total recording duration at 25 Mbps video of about 90 minutes (allowing for audio channels and overheads). Clearly this camcorder totally overcomes the physical limitations which are inevitable with any mechanical recording system and it created enormous interest from the broadcast industry. It is now in production and being evaluated by a number of TV news companies.

The convenience of simply taking the P2 card out of the camcorder and slotting it directly into a laptop computer for editing and downloading to the station server system, via the Internet if necessary, was instantly appreciated. The Achilles' heel of this otherwise ideal solution is the cost of the SD media, but as experience indicates, the cost per Gigabyte of flash memory is reducing by a factor of around 4 per annum. It will not be too long before individual SD cards reach 16 GB capacity at affordable prices. Each P2 card would then be capable of storing nearly five hours at 25 Mbps – more than enough capacity to handle HD [high definition] TV at 100 Mbps! It is also worth noting that Panasonic point out that, unlike tape, the SD cards are 'non-consumables' lasting the life of the camera. Therefore only a relatively small number of cards are required for each camera as the content should be downloaded into station servers soon after shooting, thus freeing up the cards and making the cost of the medium unimportant.

When either, or both, of these new acquisition technologies replace video tape based camcorders over the next couple of years the migration of the TV news industry to IT technology will be complete. It will have been an extraordinary revolution in terms of its implementation speed having taken less than 10 years from servers and hard disk editing systems first attracting TV news companies' attention. Remarkably it is only around 25 years since Electronic News Gathering ousted film and ushered in the all electronic era.

Taylor (2004)

Tape, DVD and memory cards use three different physical principles for storing data (Figure 11), and have different merits and limitations.

Storage media
Figure 11 Storage media

Tape is a magnetic storage medium. In very general terms, data is stored by the orientation of the magnetic field in microscopic particles on the tape. The orientation is set when writing to the tape and detected when reading from it. Tape is cheap and can store large amounts of data, but has the significant disadvantage that it can only be written to and read from sequentially. It is not possible to jump to somewhere in the middle of the tape; you have to run through it to get there. Also, compared with DVDs and memory cards, tape is less robust. It can be quite easily damaged and wears out after repeated use.

A DVD is an optical storage medium. Data is written to it by putting microscopic marks on the surface of the disk. Data is read from it by detecting the presence of the marks, which is done by shining a laser onto the surface and measuring the amount of light reflected. Depending on the type of disk, writing to the disk may be permanent or reversible (for a DVD-RW, rewritable, disk). DVDs are now cheap and robust. Certainly data can be read from the disk any number of times with no significant degradation to the disk, and RW disks can be rewritten many times. DVDs do not need to be read serially like tape because it is possible to jump straight to anywhere on the disk – they are random access.

Memory cards use flash memory, which is an electrical storage medium, used in an increasing range of applications including digital (still) cameras. Microscopic cells in an integrated circuit can be set to a voltage and they remain at that voltage by holding electrical charge even when the power is disconnected, until deliberately changed. Some types of flash memory are random access, but others require sequential access. Writing to and reading from memory cards is faster than with a DVD.

Whereas a tape has to be moved physically through a VTR, and a DVD is spun around as the read/write 'head' moves across the disk, there are no moving parts involved in using a memory card. This is what Taylor is referring to when he says they are solid state. Generally speaking, moving parts are more prone to wear and tear and failure, so solid state components tend to be more reliable and last for longer. The equipment for reading and writing to flash memory is therefore more rugged than that for tapes and DVDs.

Activity 16

Using information from the above discussion, answer the following questions about tape, DVDs and flash memory.

  1. Which allow random access when reading?

  2. Which is solid state?

Discussion
  1. DVD and some types of flash memory allow random access, but tape does not. Tape is written and read sequentially.

  2. Flash memory is solid state. Tape and DVDs require moving parts to read and write data.

Taylor makes extensive use of approximate calculations on memory sizes, data rates and recording durations in the extract. It is instructive to 'unpick' some of his calculations.

Taylor says that the 27 GB (gigabyte) capacity of the blue disk DVD provides about two and a half hours of recording at 25 Mbps (megabits per second). A gigabyte is about 1,074,000,000 bytes and there are eight bits in a byte, so 27 GB is about:

27 × 1,074,000,000 × 8 ≈ 2.32 × 10^11 bits

(to three significant figures).

One gigabyte is actually 2^30 = 1,073,741,824 bytes, but I don't need all the figures because I only want the answer to three significant figures.

At 25 Mbps this is enough storage for:

2.32 × 10^11 bits ÷ 25 × 10^6 bits per second ≈ 9280 seconds

Dividing by 60 for the number of seconds in a minute gives 155 minutes, which is indeed 'about two and a half hours'.
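Taylor's capacity-to-duration arithmetic generalises to a single calculation. Here is a minimal Python sketch (function and constant names are my own), using the 2^30 bytes-per-gigabyte convention of the text:

```python
# Approximate recording duration for a given storage capacity and bit rate.
BYTES_PER_GB = 2**30   # one gigabyte as used in the text: 1,073,741,824 bytes
BITS_PER_BYTE = 8

def recording_minutes(capacity_gb, rate_mbps):
    """Approximate recording duration in minutes at the given bit rate."""
    total_bits = capacity_gb * BYTES_PER_GB * BITS_PER_BYTE
    seconds = total_bits / (rate_mbps * 1e6)  # Mbps to bits per second
    return seconds / 60

print(round(recording_minutes(27, 25)))  # 155 minutes: 'about two and a half hours'
print(round(recording_minutes(20, 25)))  # 115 minutes, before audio and overheads
```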

About the P2 cards Taylor says:

four of the consumer SD [Secure Digital] flash memory cards [are] embedded in a standard PCMCIA package called the P2 card. Using 1 GB SD cards each P2 card supplies an initial 4 GB memory capacity.

Taylor (2004)

This is simply saying: 4×1 GB = 4 GB

Taylor then says:

The camcorder holds five P2 cards to give it a total recording duration at 25 Mbps video of about 90 minutes (allowing for audio channels and overheads).

Taylor (2004)

Five P2 cards will store 5 × 4 GB = 20 GB. That is 20 gigabytes which, using a similar calculation to the one we did above, is about 1.72 × 10^11 bits. At 25 Mbps that is enough for:

1.72 × 10^11 bits ÷ 25 × 10^6 bits per second ≈ 6880 seconds

Dividing by 60 for the number of seconds in a minute gives about 115 minutes. That is rather more than the 90 minutes that Taylor estimates, but we have not taken account of the audio coding or any overheads. (Overheads are extra bits that are needed to manage the data – file names, bits to keep the video and audio synchronised, etc.) We don't have any information on how much should be allowed for these other factors, but it seems reasonable that they could account for the difference.

Later, Taylor says:

It will not be too long before individual SD cards reach 16 GB capacity at affordable prices. Each P2 card would then be capable of storing nearly five hours at 25 Mbps – more than enough capacity to handle HD TV at 100 Mbps!

Taylor (2004)

If an SD card stores 16 GB, then a P2 card (which holds four SD cards) can store 4 × 16 GB = 64 GB. Using Taylor's claim that 20 GB is enough for about 90 minutes, 64 GB should be enough for about:

(64 ÷ 20) × 90 minutes = 288 minutes

Dividing by 60 for the number of minutes in an hour, that is 4.8 hours, which is 'nearly five hours', as Taylor says.

Activity 17

Based on Taylor's statement:

Each P2 card would then be capable of storing nearly five hours at 25 Mbps – more than enough capacity to handle HD TV at 100 Mbps!

Taylor (2004)

approximately what duration of HD TV should a P2 card be capable of storing?

Discussion

We are told that a P2 card can store five hours at 25 Mbps and that HD TV (high-definition television) runs at 100 Mbps, which is four times the data rate. This indicates that a P2 card should be able to store 5/4 hours, which is 1 hour and 15 minutes, of HD TV.

4.4 Batteries

Though batteries are in some ways less glamorous than other components of IT systems, advances in battery technology are every bit as important to the success of IT as developments in other areas.

Before I can say anything useful about batteries, however, you need to know some basic ideas about electricity.

4.4.1 Voltage, current and resistance

Voltage (or, more correctly, electromotive force, emf – but I shall follow common practice and just say voltage) is a measure of the force with which electricity is 'pushed'. Nothing happens, however, unless there is an electric circuit, which is a path from one terminal of a voltage source (the battery, in this case) to the other, along which the electricity can flow (Figure 12).

An electric circuit
Figure 12 An electric circuit

If there is an electric circuit, the rate at which electricity flows is determined by the nature of the circuit and the value of the battery's voltage. To quantify the rate at which electricity flows we need to know how much the circuit allows or resists the flow of electricity, and this is determined by a measure known as resistance.

If a circuit has a high resistance, little electricity flows for a given voltage. If it has a low resistance, a lot of electricity flows. Resistance is measured in units called ohms, and the rate of flow of electricity is measured in units called amps. The rate of electricity flow, which we call the electric current, in amps is calculated by dividing the battery voltage in volts by the circuit resistance in ohms.

That is:

current in amps = voltage in volts ÷ resistance in ohms
Qualitatively, you might be able to see that this is plausible. The bigger the voltage (the stronger the push), the more current will flow. The bigger the resistance, on the other hand, the smaller the current.

The convention is to use symbols, i for current, v for voltage and r for resistance, so we write:

i = v ÷ r
For example, if a battery voltage is 3 volts and the circuit resistance is 10 ohms the current flowing is:

i = 3 ÷ 10 = 0.3 amps
The symbol for amps is a capital A, so this is written as 0.3 A. The symbol for volts is a capital V and for ohms is the capital Greek letter omega, Ω.

One amp is a fairly large current flow for electronic equipment, and quite often it is easier to work in units of 1/1000 of an amp which is 1 milliamp, written as 1 mA. Similarly, a resistance of 1 ohm is very small for electronic equipment and often resistance will be expressed in units of kilohms, kΩ, which are thousands of ohms.

Activity 18

If the battery voltage is 1 V and the circuit resistance 1 kΩ, what is the current flow in mA?

Discussion

1 kΩ = 1000 Ω, so the current is given by:

i = 1 V ÷ 1000 Ω = 0.001 A = 1 mA
The formula triangle that I introduced earlier can be used with the equation for v, i and r. This time, because v is on the top of the fraction, v goes at the top of the triangle, with i and r in the bottom corners (again it does not matter which way around), resulting in a formula triangle as shown in Figure 13. The relationship between voltage, current and resistance represented by this triangle has a special name: it is known as Ohm's law. (Georg Simon Ohm, 1789–1854, was the German physicist who first described this relationship.)

The formula triangle for Ohm's law
Figure 13 The formula triangle for Ohm's law
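Ohm's law lends itself to a one-line calculation. Here is a minimal Python sketch (an illustrative helper of my own) reproducing the figures used above:

```python
# Ohm's law: current (amps) = voltage (volts) / resistance (ohms).
def current_amps(voltage_volts, resistance_ohms):
    return voltage_volts / resistance_ohms

print(current_amps(3, 10))    # 0.3 A, the worked example
print(current_amps(1, 1000))  # 0.001 A, i.e. 1 mA, as in Activity 18
```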

4.4.2 Battery parameters

Now that we have covered some background on electricity, I will return to discussing batteries.

Activity 19

What do you think would be the important characteristics of a battery for a portable IT device such as a camcorder or a mobile telephone?

Discussion

The things that I thought of as the most important were:

  • Weight: if the device is to be portable it must not be too heavy

  • Size: for a portable device it needs to be kept small

  • Running time: the user does not want to have to replace or recharge the battery too frequently

  • Cost: it should not be too expensive.

I'd like to explore how these parameters are specified in more detail. To make the discussion more concrete I'll compare examples of some of the most widely available types of rechargeable battery. I shall look at two different sizes: AA and C. You are probably familiar with these sizes because AA is widely used in portable radios and CD players while the larger C size is used for torches, bicycle lights and portable stereos, among other things. Batteries described as LR6 and MN1500 are the same size as AAs, while R14 and MN1400 are the same size as Cs.

For each of the sizes I shall compare two battery technologies: nickel–cadmium (abbreviated to the chemical symbols NiCd, or called 'NiCad') and nickel–metal hydride (abbreviated to NiMH). At the time of writing, although both NiCd and NiMH batteries are widely available, NiCd batteries are declining rapidly in popularity. I shall have more to say about why this is later, but for the moment it is convenient for my purposes to compare the two technologies.

Batteries produce electricity by a chemical reaction, and nickel–cadmium or nickel–metal hydride refer to the chemicals used in the battery. All NiCd batteries will have some similar characteristics because they use the same chemistry, but different sizes and physical constructions will lead to some differences. Likewise for all NiMH batteries, or any other chemistry.

Some basic data on specific examples of each of these four batteries is given in Table 1.

Table 1: Data based on Ansmann batteries
Size | Chemistry | Voltage | Height | Diameter | Weight | Capacity | Price
AA   | NiCd      | 1.2 V   | 50 mm  | 15 mm    | 24 g   | 0.8 Ah   | £1.40
AA   | NiMH      | 1.2 V   | 50 mm  | 15 mm    | 24 g   | 2.1 Ah   | £2.50
C    | NiCd      | 1.2 V   | 60 mm  | 26 mm    | 75 g   | 1.7 Ah   | £4.00
C    | NiMH      | 1.2 V   | 60 mm  | 26 mm    | 80 g   | 3.5 Ah   | £7.00

Footnotes  

Ah: amp-hours.

All four batteries in Table 1 provide 1.2 volts. This is a consequence of the chemistry used, and the fact that each one is a single 'cell'. The cell is the basic building block of the battery, and to get higher voltages, cells can be connected together, as shown in Figure 14. This way of connecting cells or batteries, with the positive terminal of one connected to the negative terminal of the next, is known as connecting in series, and results in an output voltage that is the sum of the voltages of the individual batteries. The voltages just add together. You might be familiar with this way of connecting batteries from when you have put batteries in radios or torches, where they are nearly always connected in series.

Strictly the term 'battery' should only be used when there is a combination of cells used together, not for the single cells of an AA or C 'battery', but this is a distinction that is rarely adhered to.

Cells connected together to get a higher voltage
Figure 14 Cells connected together to get a higher voltage

Batteries using a different chemistry produce different voltages from a single cell. A single cell of an alkaline battery (the technology used for the most common non-rechargeable batteries), for example, produces 1.5 volts. The chemistry of NiCd and NiMH is similar, so they produce the same voltage as each other.

The battery size, AA or C, characterises the dimensions, expressed as the battery height and diameter.

You can see that the weights of the two AA batteries are the same, and there is only a small difference between the weights of the two C batteries. In fact it is only when we come to the battery capacity and the prices that there is a significant difference between the NiCd and the NiMH batteries.

The NiMH batteries are more expensive but have a greater capacity. The units of capacity, Ah, are 'amp-hours', amps multiplied by hours. The idea behind this is that you can't specify a single value for the length of time a battery can be used because it depends upon the current being drawn from it. If you draw a lower current the battery will last longer. However if you multiply the value of the current being drawn by the length of time it can be used, you get a constant value: the battery capacity.

For example, a battery with a capacity of 1 Ah could supply 1 A for 1 hour, or else it could supply 2 A for half an hour or 0.5 A for 2 hours. More generally, if a battery can run at a current i for t hours, then its capacity is:

capacity = i × t
If you know the capacity of a battery and want to know how long it can be used to supply a given current, then you divide the capacity by the current.

The time for which the battery can be used, t, is given by:

t = capacity ÷ i
Activity 20

The form of these equations should be starting to look familiar by now. Again, the relationship between capacity, running time (t) and current (i) can be represented by a formula triangle. Draw one now.

Discussion

The relationship between capacity, i and t was presented in two forms:

capacity = i × t

and

t = capacity ÷ i
From either of these, and from what you were told previously, you can see that capacity is the quantity that should go at the top of the triangle, so that the triangle is in either of the forms shown in Figure 15.

Formula triangle relating battery capacity to current and running time
Figure 15 Formula triangle relating battery capacity to current and running time
Activity 21

In a test, it is found that a battery can be used for 10 hours supplying a current of 0.4 A.

  1. What is the capacity of the battery in Ah?

  2. If a current of 0.3 A is flowing from the battery, how long can it be used for?

Discussion
  1. capacity = i × t = 0.4 A × 10 h = 4 Ah

  2. time battery can be used at 0.3 A:

     t = capacity ÷ i = 4 Ah ÷ 0.3 A ≈ 13.33 hours
This is 13 hours and 20 minutes.
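The capacity relationships used in Activity 21 can be sketched in Python (function names are my own):

```python
# Battery capacity: capacity (Ah) = current (A) x running time (h),
# so running time (h) = capacity (Ah) / current (A).
def capacity_ah(current_a, time_h):
    return current_a * time_h

def runtime_hours(capacity, current_a):
    return capacity / current_a

print(capacity_ah(0.4, 10))             # 4.0 Ah, as in Activity 21
print(round(runtime_hours(4, 0.3), 1))  # 13.3 hours (13 hours 20 minutes)
```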

It is important to appreciate that the figures quoted for capacity and the length of time a battery can be used depend very strongly on the way it is being used and the temperature. Also, a battery does not just suddenly run out of electricity – it is not like a car running out of petrol where suddenly there is no more and it stops. Rather, while a battery is being used the voltage falls and as the battery runs out (goes 'flat') its voltage drops more quickly. When specifying a battery's capacity a lower limit to the acceptable voltage is specified, and the battery is defined as flat when that lower limit is reached.

Bearing all this in mind, the values for battery capacity are nevertheless useful for comparisons and estimates of battery performance.

Activity 22

How long could a device which uses 0.1 A be run from each of the four batteries in Table 1?

Discussion
  1. AA, NiCd. Capacity of 0.8 Ah, so it can run for 0.8/0.1 h=8 hours.

  2. AA NiMH. Capacity of 2.1 Ah, so it can run for 2.1/0.1 h=21 hours.

  3. C NiCd. Capacity of 1.7 Ah, so it can run for 1.7/0.1 h=17 hours.

  4. C NiMH. Capacity of 3.5 Ah, so it can run for 3.5/0.1 h=35 hours.

As you can see from Table 1, for the batteries considered here, an NiMH battery has a greater capacity than an NiCd battery of the same size and weight, but costs more. This is generally true for NiMH compared with NiCd batteries, although as NiMH become more widely used their prices are getting lower. These are not the only considerations when choosing batteries. For example, with rechargeable batteries such as these there are also issues as to how easy they are to recharge and how many times they can be recharged. On these considerations, broadly speaking NiMH batteries come out better than NiCd batteries. Another significant consideration is the fact that cadmium is highly toxic (poisonous) and so NiCd batteries should be handled carefully and should not be disposed of with other waste, but should be recycled so that the cadmium is extracted safely. For all these reasons, NiCd batteries are falling out of favour.

Activity 23

Have a look for any batteries that you have, especially rechargeable batteries, and see if they say what voltage they are and what capacity they have. Compare them with those in Table 1. Alternatively, if you don't have any rechargeable batteries, you can look for adverts to see what information you can find. (Non-rechargeable batteries don't generally quote their capacity.)

Discussion

I had an AA NiCd battery. It was labelled as '1.2 V, 0.65 Ah'. This is lower capacity than the AA NiCd in Table 1. I weighed it on the kitchen scales (on a piece of paper as a precaution, remembering the toxicity of cadmium) and found it was about 55 g. Clearly mine is inferior to some NiCds – heavier and lower capacity – but I don't recall how much it cost (I bought it several years ago).

I also found both NiCds and NiMH batteries advertised in catalogues that I had at home. The catalogue listed NiCd and NiMH batteries with similar capacity to those in Table 1. Some of them quoted the capacity in mAh, which is milliamp-hours. The value in mAh needs to be divided by 1000 to get Ah so, for example, the capacity of a battery advertised as a 'super high capacity' NiMH AA battery was given as 2300 mAh. This is equal to 2.3 Ah.

Another important type of battery is based on chemical reactions involving lithium. 'Lithium Ion' (Li-ion) batteries are commonly used in laptop computers and other portable IT equipment.

A complication when comparing Li-ion batteries with NiCd and NiMH batteries is that the voltage delivered by an Li-ion cell is around 3.6 volts, compared with the 1.2 volts of NiCd and NiMH cells. To make fair comparisons of capacity you need to be looking at supplies at the same voltage.

Activity 24

If some equipment requires a 3.6 volt power supply it can use a single Li-ion cell. How many NiCd or NiMH cells would it need, and how should they be connected?

Discussion

To get 3.6 volts from NiCd or NiMH cells, which are each 1.2 volts, three cells would need to be connected in series, as shown in Figure 16.

Cells connected in series, for the answer to Activity 24
Figure 16 Cells connected in series, for the answer to Activity 24

When the different voltages have been taken into account, the capacities of battery packs using Li-ion batteries are greater than packs using NiCd and NiMH batteries for a given size and weight.

There are other pros and cons to Li-ion batteries, and a particular disadvantage is the need to control more carefully the charging and discharging of the batteries, both to maximise the battery life and for safety reasons.

5 Signal transmission

5.1 Transmission of electrical signals on wires

In the discussions of newsgathering in the Taylor and Higgins papers, you saw the significance of the development of systems that allowed long-distance transmission of electronic signals. Initially transmission used metallic wires (remember Taylor's reference to the importance of the 'lines infrastructure' and his mention of the 'wire picture') and later wireless transmission (terrestrial and satellite microwave) became important. In this section I shall look at some aspects of the transmission of digital signals, starting with a close look at the transmission of electrical signals on wires.

Using a wire to transmit a signal is simple in principle: you operate a switch at one place and observe the effect somewhere else. In Figure 17 I have shown this as a light coming on at a remote location. Notice in this diagram the standard symbol for a battery consisting of two parallel lines, one shorter than the other, and the symbol for a light bulb which is a circle with a cross in it. The standard symbol for a switch that can be 'open' or 'closed' consists of two dots and a line which either connects the dots (when the switch is closed as in Figure 17b) or misses one of the dots (when the switch is open, as in Figure 17a). To switch the light on you close the switch so that there is an electric circuit from the battery to the light bulb and back again. To switch the light off you open the switch to 'break' the circuit.

Switching a light on at a remote location: (a) light off; (b) light on
Figure 17 Switching a light on at a remote location: (a) light off; (b) light on

When you switch a light on, the light appears to come on immediately. There does not appear to be any delay between operating the switch and the effect at the light (although, depending on the type of light there might be a delay before it comes on fully – this is especially noticeable with fluorescent light tubes). In reality there is a delay – it is just very short indeed.

To get a better insight into what is happening, imagine measuring the voltage between the wires. This can be done with something called a voltmeter (Figure 18). A voltmeter has two wires and a display. When you touch the ends of the wires to the terminals of a power source like a battery, the display on the meter tells you what the voltage is between the terminals.

A voltmeter
Figure 18 A voltmeter

Imagine using the voltmeter to measure the voltage between the two wires at some point between the switch and the light, as in Figure 19. When the switch is open (off) the reading on the meter will be zero. When the switch is closed (on), the reading will (ideally) be equal to the voltage of the battery – which I shall assume is 1.2 volts. Now imagine having the voltmeter touching the wires while the switch is changed from open to closed. In this case you will see the voltage change from 0 to 1.2 V.

Measuring the voltage across a pair of wires
Figure 19 Measuring the voltage across a pair of wires

Now, if the voltmeter is touching the wires right next to the switch, you would see the voltage rise from 0 to 1.2 V at the same instant as the switch is closed. If, on the other hand, the voltmeter is touching the wires further away from the switch there will be a delay between the switch closing and the voltage rising. We can display this by plotting graphs as shown in Figure 20.

The voltage across wires when a switch is closed: (a) voltage across the wires as measured at the switch; (b) voltage across the wires as measured 200 m away from the switch
Figure 20 The voltage across wires when a switch is closed: (a) voltage across the wires as measured at the switch; (b) voltage across the wires as measured 200 m away from the switch

These graphs show how the reading on the voltmeter changes with time. Along the horizontal axis from left to right corresponds to time passing, and up the vertical axis corresponds to increasing voltage, as measured by the voltmeter.

By convention, the axis that goes across the page is the 'horizontal' axis, and the axis that goes up and down the page is the 'vertical' axis. Sometimes they are called the x and y axes, for horizontal and vertical respectively.

The time axis is labelled in units of microseconds, where one microsecond is one-millionth of a second. Notice also that the time axis is relative to some time origin which is labelled 0. The actual time corresponding to 0 as shown on the axis might have been, say, Thursday 20 May 2004, 2.17 pm and 35.031233 seconds, but labelling the axis with that level of detail would be confusing and irrelevant.

Figure 20(a) shows what the voltmeter would do when connected to the wires next to the switch, while Figure 20(b) shows what it would do when connected to the wires 200 m along towards the light bulb. The switch was closed at time 1, on this scale, so the voltage measured next to the switch rises at time 1.

There is a delay before the voltage rises at the voltmeter when it is 200 m along the wire.

Activity 25

How much of a delay?

Discussion

The voltage rises when the time is equal to 2 microseconds. The switch was closed at a time equal to 1 microsecond, so there is one-microsecond delay between the switch being closed and the voltage changing 200 m along the wire.

We can think of the change in voltage moving along the wires. This idea of the change in voltage moving along the wire becomes clear if we think about turning the light on and then off again afterwards. This is illustrated in Figure 21.

A voltage pulse travelling along a pair of wires: (a) voltage across the wires as measured at the switch; (b) voltage across the wires as measured 200 m away from the switch
Figure 21 A voltage pulse travelling along a pair of wires: (a) voltage across the wires as measured at the switch; (b) voltage across the wires as measured 200 m away from the switch

Here, the switch is closed at time 1 microsecond and opened again at time 3 microseconds. We now have a voltage pulse. Looking at the voltage across the wires 200 m from the switch, both the rise and fall in voltage happen 1 microsecond later, and the voltage pulse has taken 1 microsecond to travel the 200 m along the wires.

I now say 'look at the voltage' rather than 'the value on the voltmeter'. I only introduced the idea of the (idealised) voltmeter to set up the concept of the voltage having a value at some place on the wires.

Activity 26

How fast is the pulse travelling, measured in metres per second?

Discussion

The pulse travels 200 metres in 1 microsecond. 1 microsecond is one-millionth of a second, so in 1 second it would travel 200 × 1,000,000 metres = 200,000,000 metres, or 2 × 10^8 metres. The speed is therefore 2 × 10^8 m/s.

This is two-thirds of the speed of light, which is typical of the speed that electric signals travel along wires.

Activity 27

Assuming that the pulse continues to travel at the same speed, draw a graph of voltage against time for measurements taken 600 metres from the switch.

Discussion

The pulse travels 200 m in 1 microsecond, so it takes 3 microseconds to travel 600 m. The pulse will be as in Figure 22.

Figure 22 A pulse 600 metres along the wires, for the answer to Activity 27

Figures 20 and 21 (and my answer to Activity 27) are simplifications because they have not shown attenuation or distortion. Figure 23 shows the sort of effects that attenuation and distortion might have on a pulse.

Figure 23 Voltage across the wires as measured 200 m away from the switch, showing the effects of attenuation and distortion

Attenuation reduces the height of the pulse, so that it does not reach 1.2 volts any more. Some of the voltage has been 'lost' as it travels the 200 metres because some energy from the electricity is absorbed by (very slightly) heating the wires and some energy is radiated into the air as the wires act as a (very inefficient) aerial.

Distortion alters the pulse, rounding the corners and generally changing the shape. Qualitatively, the smoothing of the corners is because the wires do not allow the voltage to change instantaneously – there is a sort of electrical drag as the pulse travels along the wires. More random distortion effects are caused by what is referred to as noise. By analogy with the common meaning of noise as unwanted, meaningless sounds, noise in the context of electrical signals is the unavoidable effect where signals develop unwanted, meaningless distortions.

Attenuation and distortion become worse as the pulse travels further. Amplifiers can be used to compensate for attenuation, but that still leaves distortion, which ultimately limits how far signals can be transmitted along a wire – or indeed any transmission medium. With digital signals, however, regenerators can be used instead of (or as well as) amplifiers to overcome both attenuation and distortion.

The concept of regeneration is that when a pulse has become badly attenuated or distorted, it can be regenerated to produce a new, perfect pulse for onward transmission. This is illustrated in Figure 24. Note that to simplify the diagram I have drawn a single line to represent a pair of wires. The pulses drawn next to the line represent pulses across the pair of wires at that location.

Figure 24 The concept of regeneration

You do not need to know how regeneration is done in detail; you just need to understand that it is possible. The reason it is possible is that with digital signals there is a restricted range of possibilities of what the signal could be. For example, with a binary signal, 1s and 0s might be transmitted on wire by using, say, 5 V to represent a 1 and 0 V to represent a 0. The regenerator 'knows' that the only signal it is expecting is something which started out either as a pulse of 5 V or as 0 V. The regenerator decides which of the two possibilities is most likely, and produces a new 5 V pulse or 0 V accordingly. Although there are practical complications, in principle this decision can be very simple. An electrical circuit compares the received voltage with a threshold value (say 2.5 V) and if the received voltage is greater than the threshold the output is a new 5 V pulse, otherwise the output is 0 V.
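The threshold decision described above can be sketched in a few lines of Python. This is only an illustration of the principle, not a model of real regenerator hardware; the voltage levels and the example pulse train are the ones used in the text.

```python
# A minimal sketch of binary regeneration: each received voltage is
# compared with a threshold and replaced by a clean 5 V or 0 V level.

THRESHOLD = 2.5      # volts, halfway between the two ideal levels
HIGH, LOW = 5.0, 0.0

def regenerate(received_volts):
    """Return clean pulse levels for a sequence of noisy received voltages."""
    return [HIGH if v > THRESHOLD else LOW for v in received_volts]

# An attenuated, noisy version of the pulse train 1 0 1 1:
noisy = [3.9, 0.6, 3.2, 4.1]
print(regenerate(noisy))  # [5.0, 0.0, 5.0, 5.0]
```

Note that the regenerator does not try to reconstruct the original waveform; it only decides which of the two permitted values each pulse started out as, which is why the technique works for digital signals but not analogue ones.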

Ideally, regeneration can be repeated indefinitely, allowing transmission over unlimited distances (Figure 25).

Figure 25 Successive regeneration for long-distance transmission

The boxes labelled 'regenerator' in Figure 25 are electronic circuits, but interestingly the very first 'regenerators' were human beings! Early electric telegraph systems operated by the opening and closing of a switch at the transmitter having the effect of a pen putting marks on a piece of paper at the receiver. These were digital systems, with 'marks' and 'spaces' on the paper performing a similar role to 0s and 1s in modern digital systems. In telegraph relay stations, telegraph operators (people) would look at the marks and spaces on an incoming telegraph line and duplicate them on an outgoing telegraph to send the message on the next leg of its journey.

5.2 Other transmission media

Wires are still used to carry electrical signals over short distances. At the time of writing, for example, most connections between telephones in private houses and the local telephone exchange still use wires. The telephone networks within office buildings are mostly connected with wires, and so are many computer networks (local area networks, LANs) within single buildings. However, all longer-distance communication, between towns, cities or countries, uses either optical fibre or microwave systems. Increasingly, even shorter distances use either optical fibres or wireless links of one sort or another.

Many of the concepts described in the previous section for signal transmission on electrical wires also apply to wireless systems (such as microwave) or optical fibres. Pulses are still attenuated and distorted, for example. It is worth, however, briefly looking at some of the characteristics particular to these other transmission media.

5.2.1 Microwave

You saw the importance of microwave transmission for newsgathering in the Higgins extract. The term 'microwave' identifies a particular range of frequencies used for radio communications. The range of frequencies that are referred to as 'microwave' is not exactly defined (or, rather, slightly different ranges are used in different contexts), but roughly speaking it is from about 200 MHz to 50 GHz. [Remember that MHz stands for megahertz, which is 1,000,000 Hz (10^6 Hz), and GHz is gigahertz, which is 1,000,000,000 Hz (10^9 Hz).]

It is possible to transmit digital signals over microwave by using pulses of microwave power to represent 1s, and the absence of microwave power to represent 0s. This type of transmission is known as on-off keying. In practice this is not generally the best way to use microwave for transmission, and more sophisticated ways of putting the data on the radio signal (different modulation schemes) are normally used.

Regeneration is used in microwave transmission systems, but surprisingly long distances are possible without regeneration. In satellite communications, the satellite performs regeneration (as well as some other functions), but that still means that the signal has to travel the distance from the ground to the satellite in one go (and the same distance back). For geostationary satellites this is 36,000 km each way.

Even more remarkable is the microwave transmission that was used to send data back from the Cassini–Huygens space exploration mission to Saturn and its moon, Titan, in 2004/05. The distance between the Earth and Saturn was at that time 1,517,000,000 km! The reduction in signal strength over that distance is very great, so the power of the signal received back on Earth is small. In practice this means that the data rate (bandwidth) is small, because there is a trade-off between signal power and data rate. For high data rates, higher power is needed; if the power is low, only low data rates are possible. (It is similar to asking someone to speak more slowly when you are having difficulty hearing them.)

5.2.2 Optical fibre

In all developed countries, long-distance communication links (which used to be called 'trunks', by analogy to 'trunk road') nearly always use optical fibre. It is only where the terrain makes it difficult to lay a cable (such as in mountains or, sometimes, between islands) or when a new link is needed quickly and there isn't time to lay a cable that microwave links are used instead.

An optical fibre is a strand of glass or plastic, not much thicker than a human hair, which guides light from one end to the other (Figure 26). The guidance comes about because of an effect known as total internal reflection. This means that light shone in one end of the fibre doesn't come out from the sides of the fibre even if the fibre is bent around corners. The light just travels inside the fibre until it comes to the far end. Because the fibre is so thin it is flexible and, once enclosed in a cable with protective plastic coverings, looks and feels from the outside much the same as an electrical wire. Signals are conveyed by changing the 'brightness' of the light injected into the fibre and measuring it at the far end. Bits are sent by 'on-off keying': 1s are represented by light on and 0s by light off.

Figure 26 Optical fibre of the type used for communications. The bare glass fibre is vulnerable to scratches and will break if bent too tightly, so the cable shown has several layers of protective coverings, shown here stripped off layer by layer.

The attraction of optical fibre is that it can be used for very high data rates over long distances. It is this combination – high data rates and long distances – that distinguishes it from wires.

When electrical signals are transmitted over wires the attenuation increases with increasing data rate, so the higher the data rate, the greater the attenuation and therefore the shorter the distance that can be used. So, for example, although signals at 1 gigabit/s can be carried around on wires inside a computer, it is difficult to transmit electrical signals even a few metres at data rates that high. With optical fibre the attenuation is not dependent on the data rate, and the attenuation is anyway very low, so that signals at tens or even hundreds of gigabit/s can be sent for tens of kilometres with the right sort of fibre.

There are other factors to consider, including, as mentioned above, that higher data rates require higher power (which is the case whatever the medium used to carry the signal). Nevertheless, even though optical fibre is generally more expensive to use than wires, optical fibre is the transmission medium to use for high data rates over long distances.

Activity 28

When optical fibre was first developed as a communications medium, it was initially used only for long-distance transmission between cities. More recently it has been used for shorter distances, including many new local area networks (LANs) within office buildings. There is also a debate about how and when it should be used in the links between private homes and the local telephone exchange. From the discussion above, can you suggest why it is now finding applications for shorter distances, where metallic wires were previously used?

Discussion

In recent years the data rates required of many communication links have been increasing. Since the attenuation of wires increases with increasing data rates, many links which previously could use wires cannot do so any more, because the data rates are too high. Where data rates have got too high to use wires, optical fibre is often used instead.

Also, although this was not discussed in the text, the equipment needed for optical fibre transmission has been getting cheaper, making its use more economical in a variety of applications.

5.3 Signal speeds, propagation times and distance: the formula triangle

When signals travel along a wire or optical fibre, or through space, the relationship between the speed, propagation time and distance can be written in three ways, depending upon which one you want to calculate.

If you know the speed and the propagation time and want to know how far the signal will travel, you use:

distance = speed × time

You should recall that this is the calculation used for the active autofocusing discussed in Section 4.2.5.

I shall use d for the distance and t for the time, as I did earlier. For speed I shall follow the common convention of using v , which comes from 'velocity' – but you have to be careful not to get confused with v for voltage, as used in Section 4.4.

So we have

d = v × t

Technically the value for the velocity of something includes both its speed and the direction it is moving in. For our purposes there is no need to distinguish between speed and velocity. I shall just use the word speed and the symbol v.

If you know the distance travelled and speed but want to calculate the propagation time you use:

time = distance ÷ speed

which can be written:

t = d/v

Finally, if you know the distance travelled and the propagation time, and want to calculate the speed, you use:

speed = distance ÷ time

or

v = d/t

These are of course quite general relationships between distance travelled, time taken and speed. We used the same relationship when doing the calculations for autofocusing in a camcorder, and it applies equally to the journey time driving along a motorway or cycling to work, assuming constant speed (or using average speed in the calculation). The relationship between these three terms is displayed in a formula triangle as shown in Figure 27. You can swap 'time' (t) and 'speed' (v) in the lower corners – it doesn't matter which way round these are.

Figure 27 The formula triangle

Earlier I explained how to draw the formula triangle given one of the equations; what is useful about the triangle is that you can recover any of the equations from it.

You do this by covering the quantity you want to calculate, and looking at the position of the other two (Figure 28). So if you want to calculate the time, you cover 'time' and observe that distance appears above speed, so you calculate time from distance divided by speed. Similarly, to calculate speed, you cover 'speed' and observe that distance appears above time so you divide distance by time. For calculating distance, you cover 'distance' and note that time is alongside speed, so you multiply the two together.

Figure 28 Using the formula triangle
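The three rearrangements encoded by the triangle can be written directly as small functions. This is just an illustrative sketch in Python; the function names are my own, and the units must be consistent, as discussed later in this section.

```python
# The three rearrangements of the distance-speed-time relationship
# encoded by the formula triangle. Units must be consistent:
# metres with m/s and seconds, or kilometres with km/h and hours.

def distance(speed, time):
    return speed * time        # d = v x t (cover 'distance')

def time_taken(dist, speed):
    return dist / speed        # t = d / v (cover 'time')

def speed_of(dist, time):
    return dist / time         # v = d / t (cover 'speed')

# The pulse from Activity 26: 200 m in 1 microsecond
print(speed_of(200, 1e-6))     # about 2 x 10^8 m/s
```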

Activity 29

In Activity 20 you drew a formula triangle for the relationship between battery capacity, current (i) and the length of time a battery can be used (t).

  1. Suppose you know the battery capacity and the time you want to use the battery for. Write down the formula which will allow you to calculate the current. Use the formula triangle to help you.

  2. Suppose a battery has a capacity of 1.8 Ah and is to be used for 20 hours. What is the maximum current that can be drawn from it?

Discussion
  1. You need to use the formula triangle from the answer to Activity 20. If you know time and capacity, you can see by covering i (current) that current will be given by:

     current = capacity ÷ time, that is, i = capacity/t

  2. If the capacity is 1.8 Ah and the time is 20 hours, the maximum current is:

     1.8 Ah ÷ 20 hours = 0.09 A, or 90 mA

It is important when doing the calculations that you use consistent units. You could use the standard units known as SI units (see box below), but you don't have to, so long as the units are consistent. For example, if you have speed in kilometres per hour and time in hours, then distance will be in kilometres.
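As a quick check of this unit-consistency point, the Activity 29 battery calculation can be sketched in Python (the function name is just for illustration):

```python
# Battery version of the formula triangle (Activity 29):
# capacity = current x time, so current = capacity / time.
# With capacity in ampere-hours (Ah) and time in hours,
# the current comes out in amperes, with no conversion needed.

def max_current(capacity_ah, hours):
    return capacity_ah / hours

current = max_current(1.8, 20)   # about 0.09 A, i.e. 90 mA
print(current)
```

If the time were given in minutes instead, it would first have to be converted to hours (or the capacity to ampere-minutes) before dividing.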

SI units

One way of ensuring that you are working with consistent units is to use the international standard units known as 'SI' units, where SI stands for the French words Système International. The SI unit for length is the metre and for time is the second. In SI units, therefore, speed is expressed as metres per second. There is more about SI units in The Sciences Good Study Guide (Northedge et al., 1997).

Activity 30

For a communications satellite to be in a geostationary orbit it has to be about 36,000 km above the Earth. How much delay will be introduced to a radio signal by having to go up to and back down from the satellite? Radio signals travel at the speed of light (3 × 10^8 m/s), and you should assume that the signals go straight up and straight down. Note that this assumption – straight up and straight down – simplifies the calculation, and means that you get a value that would be an underestimate of the delay, for all cases except where the communication really is straight up and down (see Figure 29). In practice, communication via the satellite will often use an angled path and therefore have a larger delay.

Figure 29 Path lengths in satellite communications
Discussion

We need the time, so we use:

time = distance ÷ speed

We first write the distance and speed in consistent units. The speed is in units of metres per second (3 × 10^8 m/s) but the distance is in kilometres (36,000 km). The multiplier 'kilo' is ×1000, so in metres the distance is 36,000 × 1000 m = 36,000,000 m = 3.6 × 10^7 m. This is the distance to or from the satellite. One 'hop' – up and down – is twice this distance, 7.2 × 10^7 m.

So we have:

time = 7.2 × 10^7 m ÷ (3 × 10^8 m/s) = 0.24 seconds

You might think that 0.24 seconds (about a quarter of a second) is not very long, but in fact if you were having a conversation with someone and there was a delay this long, it would be quite noticeable – and something of a nuisance. When you say something then stop to wait for a reply, it takes a quarter of a second for what you say to reach the other person, then there is another quarter of a second delay before the reply reaches you. So, in total, there is half a second delay (in addition to the recipient deciding what to say) between you finishing what you say and hearing the reply.

Activity 31

Satellites are frequently used for transatlantic communication, but the alternative is to use undersea cables. These days undersea cables would invariably use optical fibre. Light in fibre travels at about 2/3 of the speed of light in the air (light travels more slowly in glass than in the air) and therefore the signal speed is about 2×108 m/s. The distance across the Atlantic (depending upon where you start and finish) is about 4000 km. How long does it take for a signal to cross the Atlantic travelling through optical fibre?

Discussion

We need the time, so we use:

time = distance ÷ speed

We first write the distance and speed in consistent units. The speed is in metres per second, m/s. The distance is in kilometres, so we need to change it to metres. 1 kilometre is 1000 metres, so 4000 km is 4000 × 10^3 m = 4 × 10^6 m.

So,

time = 4 × 10^6 m ÷ (2 × 10^8 m/s) = 0.02 seconds, or 20 milliseconds

You will have seen from the last activity that the delay when using optical fibre is very much less than when using a geostationary satellite. This is not the whole story, because there may be further delays when signals are manipulated (which can happen both with satellites and optical fibre links), but nevertheless it remains true that in speech telephony there is a noticeable delay when the communication uses a geostationary satellite, but not, usually, when it uses a fibre link. Delays used to be commonly encountered when you telephoned the USA from the UK, but that is rare these days because most transatlantic calls are via optical fibre links. It is important to appreciate that the large delay when using geostationary satellites comes about because geostationary satellites are so far from the Earth. Other satellites are also used whose orbits are much closer to the Earth. Communication via these non-geostationary satellites can have a smaller delay, but there are other complications because the satellite is moving relative to the Earth's surface.
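The two delay calculations from Activities 30 and 31 can be reproduced side by side in a few lines of Python, using the round figures quoted in the text (a sketch for comparison, not precise engineering values):

```python
# One-way propagation delays from Activities 30 and 31,
# using time = distance / speed throughout (all in SI units).

C = 3e8               # speed of radio waves in air/vacuum, m/s
FIBRE_SPEED = 2e8     # light in glass fibre travels at about 2/3 of c, m/s

satellite_hop = 2 * 36_000_000   # metres: 36,000 km up, then down again
atlantic_fibre = 4_000_000       # metres: roughly 4000 km across the Atlantic

print(satellite_hop / C)             # 0.24 s via geostationary satellite
print(atlantic_fibre / FIBRE_SPEED)  # 0.02 s via undersea fibre
```

The twelvefold difference comes almost entirely from the path length: the satellite route is 18 times longer, only partly offset by the faster signal speed in air.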

As you saw in the Taylor extract in Section 2, news broadcasts often use geostationary satellites and therefore suffer from the larger delay. The processing (manipulation) used in MPEG encoding, especially motion compensation, adds yet more delay. The combined delays of transmission and processing can cause problems for live news broadcasts, as Higgins discusses later in his book:

The interaction in a 'live' two-way interview requires that both the questions and answers are delivered as smoothly as possible, but the delay of the compression process which is then added to the fixed satellite delay means that these interviews often have an awkward hesitancy about them.

This can be masked to a degree by imaginative techniques used both in the studio and out in the field, but there is always the evident hesitation between interviewer and interviewee. These compression delays ('latency') have reduced as the computational advances in processing speed have increased.

The viewer is also growing more tolerant of these delays to the point where they are hardly noticed, and so the problems are diminishing as time passes.

Some coders also offer a facility to improve the latency at the expense of movement compensation [motion compensation] – the so-called interview or low-delay mode. This is selected via the front panel menu of the MPEG-2 coder, reducing the overall processing time of the signal.

Changing production techniques is by far the best way to try to overcome these awkward pauses. It is common for the studio to cue the reporter a fraction of a second earlier than normal, so that by the time they respond, the delay has passed unnoticed.

Often you will see reporters in the field looking thoughtful or slowly nodding after they have replied to a question, so that it makes their eventual answer to the next question look as if it is a very carefully considered reply! Like many other things in TV, much can be achieved by using 'smoke and mirrors'!

These techniques of course do not work if it is a straight 'down the line' interview with a member of the public, who naturally is unaware of these tricks of the trade.

However, you can often see this all going horribly wrong even with a seasoned reporter if there is a studio presenter asking questions who does not appreciate the subtle techniques required to cope with satellite and compression delay.

Classically this happens when part way through the answer from a reporter in the field, the presenter interjects with a supplementary question or comment. The field reporter carries on for a second or so, then halts in their tracks, meanwhile the studio presenter realises their mistake and urges the reporter in the field to continue – and you get a cycle of each end saying 'Sorry, please go on'.

This was a lesson that had to be learnt in the early days of satellite broadcasting which became more acute when digital processing and encoding was introduced – yet you still see these problems occurring today.

Higgins (2004)

6 Trust

6.1 Reliable information

Information is worthless if you have no trust in it. This has always been the case, but there are issues of trust that arise specifically in the context of modern information and communication technologies. Think about the following:

  • You do a search on the Web and get results from several different sites. Do you trust the information in them all? How do you decide which are the most trustworthy?

  • You get an email, a letter or a phone call purporting to come from your bank, recommending a change to your account. Do you follow the advice that you are given?

  • You read a story in a newspaper, hear it on the radio or see it on TV. Do you believe it? Does it make any difference if it is accompanied by a photograph?

In each of these cases there are two elements to your trust:

    1. The authority of the information source. Do you trust the BBC, ITN or someone you've never met writing a weblog? The Guardian or the Daily Mail? Your bank?

    2. Authentication of the message – does it really come from whom you think it does? Is it really the BBC's website you are looking at? Does this person writing a weblog really live in Iraq? Is the email, phone call or letter really from your bank?

Authority of OU teaching material

It may be dangerous to raise this question … but do you trust what you read in OU courses? What grounds are there for trusting us?

I hope that you do trust the OU – but not unquestioningly because we do get things wrong sometimes. The Open University 'brand' comes with some authority, and there are mechanisms within the University procedures to ensure the quality of OU material. These procedures include:

  • Team working. This course, for example, has emerged from course team discussions, and drafts which have been read and criticised by the whole course team.

  • External consultants. Experts from outside the University are asked for advice and contribute in various ways.

  • External assessors and examiners. OU regulations require that senior academics from other universities approve courses both during the production phase and annually during the course's presentation.

Further procedures operate at higher levels within the OU structure, and formally the quality of all university education in the UK is monitored by the QAA: the Quality Assurance Agency for Higher Education.

It must also not be forgotten that when the course is in presentation large numbers of Associate Lecturers and students read the material and provide feedback if they identify problems.

6.2 Authority and the variety of information sources

Technology has massively increased the number and variety of news sources that we have access to. We still have printed books, magazines and newspapers, while digital techniques have increased the number of broadcast radio and TV channels that we can get. On the Web we have access to online versions of many of these. This allows us access to media that previously would have been inaccessible.

With traditional news sources such as these, we have some understanding of the authority that they bring with them. Newspapers, for example, rely to some extent on their reputation. This may be damaged – they might lose readers – if their stories are found to be wrong or misleading, so it is in their own interests to maintain standards. Also, in the UK, newspapers and magazines are regulated by the Press Complaints Commission, the PCC. There are similar considerations that apply to radio and TV to maintain standards. In all these cases there will be some degree of editorial control over the content, and one of the responsibilities of the editors is to maintain standards of honesty appropriate to their publication or channel.

The PCC is a form of self-regulation rather than statutory regulation (regulation by law), and some people argue that this is inadequate.

On the internet, however, there are sources of news and information that are completely unregulated. The technology is such that with a minimum of knowledge and little expense, virtually anyone (in the developed world anyway) can say almost anything they like on a personal web page or a weblog, and, in principle at least, their words are instantly available to millions of people all over the world. The absence of any regulation or external editorial control might be thought to devalue personal web pages or weblogs as sources of news, but there are other considerations. To what extent do websites gain authority by the number of other sites that link to them, and by who links to them? Can personal recommendations replace recognised authorisation? And anyway, perhaps regulation sometimes becomes censorship, and who has the right to determine the editorial 'line'?

I raise these questions because they highlight issues that arise from the development of the Web, but there are no simple answers. We can gain some insight into the issues involved, however, by looking at one particular example where a weblog was able to provide news that was simply not available through any other source.

The example that I shall use is that of the 'Baghdad Blogger', Salam Pax, who was posting reports from Baghdad in the run-up to, and all through, the 2003 war in Iraq. As the US and British troops advanced on Iraq, news was coming from several sources, most of which might be suspected to be censored in some way. Salam was a resident of Baghdad who did not set out to be a reporter, but whose interest in the Web led him to create a weblog that became seen by many as providing a valuable insight into life in Baghdad at this time.

In the article below (itself taken from one of the traditional news sources: the Guardian newspaper), Salam Pax (2003) writes about his experiences. As you read it, think about some of the questions that I asked earlier, but also notice the role of the technology.

'I Became The Profane Pervert Arab Blogger'

S. Pax, 9 September, 2003, The Guardian

My name is Salam Pax and I am addicted to blogs. Some people watch daytime soaps, I follow blogs. I follow the hyperlinks on the blogs I read. I travel through the web guided by bloggers. I get wrapped up in the plot narrated by them. […]

We [the Iraqi people] had no access to satellite TV, and magazines had to be smuggled into the country. Through blogs I could take a peek at a different world. Satellite TV and the web were on Saddam's list of things that will corrupt you. Having a satellite dish was punishable with jail and a hefty fine […]

While the world was moving on to high-speed internet, we were being told it was overrated. So when in 2000 the first state-operated internet centre was opened, everybody was a bit suspicious, no one knew if browsing news sites would get you in trouble. When, another year later, you were able to get access from home, life changed. We had internet and we were able to browse without the minders at the internet centres watching over our shoulder asking you what that site you are browsing is.

Of course things were not that easy, there was a firewall. A black page with big orange letters: access denied. They made you sign a paper which said you would not try to get to sites which were of an 'unfriendly' nature and that you would report these sites to the administrator. They blocked certain search terms and they did actually have a bunch of people looking at URL requests going through their servers. It sounds absurd but believe me, they did that. I had a friend who worked at the ISP and he would tell me about the latest trouble in the Mukhabarat [secret police] room.

[…] With blogs the web started talking to me in a much more personal way. Bits of news started having texture and most amazingly, these blogs talked with each other. That hyperlink to the next blog – I just couldn't stop clicking. […]

To tell you the truth, sharing with the world wasn't really that high on my top five reasons to start a blog. It was more about sharing with Raed, my Jordanian friend who went to Amman after we finished architecture school in Baghdad. He is a lousy email writer; you just don't expect any answers from him. […]. So instead of writing emails and then having to dig them up later it would all be there on the blog. So Where is Raed? started. […]

The first reckless thing I did was to put the blog address in a blog indexing site under Iraq. I did this after I spent a couple of days searching for Arabs blogging and finding mostly religious blogs. I thought the Arab world deserved a fair representation in the blogsphere, and decided that I would be the profane pervert Arab blogger just in case someone was looking.

Putting my site at that portal (eatonweb) was the beginning of the changing of my blog's nature. I got linked by the Legendary Monkey and then Instapundit – a blog that can drive a stampede of traffic to your site. I saw my site counter jump from the usual 20 hits a day to 3,000, all coming from Instapundit – we call it experiencing an Insta-lanche (from avalanche) […]

What really worried me was the people writing those emails were doing so as if I was a spokesman for the Iraqi people. There are 25 million Iraqis and I am just one. With the attention came the fear that someone in Iraq might actually read the blog, since by now it had entered warblog territory. But Mr Site Killer still didn't block it. I preferred to believe they were not watching. They were never patient. If they knew about it I would already have been hanging from a ceiling being asked about anti-governmental activities. Real trouble comes when big media takes notice and this happened when there was a mention of the blog and its URL in a Reuters piece […]

By the end of January war felt very close and the blog was being read by a huge number of people. There were big doubts that I was writing from Baghdad, the main argument being there was no way such a thing could stay under the radar for so long in a police state. I really have no idea how that happened. I have no idea whether they knew about it or not. I just felt that it was important that among all the weblogs about Iraq and the war there should be at least one Iraqi blog, one single voice: no matter how you view my politics, there was at least someone talking.

I was sometimes really angry at the various articles in the press telling the world about how Iraqis feel and what they were doing when they were living in an isolated world. The journalists could not talk to people in the street without a Mukhabarat man standing beside them. As the war came closer, my blog started getting mentioned more and more. There were people quoting it even after I told them not to, because I feared it would attract too much attention. I talked to as few people as possible and did not answer any interview requests, but my blog was popping up in all sorts of publications. The questions people were asking me became more difficult and the amount of angry mail I was getting became unbelievable. Raed thought I should start panicking. People wanted coherence and a clear stand for or against war. All I had was doubt and uncertainty.

[…]

Activity 32

  1. One issue with weblogs as a source of news is that they present just one individual's perception of events, arguably with no more authority to speak than anyone else. Salam makes two apparently contradictory statements about this issue in the article. Pick out these two statements.

  2. What was it that led to a sudden increase in the number of people looking at Salam's website?

  3. What might lead you to trust the content of Salam Pax's blog?

Discussion
  1. The two statements that I picked out are:

    'What really worried me was the people writing those emails were doing so as if I was a spokesman for the Iraqi people. There are 25 million Iraqis and I am just one.'

    and

    'I just felt that it was important that among all the weblogs about Iraq and the war there should be at least one Iraqi blog, one single voice: no matter how you view my politics, there was at least someone talking.'

    These seem to be contradictory, but maybe there just isn't a simple answer.

  2. Salam says that the number of 'hits' on his blog – the number of times someone looked at his site – rose from 20 per day to 3000 following his site being linked by 'Legendary Monkey and then Instapundit'. Specifically, he says that the 3000 were all coming from Instapundit.

  3. The fact that people and organisations whom you already trust implicitly or explicitly endorse Salam Pax's blog might give you confidence in it. 'Instapundit' already had the confidence of many readers when it linked to 'Where is Raed?', and the mention in a Reuters piece would have lent the blog some authority. Personally, I first came across the blog when it appeared in the Guardian, and I assumed that the Guardian would have made some checks on its authenticity.

6.3 Authentication of information

When I watch TV news, listen to the radio or buy a newspaper I never think to question whether I really am watching ITV, listening to Radio Five Live or getting the Guardian. In each of these cases it is theoretically possible that they are not who they say they are, but the practicalities of performing the masquerade are so complicated that the possibility can be discarded.

With emails and websites it is a very different matter. Indeed, in recent months I have received several emails, apparently from organisations such as Microsoft and NatWest bank, which I know were fake. They looked entirely authentic, with the correct graphics, and the first one – apparently from Microsoft – briefly took me in. However, I had learned from other sources that 'scams' such as these were in circulation, and Microsoft and the major banks have said that they will never use emails to ask for personal information.

The authentic appearance of the emails was meaningless, since it is almost trivially easy to copy images from websites and paste them elsewhere, such as into emails. Incoming telephone calls are equally suspect, and again there have been cases of scams whereby a caller claims to be from somewhere they are not.

Letters and official documents have in the past been more reliable, since headed notepaper was difficult to reproduce accurately. It is still possible to produce documents that are hard to imitate (through the use of watermarks or embossing, for example), but the availability of high-quality colour printers has made official-looking documents much easier to forge.

Activity 33

Suppose you are contacted by email, telephone or letter and you want to check whether the communication is authentic. What could you do?

Discussion

If you already have a contact number for the organisation that has contacted you, you could call them and ask about it. This only works, of course, if you already have the number which you know is correct. Email scams often contain a phone number, but that number is, of course, bogus.

A personal signature on a letter changes the situation, provided you know the signature, can recognise it and it isn't a photocopy. The signature is the authentication of the letter. Similarly, recognising the voice of someone on the telephone authenticates the call. Authentication of emails is also possible by the use of digital signatures. A digital signature is a special piece of data which is added to a message. Software on the recipient's computer can analyse the message and the signature and determine whether the message is authentic. Digital signatures only work if your computer already 'knows' about the sender.
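The verification idea behind digital signatures can be sketched in a few lines of Python. Real digital signatures use public-key cryptography, so that the recipient does not need the sender's private key; the sketch below instead uses a shared-secret message authentication code (HMAC) from the standard library, which illustrates the same principle – a small piece of data attached to a message that lets software detect whether the message is authentic and unaltered. The secret key and messages here are purely illustrative.

```python
import hashlib
import hmac

# Secret known to sender and recipient. (Real digital signatures use a
# public/private key pair instead; this HMAC sketch only illustrates
# the verification idea.)
SECRET_KEY = b"example-shared-secret"  # hypothetical value

def sign(message: bytes) -> str:
    """Produce a 'signature' (here, an HMAC tag) for a message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    """Check that the signature matches the message."""
    expected = sign(message)
    # compare_digest does a constant-time comparison, avoiding
    # timing side channels.
    return hmac.compare_digest(expected, signature)

msg = b"Your statement is ready to view."
tag = sign(msg)

print(verify(msg, tag))                 # True: message is authentic
print(verify(b"Send money now!", tag))  # False: message was altered
```

Note that the recipient's software can only perform this check because it already 'knows' the secret – which matches the point above that digital signatures only work if your computer already knows about the sender.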

The technology of digital signatures can be used to authenticate websites as well as emails. In this case, your web browser checks the certificate of the site. This is usually done automatically, with your browser reporting to you if there is a problem.

6.4 Pictures

It used to be thought that a photograph could provide proof of an event – someone could be caught red-handed by a photograph, as proof of their guilt. 'The camera never lies', it was said. If you have a digital camera and have been 'touching up' photographs on your home computer you will know that this is far from true now. It is easy to lie with a digital photograph.

The idea that the camera never lies has always been a myth, however. As far back as 1917 the photographs of the Cottingley Fairies 'proved' the existence of fairies. Two girls, Elsie Wright (age 16) and Frances Griffiths (age 9), took photographs of themselves apparently in the company of fairies (Figure 30). Eventually, in 1981, the girls admitted that they had faked most of the pictures – although they always maintained that one of them was genuine.

The Cottingley Fairies
Figure 30 The Cottingley Fairies

The difference today is the ease with which digital photographs can be manipulated. It is argued that because of this, digital photography is qualitatively different from analogue photography.

One of the benefits that Taylor claimed for digital techniques is the improved options for editing, and he contrasted digital techniques with analogue techniques where 'stories cannot easily be altered'. The counterpoint to this is that digital stories can be easily altered – which makes them all the more unreliable.

This has sparked a debate about the changing nature of photography. The artist David Hockney, who has used photography in his work, has argued that the ease of editing digital images has made photography a dying art. Hockney's views were discussed in a newspaper article in 2004.

Hockney says he believes modern photography is now so extensively and easily altered that it can no longer be seen to be true or factual. He also describes art photography as 'dull'.

Even war photography, once seen as objectively 'true', has now been cast in doubt by the ubiquitous use of digital cameras which produce images that can be easily enhanced or twisted.

Hockney points to the case during the Iraq war when the Los Angeles Times sacked a photographer for having superimposed two images to make them more powerful.

Not everyone entirely agrees.

Russell Roberts, head of photography at the National Museum of Photography, Film and Television, said Hockney's argument was 'simplistic'.

Mr Roberts said manipulation of images was as old as photography. He could cite numerous examples from the 1840s, the first decade of photography, of images which claimed to be accurate depictions of events but were in fact highly stage managed.

[…]

Eamonn McCabe, a former picture editor of the Guardian, said it had become increasingly difficult for picture editors to tell whether a picture had been manipulated and a growing number of digitally manipulated pictures were being published.

'I think there was perhaps a point where there was a general perception that photography was truth, but we have lost that,' he said.

But McCabe said this did not detract from the value of good photography. 'To say that photography is dead is faintly ludicrous. It would be better to say that you should be wary of everything.'

Jones and Seenan (2004)

McCabe's measured response to the consequences of digital photography, in contrast to Hockney's more sensational reaction, echoes contemporary discussions comparing IT with telegraphy, films and electricity. You should be wary of exaggerated, utopian claims about the power of technologies, and equally wary of exaggerated claims about their negative consequences. Certainly IT has brought significant changes to many areas of our lives. I hope, however, that your study of this course puts you in a stronger position to think critically about IT, and to make informed judgements on the consequences of new information and communication technologies.

Conclusion

The theme of this course has been the impact that information and communication technologies have had on the news industry. I introduced this theme with a short historical overview of technology in the news industry followed by a look at how technology is used for newsgathering.

We have been looking in some detail at aspects of the underlying technologies used in newsgathering, including the basic components of digital camcorders and the methods of signal transmission over wires. We have used some mathematical methods, with a particular emphasis on equations that can be represented with a formula triangle. We have also explored some basic ideas about electricity in the context of the batteries that are needed to supply the power for portable IT equipment.

Finally we have considered the issue of trust in information, and the way in which recent developments in IT have required us to think again about what enables us to trust information.

References

Higgins, J. (2004) Introduction to SNG and ENG Microwave, Elsevier Focal Press, Oxford.
Jones, J. and Seenan, G. (2004) 'The camera today? You can't trust it. Hockney sparks a debate', The Guardian, 4 March 2004.
Northedge, A., Thomas, J., Lane, A. and Peasgood, A. (1997) The Sciences Good Study Guide, Milton Keynes, The Open University.
Pax, S. (2003) 'I became the profane pervert Arab blogger', The Guardian, 9 September 2003.
Taylor, E. V. (1995) 'From newsreels to real news', paper presented at a colloquium entitled Capturing the Action: Changes in Newsgathering, Institution of Electrical Engineers, London.
Taylor, E. V. (2004) 'Real news meets IT', [online] T175, http://students.open.ac.uk/technology/courses/t175, The Open University.

Acknowledgements

Cover image: Richard Masoner in Flickr made available under Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Licence.

The content acknowledged below is Proprietary (see terms and conditions) and is used under licence.

Grateful acknowledgement is made to the following sources for permission to reproduce material within this course.

Figure 6 NanoElectronics Japan

Figure 30 The Cottingley Fairies © Science and Society Picture Library

Taylor E.V., (1995) 'From Newsreels to Real News', Capturing the Action: Changes in Newsgathering Technology © 1995 The Institute of Electrical Engineers.

Higgins J., (2004) 'Introduction to SNG and ENG Microwave' Elsevier Science.

Pax S., (2003) 'I Became The Profane Pervert Arab Blogger', © The Guardian Newspapers Limited.
