
Introducing ethics in Information and Computer Sciences


1.2 Ethical examples

But is this a tenable position? In other words, is it only the people who use the technologies who carry the ethical burden? Or is ethics also of interest to engineers, programmers and scientists? What, in the first place, constitutes an ethical issue? To begin examining these questions, let's look at some examples.

Example 1: The pensioner's faulty digital TV box

In 2006 a pensioner in Plymouth came home one evening to find people standing by her front door holding a large antenna. The lady's digital TV box had developed a fault and was accidentally transmitting on the emergency frequencies. As a result, an air-sea rescue mission was launched to search for a vessel in trouble which, of course, did not exist. The case was widely reported in the media (see BBC News for a snippet [accessed 18 June 2009]).

You may ask yourself who was to blame for this blunder, or even whether blame needs to be assigned at all, but you will probably agree with me that none of it was the pensioner's fault. A good question to ask is this: should it have happened in the first place? Did the engineers who designed the box have a duty to look at ways of preventing fault conditions that could cause interference on the emergency frequencies? Such questions of duty are ethical questions.

Example 2: Safety on the railways in Britain

It is an unfortunate fact that fatalities occur on the railways across Britain, and, as a result, politicians tend to act quickly to allay public fears. In 1999 they announced the nationwide introduction of automatic systems to prevent trains passing through red lights, so as to prevent collisions (the Train Protection and Warning System, TPWS; deployment was completed at the end of 2003; see www.railwaypeople.com [accessed 18 June 2009]). Naturally these systems have to be installed on the track and subsequently maintained. The problem is that the trackside is a very dangerous place. In fact, the fatality rate amongst the trackside workforce is considerably higher per capita than that amongst passengers: passenger fatalities are approximately 1 in 112,500 per year (with TPWS predicted to improve this by 7.9 per cent), whilst track workers' fatalities are around 1 in 7,000 per year (see the Health and Safety Statistics provided by the Office of Rail Regulation, online at www.rail-reg.gov.uk [accessed 18 June 2009]). Bearing these figures in mind, the result of the government's initiative might be that more rail workers are killed in the course of installing and maintaining the additional system. It really is not clear whether the overall death toll will be higher or lower with the new safety system installed.
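The gap between these two rates can be made concrete with a short calculation. The rates are those quoted above; the relative-risk figure is derived from them rather than taken from the source:

```python
# Annual fatality rates quoted in the text (per person per year).
passenger_rate = 1 / 112_500
track_worker_rate = 1 / 7_000

# How many times more dangerous, per capita, is trackside work
# than travelling as a passenger?
relative_risk = track_worker_rate / passenger_rate
print(f"Track work is about {relative_risk:.1f}x riskier per capita")  # ~16.1x

# TPWS was predicted to improve passenger safety by 7.9 per cent,
# a modest change set against a roughly sixteen-fold difference in risk.
improved_passenger_rate = passenger_rate * (1 - 0.079)
```

Seen this way, the trade-off the engineers face is stark: a small percentage improvement for a large, low-risk population is bought with extra exposure for a small, high-risk workforce.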

Now suppose the engineers recognised this and even had strong evidence that the cost in lives would be considerably higher. It would still require considerable political skill to overturn a decision already made by the politicians. Here we would have a very complex situation, and one which I could make more complicated still. For instance, I could argue that, if we didn't install the system aimed at saving passengers' lives, then people might see the newspaper headlines and decide to travel by car instead of taking the train. But, of course, as soon as they did that, because statistics suggest that it is more dangerous to travel by car, there might be more deaths than there were originally. In other words, fewer people would trust the train, and more would take greater risks on the road. So, in the case of the railway system, working out the best course of action is not a particularly straightforward proposition. Even if I had an especially strong case, I would still have to persuade politicians and the lay audience.

This example illustrates that technology developers, in considering technical matters, must not only understand the ethical issues involved but also be able to present good arguments.

Example 3: Internet protocols

This example is a bit more subtle and less hazardous, as it is hidden away in the Internet protocols. One of the primary protocols on the Internet is TCP (Transmission Control Protocol). There is always a question, when starting to communicate using this or any other protocol, about what the rate of data transmission should be. In a revision of the early version of TCP, the speed of transmission begins slowly, and if this is successful – with success indicated by the receipt of an acknowledgement from the recipient in a reasonable time – the transmission speed is slowly increased. If at any stage there is a succession of failures, the transmission speed is reduced in large steps until success is consistently achieved. Transmission rates are then gradually increased again.

The way this can be done is given in RFC 1122, which identifies ‘work by Jacobson on Internet congestion and TCP retransmission stability [which] has produced a transmission algorithm combining "slow start" with "congestion avoidance"’. RFC 1122 goes on to stipulate that ‘TCP MUST implement this algorithm’. The opening to RFC 2481 explains broadly how the congestion control operates, identifies some issues and proposes changes to lower level protocols. In practice, refinements were made to TCP congestion control, and RFC 2001 documents the refinements to the congestion control algorithms, giving details of TCP congestion control and start-up as it operated in working systems. This was converted into a standard recorded in RFC 2581 (which later gained some small modifications in RFC 3390). The mechanisms have proved to be largely effective, but experience of the mechanisms' effectiveness has been gained when individual elements of the Internet behave according to the specified start-up and congestion avoidance procedures.

The crucial points are that, on the Internet, there is no central control; congestion can occur and can be detected. If it does occur, there are prescribed ways in which the software in any data transmitter using TCP should behave. Broadly, each sender must reduce its demands in steps until the congestion disappears. Once congestion is cleared, it can then ramp up its demand. It is a way of regulating usage to prevent gridlock. Although there is a standard way of doing this, the standard merely imposes constraints on how things are done, so within the standard variations are possible. Also, research continues with the aim of improving the performance of TCP under conditions of heavy traffic. To some degree, therefore, the actual performance of the Internet is in the hands of the individual programmers who choose to produce variants of the TCP congestion control mechanisms.
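The behaviour described above – slow start, gentle increase, and a sharp cut-back on failure – can be sketched as a toy congestion controller. This is an illustrative sketch of the additive-increase/multiplicative-decrease idea, not the full TCP state machine; the variable names and the loss pattern are invented for the example:

```python
def aimd_step(cwnd: float, ssthresh: float, loss: bool) -> tuple[float, float]:
    """One round of a toy TCP-style congestion controller.

    cwnd     -- congestion window: how much data may be in flight
    ssthresh -- slow-start threshold: where exponential growth stops
    loss     -- whether this round's data went unacknowledged
    """
    if loss:
        # Reduce demand in large steps: halve the threshold and
        # restart from a slow, safe rate.
        ssthresh = max(cwnd / 2, 1.0)
        cwnd = 1.0
    elif cwnd < ssthresh:
        cwnd *= 2          # 'slow start': exponential ramp-up
    else:
        cwnd += 1          # 'congestion avoidance': gentle linear increase
    return cwnd, ssthresh

# A well-behaved sender: grows quickly, then gently, and backs off on loss.
cwnd, ssthresh = 1.0, 16.0
for loss in [False, False, False, False, False, True, False, False]:
    cwnd, ssthresh = aimd_step(cwnd, ssthresh, loss)
print(cwnd)  # 4.0 -- climbing back cautiously after the loss
```

The 'devious programmer' of the next paragraph would simply change these constants: a larger multiplier on ramp-up, or no cut on loss, grabs more than a fair share.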

However, knowledgeable and devious programmers could write software that starts up quickly, backs off more slowly or, perhaps, not at all and, once the congestion is clear, ramps up its demand more rapidly than the standard requires, in order to grab a greater share of the communication resources than others. The fair allocation of resources, therefore, depends on adherence to a standard and on self-restraint. In other words, it depends on everyone sticking to the rules.

If there were a ‘free-for-all’ and people did not bother to stick with the standard, it would be impossible to predict the consequences. However, if the intention were to get as much of the communication as possible, that might lead to a ‘congestion collapse’ similar to that described in RFC 896, which occurred in the early days of the Internet before congestion control was introduced. While there is excess capacity, there are no problems, but as congestion begins to occur, data is delayed, and senders waiting for an acknowledgement may conclude that data has been lost and, consequently, retransmit. This adds to the traffic on an already congested network, making matters worse by creating more delays. In this way the capacity of the network diminishes and every communicating device gets a worse service. It has been observed (in RFC 896) that once a ‘saturation point has been reached, (…) the network will continue to operate in a degraded condition’ in which each item of data is transmitted several times rather than once.
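The retransmission spiral can be illustrated with a very simple model: when delays exceed a sender's timeout, data that is merely late is resent on top of normal demand, so the offered load grows even though real demand is constant. The capacity and demand figures here are invented for illustration, not measurements:

```python
# Toy model of the feedback loop behind 'congestion collapse'.
capacity = 100.0   # packets/sec the network can actually carry (assumed)
demand = 120.0     # packets/sec the senders genuinely want to send (assumed)

load = demand
for _ in range(10):
    delivered = min(load, capacity)
    delayed = load - delivered
    # Senders time out on delayed packets and retransmit them on top of
    # their normal demand, inflating the next round's offered load.
    load = demand + delayed

# Useful work per transmitted packet falls as duplicates accumulate.
goodput_ratio = capacity / load
print(f"offered load: {load:.0f} pkt/s, useful fraction: {goodput_ratio:.2f}")
```

In this sketch the offered load climbs round after round while delivery stays pinned at capacity, matching RFC 896's observation of a network that ‘continues to operate in a degraded condition’, with each item of data transmitted several times rather than once.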

The ethical ‘lesson’ here concerns behaviour that, although of immediate benefit to an individual, may, if practised widely, be detrimental to everyone – the common good.

In the examples that I've given you, fairness, duty, the distribution of harm and benefit, and the need for political skills, all of which are ethical issues, feature clearly in a technological context. These are only a few examples amongst many, but they do suggest that ethics is something that concerns designers and engineers, lending support to the case that the study of ethics is useful to technology developers.