
Man vs machine

Updated Thursday 6th August 1998

Should we worry about robots one day getting tired of being the junior partners?

Image: Boris Johnson, mayor of London, and a Dalek, master of the universe. Copyright: BBC

About the discussion

It has been said that the question is not whether machines can think, but whether human beings do. Today’s Open Forum discusses the relationship between the human and the mechanical elements of our society.

We sometimes refer to the parent/child relationship between us and our machinery. Is this analogy workable?

John Monk suggests that the parent/child analogy implies a certain type of relationship with a certain type of control. He is more comfortable with the idea of technology as an extension of ourselves, the analogy of the cyborg, whereby technology helps us to operate in an environment which is itself created by our technology.

John McCrone agrees. He says technology only has real impact when it becomes a seamless extension of ourselves. At the moment this relationship is very clunky and the real impact of our technologies is still to come.

He also says that it is a false dream to create a machine that thinks like us – why would we want to do this? We’ve lived through enough of the computer revolution to predict what will happen, which is that the technology will act as an amplifier of our own existing personalities. For example, technology allows solitary people to become more isolated, but also enables sociable people to communicate more.

So what do we want from our computers - personalities and idiosyncrasies or plain facts?

Michael Marshall Smith suggests that we want both. Whilst we want them to be accurate and reliable, we also prefer machines with more humanity. When we’re isolated, for example working from home, we like to give our machines personalities to get a sense of companionship. The more complex machinery becomes, the less predictable its malfunctions, the less we understand it and the more we see these machines as having a consciousness of their own. This increasingly complex relationship will lead to stronger bonds with our machines in the future. We just have to look at the people we like – often it’s our friends’ quirks that draw us to them. Perfect people are very boring.

So, as machines get more complex, how will we know when we’ve created a conscious computer?

John McCrone suggests that in the 1970s there was a strong drive to make a conscious machine, and all it proved was how little we understood about what was needed. To develop consciousness it is necessary to have a life history – to grow up, like a child, and reach an adult stage. What we really want is for computers to be a transparent extension of ourselves. The real issues are how much our personality will change and where the boundaries will blur. We treat our machines as surrogate humans because they are noticeable, but the thrust of industry is to make them unnoticeable. Once they are transparent we will look through them to other people.

Michael Marshall Smith disagrees. While he concedes the purpose of some machines is to enable transparent communication between people, the purpose of other machines is to entertain or to carry out tasks which are nothing to do with anyone else. These machines we like to interact with, to engage with – transparency is not always appropriate.

John Monk says that in a sense machines can never be completely transparent – they enable us to communicate with other people, but every technology will distort these interactions. It is this element of distortion that presents idiosyncrasies and gives the technology its ‘personality’.

So if we ever do succeed in creating a conscious machine, will it thank us for doing so?

John Monk suggests that it would be difficult to imbue a machine with any morality other than our own. It would act as a mirror to our own behaviour and personality so if the machine despised us, this would say something quite worrying about ourselves.

Will technology make us more mechanistic and less humane?

Michael Marshall Smith suggests that new developments only allow us to do what we would have done anyway. Much of what we do already runs on mechanistic principles – for example, a hospital has a certain amount of money with which to save a certain number of lives; that is not a purely humane decision. The question we should be asking of our new technology is how well it will interact with human nature – will it stop us from doing, or enable us to do, the things we want to do anyway? If we use it for ‘bad’ purposes, that’s not the machine’s fault, it’s ours. John Monk points to the technocratic governments of history that tried to govern on entirely rational principles. The fact that they all failed offers hope for humanity.

John McCrone adds that people tend to think consciousness is all about brains, but it is much more to do with society and culture – how we’re trained, how we’re brought up, how we interact with others. Focussing on hardware and replicating our own hardware in computers misses the point. What we want is a transparent link to the world and a higher quality of social feedback, and computers are very good at connecting us to society.

It’s true that technology gives us more opportunity to do bad things, but it also means more people are watching, which encourages people to stick to the socially agreed morality. Moral issues can sort themselves out as long as there is real access to democracy.

Is our world better as a result of technology?

As John Monk points out, we use technology to create an environment, then develop more technology to survive in that environment. For example, technology allowed us to live in cities, but as a result certain viruses arose, so we had to develop medicine to deal with them. We are on a treadmill from which it would be very difficult to escape.

John McCrone also points out that technology improves many aspects of our lives, but makes some aspects worse.

John Monk agrees – some people will be better off with new technology, some worse. We should aim for a fair balance, but it’s difficult to predict how it will turn out.

Michael Marshall Smith points out that there is a tendency towards dystopia in science fiction because it’s easier to write about. We will probably rub along with machines in the same way we have rubbed along with each other. All we can do is keep our eyes on the ball and try to push it in the right direction.

Take it further

Virtual/Embodied
Edited by John Wood, Routledge

Escape Velocity
Mark Dery, Hodder & Stoughton

One of Us
Michael Marshall Smith, HarperCollins

What You Make It
Michael Marshall Smith, HarperCollins

 
