Designing a distributed system brings its own problems. For example, the computers in a distributed system are joined by communication media that can stretch over thousands of miles, which adds a dimension to the design process: response time can become a problem. Another, equally serious problem is reliability; for example, a single hardware malfunction can bring down a poorly designed distributed system.
As an example of one design problem that a distributed systems developer has to face, consider that of replicated data. Replicating data is such a common technique that a large number of products are available that allow you to implement it without very much programming. A replicated database is a database that exists in the same form at a number of points in a distributed system. There are two reasons for having replicated databases. The first is reliability: when a system contains a number of replicated databases and one of them becomes unavailable – perhaps because of a hardware fault – another database can take over its role. The second reason is to improve response time. A designer of a distributed system will try to place a database close to its users, usually connected via a fast local area network. Often the original database is a long distance away and can only be accessed via slow internet connections; hence replicating the database and placing it close to the users usually results in a large reduction in response time.
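The reliability argument above can be sketched in code. The following is a minimal, hypothetical illustration – the class and method names are invented for this example, not taken from any product – of a client that reads from the first replica it can reach, failing over to another replica when one is unavailable:

```python
class Replica:
    """A toy database replica that may be unavailable (hypothetical)."""

    def __init__(self, name, available=True):
        self.name = name
        self.available = available
        self.data = {}

    def read(self, key):
        if not self.available:
            raise ConnectionError(f"{self.name} is down")
        return self.data.get(key)


class ReplicatedClient:
    """Tries each replica in turn; fails over when one is unreachable."""

    def __init__(self, replicas):
        self.replicas = replicas

    def read(self, key):
        for replica in self.replicas:
            try:
                return replica.read(key)
            except ConnectionError:
                continue  # this replica is down: fail over to the next
        raise RuntimeError("no replica available")


# The primary has suffered a hardware fault, so the read is
# served transparently by the backup replica.
primary = Replica("primary", available=False)
backup = Replica("backup")
backup.data["account:42"] = 100

client = ReplicatedClient([primary, backup])
print(client.read("account:42"))  # → 100
```

Real replication products hide this failover logic inside the database driver, which is why so little programming is needed to use them.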
However, replication comes at a cost: each replicated database needs to keep its data up to date and must coordinate with the other databases in order to do this. This gives rise to synchronisation traffic over the network that supports the databases, and can result in a very slow response time. Designing for data replication – choosing an amount of replication and a placement of the replicated data such that response time is lowered, yet traffic is not increased to the point where all the gains are nullified – is an art.
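The trade-off can be made concrete with a toy cost model. The function below is purely illustrative – the latency figures and the assumption that every write is propagated to every other replica are invented for this sketch, not measurements of any real system – but it shows how adding replicas cuts mean read latency while synchronisation traffic grows:

```python
def replication_costs(n_replicas, sites=10, writes=100,
                      lan_ms=2.0, wan_ms=120.0):
    """Toy model of the replication trade-off (illustrative numbers).

    Reads are served over the fast LAN when the caller's site holds a
    replica, and over the slow WAN otherwise; every write must be
    propagated to each of the other replicas, generating
    synchronisation traffic.
    """
    local_fraction = min(n_replicas, sites) / sites
    mean_read_ms = local_fraction * lan_ms + (1 - local_fraction) * wan_ms
    sync_messages = writes * (n_replicas - 1)  # one update per other replica
    return mean_read_ms, sync_messages

for n in (1, 2, 5, 10):
    read_ms, sync = replication_costs(n)
    print(f"{n:2d} replicas: mean read {read_ms:6.1f} ms, "
          f"{sync} sync messages per {100} writes")
```

With one replica every read pays the WAN penalty but there is no synchronisation traffic; with ten replicas reads are fast everywhere but each write fans out to nine other sites. The designer's art lies in finding the point between these extremes that suits the actual read/write mix.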