An introduction to software development

3.2 A historical perspective

Manchester University was the home of the world’s first stored-program computer, the Small-Scale Experimental Machine – also known as ‘Baby’ – which ran its first program on 21 June 1948 (Tootill, 1998). Baby’s first run was of a 17-line program that calculated the highest proper factor of an integer. This was not a mathematical problem of much significance, but it was chosen as a proof of concept because it exercised all seven instructions in Baby’s instruction set (Enticknap, 1998).
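To give a flavour of that first computation, the sketch below (in Python, for readability) finds the highest proper factor of an integer by repeated subtraction – historical accounts note that Baby had no divide instruction, so divisibility had to be tested this way, and that the input used on the first run was 2 to the power 18. The code is an illustration of the idea, not a transcription of the original 17-line program.

def highest_proper_factor(n):
    # Try candidate factors from n - 1 downwards; the first candidate
    # that divides n exactly is the highest proper factor.
    candidate = n - 1
    while candidate > 1:
        remainder = n
        # Baby had no divide instruction: test divisibility by
        # repeatedly subtracting the candidate from n.
        while remainder > 0:
            remainder -= candidate
        if remainder == 0:
            return candidate
        candidate -= 1
    return 1  # n is prime

print(highest_proper_factor(2 ** 18))  # prints 131072, i.e. 2 ** 17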

This trivial problem was, arguably, the beginning of the modern information age. At that time, a simple software process was sufficient: the program was crafted by inspection of the problem and debugged until it was correct. Things didn’t stay that simple for long. By the early 1990s, problem complexity had increased significantly, with code complexity increasing in turn by several orders of magnitude; for instance, by 1991 the IBM CICS (Customer Information Control System) had grown to 800,000 lines of code.

A first response to the need to manage this growing code complexity was to provide ever richer structures within programming languages. High-level programming languages were developed to offer greater abstraction, often incorporating concepts from the problem domain so that more complex code could address more complex problems. There was a steady growth in sophistication, leading to the programming languages we still use today. Then the concept of software architecture emerged, placing recurring structures of code and design thinking at the heart of development; these structures were independent of any programming language and could be used to organise large systems (Bass et al., 2003).

Coincident with the rise in code complexity was a growing realisation of the importance of an engineering approach to software; here, the traditional engineering disciplines inspired early software development processes, such as the waterfall model. With architecture at the heart of code design, problem complexity also began to be addressed through software requirements engineering – see, for instance, Robertson and Robertson (2006). This complemented code design through phases of elicitation, modelling, analysis, validation (‘are we addressing the correct problem?’) and verification (‘have we addressed the problem correctly?’). Software requirements engineering thus led from system requirements to software requirements specifications on which architectural analysis could be performed.

Things did not go smoothly with early development processes. As early as 1970, Royce saw the need – based on his extensive experience of developing embedded software – to iterate between software requirements and program design, and between program design and testing. For software, the early stages of development could not always be completed before the later stages commenced. Indeed, it was Royce’s enhanced waterfall model, able to cope with incomplete requirements, that inspired a generation of iterative and incremental software processes (Royce, 1970).

The waterfall model also assumed that a stable problem exists on which to base development. This proved adequate for early applications. For instance, a very early business use of computers came about in 1951, when the Lyons Electronic Office (LEO) was first used for order control and payroll administration (Bird, 1994). Early applications typically involved scheduling, reporting and ‘number-crunching’, and although they posed technical challenges that were considerable for the time, what amounted to a first-mover advantage (Grant, 2007) meant that the Lyons company enjoyed a stable environment in which to explore its use of computers. However, the drive for more sophisticated business computing meant that developers soon had to deal with the volatility of the business world and the way this affected its relationship with the formal world of the computer. The waterfall assumption therefore quickly showed itself to be inadequate for many real-world applications.

Volatility also arises from the nature of technology itself: rapid technological change drives changes in the context in which software is deployed, so any connection between the informal world outside the computer and the formal world within it must link two volatile targets. Because of this volatility, the relationship between requirements and architectures is an order of magnitude more complex than in stable systems, and the treatment of it offered by traditional techniques, such as those derived from Royce’s processes, is increasingly seen as inadequate. Indeed, such volatility is now recognised as one of the greatest challenges facing software development.