1.1 What can classical computers do?
In this course, the term classical computers is used to refer to computers which use bits to encode information and carry out computations; i.e. they are binary based.
The complexity of tasks that any computer can complete is limited by the available time and computing resources. There are two routes to increasing the power of classical computing: one is to improve the hardware, which means increasing the size of the memory, the number of gates on a processor chip, or improving the speed of those elements. The other approach seeks to improve the software, i.e. the algorithms.

Thinking about the hardware, the improvement in computing power over the past 50 years has been enormous. The number of transistors on a microprocessor chip has doubled approximately every two years, a fact known as Moore’s law. Unfortunately, this trend cannot continue indefinitely, because one of the main ways that these improvements are realised is by shrinking the size of the circuit elements.
Improving the algorithms is the second possibility. The power of an algorithm can be expressed by stating how the ‘run-time’, which is the number of steps required to implement the algorithm, scales with the size of the task. Some algorithms have a polynomial run-time, meaning that if a procedure is to be carried out on a number, n, of elements, the run-time scales as n^x, where x is typically a small integer. Other algorithms have an exponential run-time, scaling as e^n.
The algorithms with polynomial run-time are considered to be much more useful than algorithms with exponential run-time, because exponential scaling means that only modest increases in the task size can exhaust available resources.
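To see how quickly exponential scaling overtakes polynomial scaling, here is a short sketch (illustrative step counts only) comparing n^3 and e^n for a few task sizes:

```python
import math

# Compare a polynomial step count (n^3) with an exponential one (e^n).
# By n = 30 the exponential count is several hundred million times larger.
for n in (5, 10, 20, 30):
    poly = n ** 3
    expo = math.exp(n)
    print(f"n = {n:2d}: n^3 = {poly:>6d}, e^n = {expo:.2e}, ratio = {expo / poly:.2e}")
```

Even for these modest values of n, the exponential count races ahead, which is why exponential-time algorithms quickly exhaust any available computing resources.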
Throughout this course there is a series of exercises for you to work through. For some of these exercises you can supply your answer in the response boxes provided; others will require you to work through your calculations on paper.
Exercise 1
Consider some algorithms that are carried out on two data sets – one with n = 10 elements and another with n = 100 elements. The run-time of one algorithm scales as n^3 and the run-time of another algorithm scales as e^n. How do the run-times of the two algorithms compare for the smaller data set? How do the run-times compare for the larger data set?
Answer
For the smaller data set, the run-times for the two algorithms are in the ratio e^10 / 10^3 = 22026 / 1000 ≈ 22, whereas for the larger data set, the run-times for the two algorithms are in the ratio e^100 / 100^3 ≈ 2.7 × 10^43 / 10^6 = 2.7 × 10^37. The time to carry out the algorithm with the exponential run-time soon becomes unfeasibly long as the size of the data set increases.
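The ratios in the answer above can be checked numerically; a minimal sketch:

```python
import math

# Ratio of exponential to polynomial run-time, e^n / n^3,
# for the two data-set sizes in Exercise 1.
for n in (10, 100):
    ratio = math.exp(n) / n ** 3
    print(f"n = {n}: e^n / n^3 = {ratio:.1e}")
# For n = 10 the ratio is about 22; for n = 100 it is about 2.7e37.
```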
OpenLearn - Introduction to quantum computing
Except for third party materials and otherwise stated, this content is made available under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 Licence; full copyright detail can be found in the acknowledgements section. Please see the full copyright statement for details.