Computational science underpins modern science, engineering and finance. It provides numerical solutions to problems that cannot be solved analytically, and it allows us to explore problems that are not amenable to experiment. This unit focuses on the foundation of numerical computing: how numbers are represented and manipulated by computers. Understanding the representation of integers and real numbers, and their fundamental limitations, is critical for accurate numerical calculations.

For example, if you add the value 0.1 a total of one million times, the exact answer is 1,000,000 x 0.1 = 100,000. However, when you do this on a computer the answer might be 100,958.3. This is a limitation of the floating-point representation of numbers in every modern computer, yet most people are unaware of it.

In this unit you will learn about number systems and binary; two's complement representation for integers; fixed- and floating-point representations for real numbers; and precision, overflow, rounding and truncation errors. We will illustrate these with practical examples, and show how mistakes in computational calculations can result in catastrophes such as the explosion of the Ariane 5 rocket. All activities will be done in Python 3, a widely used modern programming language.
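The repeated-addition example above can be tried directly in Python. Note that Python's built-in floats are double precision, so the accumulated error is far smaller than the single-precision figure quoted above, but the sum is still not exact, because 0.1 has no exact binary representation:

```python
# Add 0.1 one million times. The exact answer is 100,000, but each
# addition rounds to the nearest representable double, so a small
# error accumulates.
total = 0.0
for _ in range(1_000_000):
    total += 0.1

print(total)              # close to, but not exactly, 100000.0
print(total == 100000.0)  # False
```

Running this shows a value slightly above 100,000; the drift grows with the number of additions and shrinks with the precision of the format used.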
4 weeks of online tutorials, programming activities and online video lectures
online quizzes and a computational exam
COSC1003 or COSC1903