Unit outline

COMP5426: Parallel and Distributed Computing

Semester 1, 2026 [Normal evening] - Camperdown/Darlington, Sydney

This unit introduces and motivates the study of high-performance computer systems. Students will be presented with the foundational concepts pertaining to the different types and classes of high-performance computers, and with the technological context of current high-performance computer systems. Students will gain skills in evaluating, experimenting with, and optimising the performance of high-performance computers. The unit also prepares students to undertake more advanced topics and courses in high-performance computing.

Unit details and rules

Academic unit: Computer Science
Credit points: 6
Prerequisites: None
Corequisites: None
Prohibitions: COMP4426 or OCMP5426
Assumed knowledge: Experience with algorithm design and software development as covered in (COMP2017 or COMP9017) and COMP3027 (or equivalent units of study from other institutions)
Available to study abroad and exchange students: Yes

Teaching staff

Coordinator Albert Zomaya, albert.zomaya@sydney.edu.au
The census date for this unit availability is 31 March 2026

Type / Description / Weight / Due / Length / Use of AI

Written exam: Final exam (supervised)
Weight: 60%. Due: formal exam period. Length: 2 hours. AI prohibited.
Outcomes assessed: LO1 LO2 LO3 LO4 LO5 LO6

Written work: Assignment 1
Develop a parallel algorithm and implement it on shared-memory architectures. Evaluate its performance through rigorous testing, including measuring speedup, scalability, utilisation, and correctness. Submit a detailed report covering methodology and performance results.
Weight: 20%. Due: Week 08 (26 Apr 2026 at 23:59; closing date 26 Apr 2026). Length: several tasks. AI allowed.
Outcomes assessed: LO1 LO2 LO3 LO4 LO5 LO6

Written work: Assignment 2
Develop a parallel or distributed algorithm and implement it on shared-memory or distributed-memory architectures. Evaluate its performance through rigorous testing, including measuring speedup, scalability, utilisation, and correctness. Submit a detailed report covering methodology.
Weight: 20%. Due: Week 13 (31 May 2026 at 23:59; closing date 31 May 2026). Length: several tasks. AI allowed.
Outcomes assessed: LO1 LO2 LO3 LO4 LO5 LO6

Assessment summary

There are two programming assignments and one final exam.

To pass the unit a student must achieve (1) an overall mark of 50 or better and (2) at least 40% of the available marks on the final exam.

Assessment criteria

The University awards common result grades, set out in the Coursework Policy 2014 (Schedule 1).

As a general guide, a high distinction indicates work of an exceptional standard, a distinction a very high standard, a credit a good standard, and a pass an acceptable standard.

Result name / Mark range / Description

High distinction: 85–100
Distinction: 75–84
Credit: 65–74
Pass: 50–64
Fail: 0–49 (when you don’t meet the learning outcomes of the unit to a satisfactory standard)

For more information see guide to grades.

Use of generative artificial intelligence (AI)

You can use generative AI tools for open assessments. Restrictions on AI use apply to secure, supervised assessments used to confirm whether students have met specific learning outcomes.

Refer to the assessment table above to see whether AI is allowed for each assessment in this unit, and check Canvas for full instructions on assessment tasks and AI use.

If you use AI, you must always acknowledge it. Misusing AI may lead to a breach of the Academic Integrity Policy.

Visit the Current Students website for more information on AI in assessments, including details on how to acknowledge its use.

Late submission

In accordance with University policy, these penalties apply when written work is submitted after 11:59pm on the due date:

  • Deduction of 5% of the maximum mark for each calendar day after the due date.
  • After ten calendar days late, a mark of zero will be awarded.

This unit has an exception to the standard University policy or supplementary information has been provided by the unit coordinator. This information is displayed below:

1. For every calendar day up to and including ten calendar days after the due date, a penalty of 5% of the maximum awardable marks will be applied to late work.
2. For work submitted more than ten calendar days after the due date, a mark of zero will be awarded. The marker may elect to, but is not required to, provide feedback on such work.

Academic integrity

The University expects students to act ethically and honestly and will treat all allegations of academic integrity breaches seriously.

Our website provides information on academic integrity and the resources available to all students. This includes advice on how to avoid common breaches of academic integrity. Ensure that you have completed the Academic Honesty Education Module (AHEM), which is mandatory for all commencing coursework students.

Penalties for serious breaches can significantly impact your studies and your career after graduation. It is important that you speak with your unit coordinator if you need help with completing assessments.

Simple extensions

If you encounter a problem submitting your work on time, you may be able to apply for an extension of five calendar days through a simple extension. The application process differs depending on the type of assessment, and extensions cannot be granted for some assessment types, such as exams.

Special consideration

If exceptional circumstances mean you can’t complete an assessment, you need consideration for a longer period of time, or if you have essential commitments which impact your performance in an assessment, you may be eligible for special consideration or special arrangements.

Special consideration applications will not be affected by a simple extension application.

Using AI responsibly

Co-created with students, AI in Education includes lots of helpful examples of how students use generative AI tools to support their learning. It explains how generative AI works, the different tools available and how to use them responsibly and productively.

Support for students

The Support for Students Policy reflects the University’s commitment to supporting students in their academic journey and making the University safe for students. It is important that you read and understand this policy so that you are familiar with the range of support services available to you and understand how to engage with them.

The University uses email as its primary source of communication with students who need support under the Support for Students Policy. Make sure you check your University email regularly and respond to any communications received from the University.

Learning resources and detailed information about weekly assessment and learning activities can be accessed via Canvas. It is essential that you visit your unit of study Canvas site to ensure you are up to date with all of your tasks.

If you are having difficulties completing your studies, or are feeling unsure about your progress, we are here to help. You can access the support services offered by the University at any time:

Support and Services (including health and wellbeing services, financial support and learning support)
Course planning and administration
Meet with an Academic Adviser

WK Topic Learning activity Learning outcomes
Week 01 Introduction to High-Performance Computing and Key Challenges; C Programming Fundamentals; Function Pointers in C; Multithreading in C using pthread_create and pthread_join; Thread Synchronization Mechanisms. Lecture (2 hr) LO1 LO2 LO4 LO5 LO6
Parallel architectures (1), C Programming Fundamentals Tutorial (1 hr) LO1 LO2 LO5 LO6
Week 02 Parallel and Distributed-Memory Architectures; Multicore Systems; Pipelining; Computer Clusters and GPUs; Interconnection Networks; Race Conditions; Synchronization Mechanisms; Mutexes. Lecture (2 hr) LO1 LO2 LO3 LO4 LO5 LO6
Parallel and Distributed-Memory Architectures, Multithreading in C Tutorial (1 hr) LO1 LO5 LO6
Week 03 Parallel computing architectures including SIMD, GPUs, shared-memory multiprocessors, and distributed-memory multicomputers; performance optimisation techniques such as computational intensity analysis and loop unrolling; and classic parallel programming problems including the producer–consumer problem, semaphores, and condition variables. Lecture (2 hr) LO1 LO2 LO3 LO4 LO5 LO6
Parallel architectures, computational intensity analysis and loop unrolling, MPI and OpenMP programming Tutorial (1 hr) LO1 LO2 LO3 LO4 LO5 LO6
Week 04 Design and optimisation of parallel algorithms, including efficient matrix multiplication techniques, an in-depth study of CPU cache hierarchies, and methods for improving performance through temporal and spatial locality. Lecture (2 hr) LO1 LO2 LO3 LO4 LO5 LO6
Parallel algorithm design (general), efficient matrix multiplication techniques, MPI and OpenMP programming Tutorial (1 hr) LO1 LO2 LO3 LO4 LO5 LO6
Week 05 Parallel algorithm design for shared-memory architectures, including analysis of computational intensity; programming with OpenMP; synchronization mechanisms and deadlock conditions; and cache-aware optimisations such as scan (prefix sum) operations. Lecture (2 hr) LO1 LO2 LO3 LO4 LO5 LO6
Parallel algorithm design for shared memory machines, programming with OpenMP; synchronization mechanisms Tutorial (1 hr) LO1 LO2 LO3 LO4 LO5 LO6
Week 06 Parallel algorithm design for shared-memory machines, parallel computing architectures, parallel scan (prefix sum) operations, performance models including Amdahl’s Law, task-dependency graphs, matrix partitioning techniques, and SIMD architectures. Lecture (2 hr) LO1 LO2 LO3 LO4 LO5 LO6
Parallel algorithm design for shared memory machines, parallel scan (prefix sum) operations, task-dependency graphs, matrix partitioning techniques, MPI and OpenMP programming Tutorial (1 hr) LO1 LO2 LO3 LO4 LO5 LO6
Week 07 GPU programming (1) Lecture (2 hr) LO1 LO4 LO5 LO6
GPU programming (1) Tutorial (1 hr) LO1 LO4 LO5 LO6
Week 08 GPU programming (2) Lecture (2 hr) LO1 LO4 LO5 LO6
GPU programming (2) Tutorial (1 hr) LO1 LO4 LO5 LO6
Week 09 Analytical modeling of parallel systems, including performance evaluation and scalability analysis. Study of concurrency issues such as race conditions and critical sections, and the use of synchronization mechanisms including mutexes, semaphores, and condition variables. Implementation of parallel patterns and operations such as reductions and the producer-consumer problem, with emphasis on deadlock identification and avoidance in synchronization. Lecture (2 hr) LO1 LO2 LO3 LO4 LO5 LO6
Parallel algorithm design for distributed memory machines, concurrency issues, usage of mutexes, semaphores, and condition variables, MPI and OpenMP programming Tutorial (1 hr) LO1 LO3 LO4 LO5 LO6
Week 10 Design and analysis of parallel algorithms for distributed-memory systems, focusing on thread synchronization and coordination. Study of classic concurrency problems such as the producer-consumer problem, and strategies to detect, prevent, and resolve deadlocks in synchronization. Lecture (2 hr) LO1 LO2 LO3 LO4 LO5 LO6
Parallel algorithm design for distributed memory machines, thread synchronization, deadlocks, MPI and OpenMP programming Tutorial (1 hr) LO1 LO3 LO4 LO5 LO6
Week 11 Design and analysis of parallel algorithms for shared-memory and distributed-memory machines. Study and application of BLAS and LAPACK libraries for high-performance linear algebra operations, including Gaussian elimination. Introduction to MPI groups and communication contexts, with a focus on creating and managing MPI process topologies for efficient parallel computation. Lecture (2 hr) LO1 LO2 LO3 LO4 LO5 LO6
Distributed memory machines using MPI, MPI groups and communication contexts, MPI process topologies Tutorial (1 hr) LO1 LO2 LO3 LO4 LO5 LO6
Week 12 Analysis of consistency models in shared-memory architectures and cache coherency, including an in-depth study of coherence protocols and memory consistency guarantees. Practical implementation of parallel programs using OpenMP, emphasizing thread synchronization, avoidance of race conditions, and efficient execution of reduction operations. Techniques for performance optimization while maintaining correctness in multi-threaded programs are also covered. Lecture (2 hr) LO1 LO2 LO3 LO4 LO5 LO6
MPI and OpenMP programming, avoidance of race conditions Tutorial (1 hr) LO1 LO2 LO3 LO4 LO5 LO6
Week 13 Review Lecture (2 hr) LO1 LO2 LO3 LO4 LO5 LO6
Review Tutorial (1 hr) LO1 LO2 LO3 LO4 LO5 LO6

Attendance and class requirements

One 2-hour lecture per week

One 1-hour tutorial per week

Study commitment

Typically, there is a minimum expectation of 1.5-2 hours of student effort per week per credit point for units of study offered over a full semester. For a 6 credit point unit, this equates to roughly 120-150 hours of student effort in total.

Required readings

Lecture Notes

References:

- W. Gropp, Using MPI: Portable Parallel Programming with the Message-Passing Interface, 3rd ed., MIT Press, 2014.

- B. Lewis and D. J. Berg, PThreads Primer: A Guide to Multithreaded Programming, Prentice Hall, 1998.

- A. Grama, A. Gupta, G. Karypis, and V. Kumar, Introduction to Parallel Computing, 2nd ed., Addison-Wesley, 2003.

- M. J. Quinn, Parallel Programming in C with MPI and OpenMP, McGraw-Hill, 2003.

- D. B. Kirk and W.-m. W. Hwu, Programming Massively Parallel Processors: A Hands-on Approach, 3rd ed., Morgan Kaufmann, 2016.

- D. B. Kirk, W.-m. W. Hwu, and I. El Hajj, Programming Massively Parallel Processors, 4th ed., Morgan Kaufmann, 2022.

- T. Soyata, GPU Parallel Program Development Using CUDA, CRC Press, 2018.

- M. Nemirovsky and D. M. Tullsen, Multithreading Architecture, Morgan & Claypool Publishers, 2013.

- V. Eijkhout, Parallel Programming in MPI and OpenMP: The Art of HPC, Volume 2, 2022.

- B. Parhami, Introduction to Parallel Processing: Algorithms and Architectures, Plenum Press, 1999.

- H. Neeman, MPI Programming Model: Desert Islands Analogy, University of Oklahoma Supercomputing Center.
- W. Gropp and E. Lusk, An Introduction to MPI, Argonne National Laboratory.
- V. Eijkhout, Introduction to High-Performance Scientific Computing, 2016.
- V. Eijkhout, Parallel Computing for Science and Engineering, 2017.
- I. Foster, Designing and Building Parallel Programs, 1995.
- Introduction to Parallel Computing Tutorial, HPC@LLNL, https://hpc.llnl.gov.
- N. Matloff, Programming on Parallel Machines, University of California, Davis.
- Manual pages for MPI on Linux.
- V. Nagarajan, D. J. Sorin, M. D. Hill, and D. A. Wood, A Primer on Memory Consistency and Cache Coherence, 2nd ed., Morgan & Claypool Publishers.
- M. L. Scott, Shared-Memory Synchronization, Synthesis Lectures on Computer Architecture, Morgan & Claypool Publishers.

- Many related materials available on the Internet

 

Learning outcomes are what students know, understand and are able to do on completion of a unit of study. They are aligned with the University's graduate qualities and are assessed as part of the curriculum.

At the completion of this unit, you should be able to:

  • LO1. Develop the ability to design, analyse, and optimise parallel and distributed high-performance computing algorithms.
  • LO2. Gain a solid understanding of fundamental concepts in high-performance parallel and distributed computing, such as performance tuning, parallel execution models, communication mechanisms, computational intensity, deadlocks, Amdahl’s Law, memory consistency models, and cache behaviour.
  • LO3. Efficiently implement and optimise deadlock-free high-performance computing (parallel and distributed) algorithms using multithreaded programming models (Pthreads, OpenMP, MPI) and synchronisation mechanisms such as mutexes, condition variables, and semaphores.
  • LO4. Demonstrate knowledge of a range of high-performance computing architectures, including parallel architectures, multicore systems, clusters, GPUs, interconnection networks, SIMD systems, distributed-memory multicomputers, shared-memory multiprocessors, and vector processing units.
  • LO5. Demonstrate technical writing skills to communicate complex ideas clearly.
  • LO6. Understand the significance of high-performance computing and its impact on computer systems as a whole.
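Amdahl’s Law, named in LO2 and covered in the Week 06 lecture, has a compact closed form: if a fraction p of a program’s work parallelises perfectly over N processors while the remainder stays serial, the achievable speedup is

```latex
S(N) = \frac{1}{(1 - p) + p/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}
```

For example, with p = 0.9 and N = 8, S(8) ≈ 4.7; and however many processors are added, the speedup can never exceed 1/(1 − p) = 10.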

Graduate qualities

The graduate qualities are the qualities and skills that all University of Sydney graduates must demonstrate on successful completion of an award course. As a future Sydney graduate, the set of qualities have been designed to equip you for the contemporary world.

GQ1 Depth of disciplinary expertise

Deep disciplinary expertise is the ability to integrate and rigorously apply knowledge, understanding and skills of a recognised discipline defined by scholarly activity, as well as familiarity with evolving practice of the discipline.

GQ2 Critical thinking and problem solving

Critical thinking and problem solving are the questioning of ideas, evidence and assumptions in order to propose and evaluate hypotheses or alternative arguments before formulating a conclusion or a solution to an identified problem.

GQ3 Oral and written communication

Effective communication, in both oral and written form, is the clear exchange of meaning in a manner that is appropriate to audience and context.

GQ4 Information and digital literacy

Information and digital literacy is the ability to locate, interpret, evaluate, manage, adapt, integrate, create and convey information using appropriate resources, tools and strategies.

GQ5 Inventiveness

Generating novel ideas and solutions.

GQ6 Cultural competence

Cultural Competence is the ability to actively, ethically, respectfully, and successfully engage across and between cultures. In the Australian context, this includes and celebrates Aboriginal and Torres Strait Islander cultures, knowledge systems, and a mature understanding of contemporary issues.

GQ7 Interdisciplinary effectiveness

Interdisciplinary effectiveness is the integration and synthesis of multiple viewpoints and practices, working effectively across disciplinary boundaries.

GQ8 Integrated professional, ethical, and personal identity

An integrated professional, ethical and personal identity is understanding the interaction between one’s personal and professional selves in an ethical context.

GQ9 Influence

Engaging others in a process, idea or vision.

Outcome map


This section outlines changes made to this unit following staff and student reviews.

Changed the assignment due dates

Additional costs

There are no additional costs for this unit.

Site visit guidelines

There are no site visit guidelines for this unit.

Disclaimer

Important: the University of Sydney regularly reviews units of study and reserves the right to change the units of study available annually. To stay up to date on available study options, including unit of study details and availability, refer to the relevant handbook.

To help you understand common terms that we use at the University, we offer an online glossary.