Jo Martin

Ms Jo Martin

Thesis work

Thesis title: Creating Deep Learning Enabled Computer Vision Models for the Interpretation of Embodied Interaction in Performance

Supervisors: Kazjon Grace, Lian Loke

Thesis abstract:

Keywords: embodied interaction, deep learning, computer vision, human activity and motion recognition.

Recent advances in artificial intelligence, particularly in computer vision, driven by the increasing volume of visual data produced by new media technologies and the growing success of convolutional neural networks at recognising people and objects, have initiated a growth in research into how machine learning can be applied to everyday problems. Through experimentation with machine learning models, specifically deep learning, this research investigates how the technology can be used in design to produce better outcomes for users. Following the identification of a gap in the literature on the application of machine learning models in the design process, the study explores the current state of machine-learning-enabled computer vision models that could potentially be used in design to improve the user experience. Specifically, the research centres on computer vision and gesturing, defined as a semiotic pose, movement or action that can be used as an instruction to technology.

The study initially explores pose recognition, specifically training customised pose models using OpenPTrack V2, an open-source, scalable, multi-camera, multi-person tracker for creatives and educationalists to develop interactive experiences (OpenPTrack.org). This research will be extended to include person-to-person and person-to-object interactions in performance, expanding pose recognition to human activity recognition and adding object recognition, to test whether this can have a positive impact on recognising meaningful movements. Testing and evaluation determine whether generalised datasets or custom datasets are more successful in training individualised embodied interactions, including the recognition of variations of movement or meaningful actions within a more generalised human activity.
These findings, concerning the suitability of these datasets to a unique performer, have wider application to other fields, such as sports performance training, industrial safety and assistive technology. It is also necessary to consider the confidence levels of models and how they apply to specific tasks: in particular, what level of confidence in the accuracy of a pose is required for a performance, and whether a higher degree of accuracy is required for more subtle or localised movement. These findings have wider application, particularly regarding the accuracy required for assistive technologies or for testing safety protocols in virtual reality.

The first case study investigates how data derived from these machine learning models, via software and hardware such as OpenPTrack and Kinect, together with other devices that can track human movement, such as inertial measurement units (IMUs), forms an intrinsic component of scenography and visually interprets meaningful movement to create an artwork. The study will produce artefacts that demonstrate and explore the application and future potential of this technology. Another potential by-product of the research is the foundation of a dataset for embodied movement that can be utilised by other designers and researchers, and a framework for further extension.
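To illustrate the idea of applying a confidence threshold to pose estimates, the sketch below filters detected keypoints by their detection score. This is a minimal, hypothetical example: the keypoint format (name, x, y, score) and the 0.5 cutoff are assumptions for illustration, not OpenPTrack's actual output format or a recommended threshold.

```python
# Hypothetical sketch: keeping only pose keypoints whose detection
# confidence meets a threshold. The data format and threshold are
# illustrative assumptions, not OpenPTrack's real API or output.

CONFIDENCE_THRESHOLD = 0.5  # assumed cutoff; subtle movements may warrant a higher one

def reliable_keypoints(pose, threshold=CONFIDENCE_THRESHOLD):
    """Return only the keypoints whose detection score meets the threshold."""
    return [kp for kp in pose if kp["score"] >= threshold]

# Example frame with one low-confidence keypoint (right_wrist).
pose = [
    {"name": "left_wrist",  "x": 0.31, "y": 0.52, "score": 0.91},
    {"name": "right_wrist", "x": 0.68, "y": 0.49, "score": 0.34},
    {"name": "head",        "x": 0.50, "y": 0.12, "score": 0.88},
]

kept = reliable_keypoints(pose)
print([kp["name"] for kp in kept])  # the low-scoring right_wrist is dropped
```

In a performance setting, the threshold becomes a design parameter: raising it reduces spurious triggers from uncertain detections, at the cost of missing genuine but subtle movements.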