ACAIRA Summer School
11-13 September 2024
The Aston Centre for Artificial Intelligence Research and Application (ACAIRA) proudly hosted its first AI for Healthcare-themed summer school, supported by the Engineering for Health Centre (E4H). There was a packed programme over three days. Participants were inspired by lectures from invited experts on trustworthy and ethical AI, AI-driven healthcare solutions, and real-world insights from clinicians at the frontline of medicine. They then immersed themselves in a team competition, developing and pitching their own ideas for innovative research projects at the intersection of AI and healthcare, ensuring the responsible application of AI.
Out of 100+ applicants from Aston, Keele, London, Loughborough, Sheffield and Buckingham, 26 bright minds were selected to attend. They collaborated, learned and networked, capping off the experience with a social dinner that fostered lasting connections.
The event was part of ACAIRA’s mission to educate and inspire the next generation of AI leaders, equipping participants with the skills to develop responsible and impactful AI systems.
"I had the privilege of attending the first ACAIRA Summer School 2024, and it was truly an enriching experience in every sense. Both professionally and personally, it exceeded my expectations and offered invaluable opportunities for learning and growth.
"As an attendee, I had the opportunity to engage with leading experts in the field, network with like-minded professionals from both the technology (AI) and healthcare sectors, and immerse myself in a wealth of new ideas and cutting-edge developments."
Paula Atim
MSc Artificial Intelligence student at Aston University
Participant of the ACAIRA Summer School 2024
Day 1 (11 September)
09:00-09:30 | Registration
09:30-09:45 | Welcome - Prof Aniko Ekart, Dr Ulysses Bernardet (Aston University)
09:45-10:15 | AI Foundations - Dr Shereen Fouad (Aston University)
10:15-10:45 | Project Team Building and Briefing - Dr Shereen Fouad, Dr Martin Rudorfer (Aston University)
10:45-11:00 | Coffee Break
11:00-12:00 | Talk 1 - Dr Jianbo Jiao (University of Birmingham)
12:00-13:00 | Talk 2 - Dr Denis Newman-Griffis (University of Sheffield)
13:00-14:00 | Lunch Break
14:00-15:00 | Talk 3 - Dr Arvind Rajasekaran (Sandwell and West Birmingham NHS Hospitals Trust)
15:00-16:00 | Talk 4 - Dr Peter Lewis, Prof Stephen Marsh (Ontario Tech University)
16:00-18:00 | Project Work Kick-Off
Day 2 (12 September)
09:00-10:00 | Project Work
10:00-11:00 | Talk 5 - Dr Heather Rose (Aston University)
11:00-12:00 | Talk 6 - Dr Joseph Alderman (University of Birmingham, The Alan Turing Institute)
12:00-13:00 | Lunch Break
13:00-18:00 | Project Work
Day 3 (13 September)
09:00-12:00 | Project Pitches
12:00-13:00 | Panel Deliberation
13:00-14:00 | Lunch Break
14:00-16:00 | Feedback and Discussion
17:00-20:00 | Dinner
Self-Supervised Learning and Applications to Healthcare
Abstract
With the development of computing power and the availability of large-scale data, modern machine learning, especially deep learning, has proven highly effective in many areas, in some cases even outperforming human experts. However, most existing deep models rely on human-annotated training data and do not generalise well, and in many scenarios (e.g. healthcare) such annotation is difficult or even infeasible to acquire. As a result, learning representations purely from the data itself is crucial. In this talk, I will give an introduction to a technique that achieves this -- self-supervised learning -- with corresponding applications to healthcare. General self-supervised visual representation learning will be introduced, with typical approaches including but not limited to pretext-task design and contrastive learning. Self-supervised learning with sequential video data and multi-modal data will be introduced as well. Finally, applications to various healthcare scenarios will be presented.
Bio
Dr Jianbo Jiao is currently an Assistant Professor in the School of Computer Science at the University of Birmingham, a Royal Society Short Industry Fellow, and a Visiting Researcher at the University of Oxford, United Kingdom. Before joining Birmingham, he was a Postdoctoral Researcher in the Department of Engineering Science at the University of Oxford. He was the recipient of the Hong Kong PhD Fellowship Scheme (HKPFS). He was a Visiting Scholar with the Beckman Institute at the University of Illinois at Urbana-Champaign. His research interests include Machine Learning, Computer Vision and their applications to Healthcare.
For more details please refer to his personal homepage and the web page of his research group.
Equity by design: Representation and ethics in health AI systems
Abstract
AI research and development often takes data as a starting point and focuses on building solutions to specific problems. But what happens when the data you have don’t tell you what you think, or if the solution you’re building is addressing the wrong problem? Developing equitable AI for healthcare requires taking a holistic approach that accounts for the data we use, the decisions we make designing AI systems, and the way those systems are used in practice, to make AI applications ethical, equitable, and effective. Drawing on work in developing and analysing AI systems for information about human function and disability, I will illustrate often-overlooked aspects of AI design and research practice with an outsize impact on how AI systems materialise people’s health, well-being, and disability in the world, and discuss emerging critical tools to inform work as a responsible health AI practitioner.
Bio
Denis Newman-Griffis (they/them) is a Lecturer in Data Science in the University of Sheffield Information School, a Research Fellow of the Research on Research Institute, and Co-Chair of the UK Young Academy. Their work investigates principles and practices of responsible data science and AI, with a particular focus on healthcare and disability. They are leading funded projects on responsible AI in research funding and in organisational practice across public and private sectors. Denis’ work on NLP for information on human function and disability was recognised with the American Medical Informatics Association’s Doctoral Dissertation Award. Denis is a proudly queer and neurodivergent academic and committed to fostering diversity of identity, perspective, and experience around the AI table.
Opportunities and Challenges for Health AI
Abstract
The talk will present a health leader's perspective on the AI landscape in the NHS. The opportunities to harness the enormous potential of AI applications in health, from improving public health and increasing the efficiency of healthcare delivery to accelerating drug discovery, will be balanced against the considerable challenges posed by digital infrastructure, knowledge gaps, uncertainties around regulatory processes and the complexities of the socio-technical landscape of healthcare delivery.
Bio
Dr Arvind Rajasekaran is a Consultant Respiratory Physician at Sandwell and West Birmingham Hospitals NHS Trust. He has held leadership roles in the fields of Health Education, operational management and clinical services development. He is an alumnus of the NHS Digital Academy and collaborates with Aston University on developing and establishing standards for explainable AI. He is a Deputy Chief Medical Officer for the Trust with Quality and Safety of Care as the principal portfolio.
Trust: I know you think you understand what you thought I said but I'm not sure you realize that what you heard is not what I meant
Abstract
Trust. Trustworthiness. Trustworthy AI. Trusted platforms, trustless systems, and zero-trust architectures. The language of trust permeates the design and use of technology. But what does it really mean? In this session, Steve Marsh, Professor of Trust Systems, and Peter Lewis, Canada Research Chair in Trustworthy AI, will discuss the phenomenon of trust in society and technology, including in today's AI systems. We will explore why much of the discourse is based on misunderstandings, and, hopefully, leave you with some new ways of thinking about technology, its failings, and people's attitudes towards it.
Bio - Stephen Marsh
Stephen Marsh is a Professor of Trust Systems at Ontario Tech University. His research expertise covers areas as diverse as human-computer interaction, wisdom, trust, regret, forgiveness, energy management, hope, privacy, communications security, socially adept technology, and democracy. He thinks about Trustworthy AI from the perspective of AI trusting people as well as the other way around. His seminal work brought together the disciplines of cognitive science, psychology, philosophy, sociology and the computational sciences, founded the new research field of Computational Trust, and has continued to influence the field for almost three decades.
He worries that we still haven’t got it right.
His current work examines the intersection of hope, grace and technology for people – what it means, why it is important and how to make it work so that hope isn’t crushed and people feel grace when they interact with artificial systems. It’s a work in progress.
Steve has taught online, face-to-face and hybrid courses (but he loves to teach online!), with a mix of Ludic, Socratic and Ipsative teaching and learning methods. In 2022 Steve was awarded the Faculty’s Teaching Excellence Award, for which he is truly grateful because it means his colleagues notice and care. He was also nominated for the Student Teaching Award in 2021 and 2022, which is also amazing because he tries very hard to be a good teacher!
Steve is neurodivergent and lives on a nano-farm in Eastern Ontario, from where he builds stuff, teaches, makes music (he even has a couple of tunes on Spotify and Apple Music!), draws (badly), writes (Trust Systems the textbook is freely available as an Open Educational Resource; he is currently working on a fiction trilogy and a non-fiction book about Hope), blogs occasionally, writes an occasional newspaper column about motorcycle life, and shares life with people, dogs, cats, horses, pigs, chickens and a hamster. Erstwhile sheep, goats and lizards occupied space too. He quite possibly also has bats in the belfry.
Bio - Peter Lewis
Dr Peter Lewis holds a Canada Research Chair in Trustworthy Artificial Intelligence at Ontario Tech University, Canada, where he is an Associate Professor and Director of the Trustworthy AI Lab. Peter’s research advances both foundational and applied aspects of AI and draws on extensive experience applying AI commercially and in the non-profit sector. He is interested in where AI meets society, and how to help that relationship work well. His current research is concerned with challenges of trust, bias, and accessibility in AI, as well as how to create more socially intelligent AI systems, such that they work well as part of society, explicitly taking into account human factors such as norms, values, social action, and trust.
He is Associate Editor of IEEE Transactions on Technology & Society, IEEE Technology & Society Magazine (TSM) and ACM Transactions on Autonomous and Adaptive Systems (TAAS), a board member of the International Society for Artificial Life (ISAL) with responsibility for Social Impact, and Co-Chair of the Steering Committee for the IEEE International Conference on Autonomic and Self-organizing Systems (ACSOS). He has published over 100 papers in academic journals and conference proceedings, as well as the foundational book Self-aware Computing Systems: An Engineering Approach, in 2016. He has a PhD in Computer Science from the University of Birmingham, UK.
Tackling algorithmic bias: start with the data
Abstract
There’s a lot of excitement about medical artificial intelligence, but what does the evidence tell us? This talk will explore some surprising ways AI algorithms have been shown to fail, and what we can do to improve the situation moving forwards.
Bio
Dr Joseph Alderman is a PhD student at the University of Birmingham and an anaesthesia and intensive care doctor in the NHS. He is a researcher for the STANDING Together initiative, focusing on understanding and tackling algorithmic bias. Joe is also co-organiser of the Alan Turing Institute’s Clinical AI interest group, and co-lead for the Participatory Research Theme at Data Science for Health Equity (DSxHE).
Trials and Tribulations of AI in Healthcare
Potential Difficulties and Implementing Solutions
Abstract
Coupling AI with traditional clinical methodologies has the potential to improve speed, diagnostic accuracy and accessibility within healthcare. For example, advances in analysis of medical imaging and AI have been seen across a multitude of clinical areas including brain tumours, dementia, breast cancer and kidney disease and span all stages of life.
To successfully implement AI in healthcare, there are key points in the development pathway that require careful thought and planning to achieve the desired benefit for patients and clinicians. From conception to clinical implementation, we will explore the roadmap for developing a medical AI tool, identify potential pitfalls, and examine current solutions and mitigation planning.
Key areas of discussion will focus on:
1. Research-accessible data in high volumes, with appropriate standardised acquisitions.
2. Meaningful measures that reflect disease metrics and patient experience, and definitions of “validated biomarkers” and “scientific validity”.
3. Appropriate software development for the clinical space.
4. Legality of development of AI in healthcare.
Bio
Heather Rose is a research scientist specialising in the integration of AI technologies within clinical environments. Focusing on medical imaging, Heather has extensive experience in AI biomarker development, the digitisation of clinical endpoints and the evaluation of digital health technologies to support clinical development. Her projects cover the full ecosystem of clinical research, ranging from proof-of-concept studies to devising roadmaps for AI biomarker implementation and collaborating with stakeholders to achieve programme objectives.
Heather has experience in both academic and industry settings. Currently, she collaborates with scientific, clinical, and industrial partners on cutting-edge projects in medical imaging and AI applications.
Ulysses Bernardet, Aniko Ekart, Alexandros Giagkos, Alina Patelli, Amit Chattopadhyay, Antonio Fratini, Chloe Barnes, Farzaneh Farhadi, Harry Goldingay, Hassan Khan, Martin Rudorfer, Shereen Fouad, and Zhuangzhuang Dai
If you have any questions or queries, please get in touch via email.