On May 21 and 22, Professor Sir Nigel Shadbolt, PI of the EWADA project, gave two public lectures on AI, its risks, and regulation.
On May 21, 2024, Professor Shadbolt delivered the prestigious Lord Renwick Memorial Lecture, ‘As If Human: the Regulations, Governance, and Ethics of Artificial Intelligence’.
In this hour-long seminar, Nigel discussed the decades-long history of alternating enthusiasm for and disillusionment with AI, as well as its more recent achievements and deployments. These recent developments have led to renewed claims about the transformative and disruptive effects of AI, but there is growing concern about how we regulate and govern AI systems and ensure that such systems align with human values and ethics. In this lecture, Nigel reviewed the history and current state of the art in AI and considered how we might address the challenges of regulation, governance, and ethical alignment of current and imminent AI systems.
On May 22, 2024, Nigel delivered the Lord Mayor’s Online Lecture, ‘The Achilles’ Heel of AI: How a major tech risk to your business could be one you haven’t heard of—and what you should do’.
In this talk, Nigel discussed a critical challenge: the risk of model collapse. This phenomenon, in which AI models become unstable or cease to function effectively, is a looming threat with profound implications for our reliance on this critical technology.
Model collapse stems from training or refining AI models on AI-generated data rather than on information produced directly by human beings, or by devices other than the AI systems themselves. It arises when AI models generate terabytes of new data that contain little of the originality, innovation, or variety of the original information used to “train” them, or when AI models are weaponised to generate misinformation, deep fakes, or “poisoned” data. The result can be a downward spiral of progressively degraded output, culminating in model collapse. The consequences could be far-reaching, potentially resulting in financial setbacks, reputational damage, and job losses.
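The feedback loop described above can be caricatured in a few lines of Python (a hypothetical toy sketch, not code from the lecture): each “generation” of a model is fitted only to samples drawn from the previous generation, with no fresh human-generated data, and the diversity of its output tends to shrink.

```python
import random
import statistics

# Toy sketch of the model-collapse feedback loop: the "model" here is
# just a Gaussian distribution. Generation 0 is fitted to "human" data;
# every later generation is fitted purely to synthetic samples drawn
# from the generation before it.
random.seed(0)

mu, sigma = 0.0, 1.0          # generation 0: fitted to "human" data
history = [sigma]
for generation in range(1, 201):
    # Train the next model only on the previous model's output.
    synthetic = [random.gauss(mu, sigma) for _ in range(20)]
    mu = statistics.fmean(synthetic)
    sigma = statistics.stdev(synthetic)
    history.append(sigma)

print(f"stddev after   1 generation : {history[1]:.3f}")
print(f"stddev after  50 generations: {history[50]:.3f}")
print(f"stddev after 200 generations: {history[200]:.3f}")
```

Because no new human-generated data ever enters the loop, the fitted standard deviation drifts downwards over the generations: the model’s output grows progressively less varied, a crude analogue of the loss of originality and variety that drives collapse in real AI systems.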
Nigel delved into this little-known risk, drawing on insights from his own research and that of others to explore how the quality and provenance of data are too often overlooked in business decisions about the implementation and use of AI tools. Yet data plays a pivotal role in determining these systems’ reliability, effectiveness, and value to the bottom line.
Nigel concluded by discussing potential solutions for mitigating model collapse and outlining a roadmap for businesses to build a strong data infrastructure on which to base their AI strategies. Such strategies provide powerful knowledge, understanding, and tools for navigating the complexities of this new frontier of technology safely and effectively.