Looking back at EWADA 2024

A reflection on 2024 and a look forward to 2025

by: Jun Zhao

 
19 Dec 2024

The year 2023 witnessed the rapid rise of generative AI technologies and the first global AI Safety Summit, hosted by the UK government at Bletchley Park. Amidst this unprecedented change, debates about data, AI algorithmic processing, and their impacts on national safety, citizen healthcare, and career and education opportunities have intensified like never before.

The goal of the EWADA project is to develop new technical and legal infrastructures that enable more equitable and ethical experiences for users. Following two successful years of foundational technology development, the third year has seen the refinement of privacy-preserving AI applications, the implementation of a protocol and schema for users’ data terms of use, and an ongoing large-scale, cross-cultural study of users’ values regarding data autonomy.

Additionally, multiple studies have been conducted with specific user groups, such as social media users, users of health trackers, and children. Seven new prototypes were produced to develop capabilities in areas like personal data queries, children’s data autonomy, new forms of social interaction, and personal health tracking.

These technical explorations allow us to delve deeply into the open challenges around scalability, the trade-offs between ethical computing and utility, and the barriers users face in opting for ethical alternatives. Against the backdrop of generative AI (genAI) development, which has intensified concerns over data privacy, fairness, and control over personal information, EWADA’s work offers critical input into creating user-centred, transparent frameworks for ethical AI interactions. This is particularly relevant as genAI’s rapid adoption raises pressing questions about user consent, data rights, and the ethical use of personal information in automated systems.

Our research so far has shown that users, including children, generally welcome the data autonomy and control provided by EWADA and Solid. However, challenges remain, especially in supporting users in exercising data autonomy, helping them navigate decentralised data governance models such as data trusts or data commons, and addressing new issues arising from genAI adoption.

Beyond the project development, our team has made significant policy impact through our involvement in the Data Bill revision and various national reports on data governance, contributing to the national conversation about the growing need for better data governance and infrastructure, especially in the context of several national emergencies and new legislative developments.

We have also seen increased leadership from our early-career researchers, who have engaged with various communities, including industry, partner projects, and open-source communities. These activities led to a flurry of successful grant applications, ranging from small bids awarded to individual researchers to larger grants for full student scholarships and follow-up research. It has been particularly pleasing to see the Open Data Institute take on stewardship of Solid, a cornerstone development for both the project and the community.

Building on our successful progress, in the coming year, our team aims to make further strides in:

  • Promoting our capability to deliver ethical computing applications to key stakeholders
  • Assessing our technical capabilities in the context of emerging genAI technologies
  • Enhancing our data accountability capabilities and their ease of use
  • Informing national and global policies on data autonomy and governance

For further information, please see all our 2024 publications and new code bases.

Sir Tim Berners-Lee speaks at Web Summit 2024

I invented the web: Here's how to make it better

by: Jun Zhao

 
12 Nov 2024

On November 12, 2024, Sir Tim Berners-Lee, together with John Bruce, the co-founder and CEO of Inrupt, held a fireside conversation at Web Summit 2024, discussing how to make the Web better.

Digital wallets are fast becoming the most compelling way to serve customers and citizens. Over 60% of the world’s population is expected to use digital wallets regularly by 2026. Hear from the inventor of the World Wide Web himself, Sir Tim Berners-Lee, on why this moment is a pivotal opportunity for businesses to embrace change, enhance privacy, and help shape the next era of the web.

The recording is available on Vimeo.

by: Jun Zhao

 
17 Oct 2024

We are very excited to share that, from October 2024, the Open Data Institute (ODI) will bring Solid into its broader data stewardship activities.

The Solid project and protocol have been a core part of EWADA’s technical development. This partnership means that the Solid protocol and its community will now become part of the ODI’s activities to promote secure, ethical data sharing and to build a more transparent, secure, and user-centric data ecosystem.

We are also delighted that our DPhil student Jesse Wright will act as the Solid Lead for this partnership, bridging the dialogue between academia, the community, and innovation.

Read more about this in the ODI’s blog post.

For more information, contact solid@theodi.org

by: Jun Zhao

 
30 Sep 2024

Led by our PI, Professor Sir Nigel Shadbolt, the EWADA team contributed to the Open Data Institute’s landmark “Five Year Strategy 2023-2028” report on trusted data infrastructure, and its newly updated policy manifesto, published in September 2024.

With the rapid advancement of AI technologies and their growing application in critical public sectors in the UK, such as healthcare and education, the need for a scalable, open, and trustworthy data ecosystem has never been greater. EWADA’s core mission is to empower individuals to take control and derive maximum value from all types of data.

This aligns closely with ODI’s latest policy manifesto, which calls for the following six principles:

  • Principle 1: Strong data infrastructure
  • Principle 2: Open data as a foundation
  • Principle 3: Building trust in data
  • Principle 4: Supporting trusted, independent organisations
  • Principle 5: Fostering a diverse, equitable, and inclusive data ecosystem
  • Principle 6: Enhancing data knowledge and skills

The cutting-edge decentralised data infrastructure and privacy-preserving AI computation capabilities developed by EWADA researchers over the past three years hold immense potential to support the new government’s ambition for national renewal. Initiatives like “Citizen-Centric Public Services” have the potential to place citizens at the heart of digital service delivery through enhanced data infrastructure for public services and the creation of a new National Data Library. By leveraging innovative technologies and fostering collaboration, EWADA is well-positioned to drive transformative change in the way public services are delivered. Together, we can ensure that data-driven solutions prioritise citizens’ needs, uphold privacy, and pave the way for a more inclusive and efficient digital future.

EWADA team members receive major UKRI research funding

Children's digital agency in the age of AI

by: Jun Zhao

 
03 Sep 2024

Professor Sir Nigel Shadbolt and Senior Researcher Dr Jun Zhao are to lead a new project, together with UCI and Oxford Philosophy, to address the pressing issue of fostering children’s digital autonomy in societies where childhood has become intricately intertwined with Artificial Intelligence (AI) systems, for instance through connected toys, apps, voice assistants, and online learning platforms.

The two-year project, CHAILD – Children’s Agency In the age of AI: Leveraging InterDisciplinarity, is funded by the first round of UKRI’s new cross research council responsive mode (CRCRM) pilot scheme. The CRCRM scheme has been developed to support emerging ideas from the research community that transcend, combine or significantly span disciplines, to ensure all forms of interdisciplinary research have a home within UKRI. This provides unique opportunities for interdisciplinary research projects like CHAILD.

Find out more about CHAILD.

Three demo papers accepted by ISWC 2024

Enabling semi-autonomous AI agents

by: Jun Zhao

 
26 Aug 2024

Three poster/demo papers led by our first-year DPhil student Jesse Wright were accepted at ISWC 2024. Many congratulations to Jesse and his collaborators!

Jesse Wright. Here’s Charlie! Realising the Semantic Web vision of Agents in the age of LLMs

This paper presents our research towards a near-term future in which legal entities, such as individuals and organisations, can entrust semi-autonomous AI-driven agents to carry out online interactions on their behalf. The author’s research concerns the development of semi-autonomous Web agents, which consult users if and only if the system does not have sufficient context or confidence to proceed autonomously. This creates a user-agent dialogue that allows the user to teach the agent about the information sources they trust, their data-sharing preferences, and their decision-making preferences. Ultimately, this enables the user to maximise control over their data and decisions while retaining the convenience of using agents, including those driven by LLMs.

In view of developing near-term solutions, the research seeks to answer the question: “How do we build a trustworthy and reliable network of semi-autonomous agents which represent individuals and organisations on the Web?”. After identifying key requirements, the paper presents a demo for a sample use case of a generic personal assistant. This is implemented using Notation3 rules to enforce safety guarantees around belief, data sharing and data usage, and LLMs to allow natural language interaction with users and serendipitous dialogues between software agents.
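To make the “consult users if and only if” control flow concrete, here is a minimal TypeScript sketch of such a loop. All names here (`Decision`, `askUser`, the approval threshold) are hypothetical illustrations, not the paper’s actual implementation:

```typescript
// Hypothetical sketch of the "consult the user only when the agent lacks
// sufficient confidence" loop described above; not the paper's actual code.

interface Decision {
  action: string;      // e.g. "share my email address with this vendor"
  confidence: number;  // 0..1, the agent's confidence it may proceed
}

interface UserPreferences {
  autoApproveThreshold: number;  // act autonomously at or above this confidence
  approvedActions: Set<string>;  // preferences learned from past dialogues
}

// Placeholder for a real user dialogue (chat UI, prompt, etc.).
async function askUser(question: string): Promise<boolean> {
  console.log(`[agent -> user] ${question}`);
  return true; // stubbed approval
}

async function step(decision: Decision, prefs: UserPreferences): Promise<void> {
  const knownSafe = prefs.approvedActions.has(decision.action);
  if (knownSafe || decision.confidence >= prefs.autoApproveThreshold) {
    console.log(`Acting autonomously: ${decision.action}`);
    return;
  }
  // Insufficient context or confidence: consult the user, and fold the answer
  // back into stored preferences so future runs need fewer questions.
  if (await askUser(`May I ${decision.action}?`)) {
    prefs.approvedActions.add(decision.action);
    console.log(`Acting with user approval: ${decision.action}`);
  } else {
    console.log(`Declined: ${decision.action}`);
  }
}

await step(
  { action: 'share my calendar with alice.example', confidence: 0.4 },
  { autoApproveThreshold: 0.8, approvedActions: new Set() },
);
```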

“Here’s Charlie!” can be found on arXiv.

Jesse Wright, Jos De Roo and Ieben Smessaert. EYE JS: A client-side reasoning engine supporting Notation3, RDF Surfaces and RDF Lingua

The Web is transitioning away from centralised services to a re-emergent decentralised platform. This movement generates demand for infrastructure that hides the complexities of decentralisation so that Web developers can easily create rich applications for the next generation of the internet.

This paper introduces EYE JS, an RDFJS-compliant TypeScript library that supports reasoning using Notation3 and RDF Surfaces from browsers and NodeJS.

By developing EYE JS, we fill a gap in existing research and infrastructure, creating a reasoning engine for the Resource Description Framework (RDF) that can reason over decentralised documents in a Web client.
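As a flavour of what client-side reasoning looks like, below is a small sketch assuming the library’s npm distribution (`eyereasoner`) and its `n3reasoner` entry point; please consult the EYE JS documentation for the exact API:

```typescript
import { n3reasoner } from 'eyereasoner';

// A small Notation3 document: one fact plus one inference rule.
const data = `
@prefix : <http://example.org/#>.
:Socrates a :Human.
{ ?x a :Human } => { ?x a :Mortal }.
`;

// Ask the reasoner for everything it can derive about :Mortal.
const query = `{ ?x a :Mortal } => { ?x a :Mortal }.`;

const derivations = await n3reasoner(data, query);
console.log(derivations); // expected output includes ":Socrates a :Mortal."
```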

Jesse Wright. N3.js Reasoner: Implementing reasoning in N3.js

In addition, Jesse had the following paper accepted by the NeXt-generation Data Governance workshop at SEMANTiCS 2024.

This paper presents a sociotechnical vision for managing personal data, including cookies, within Web browsers. We first present our vision for a future of semi-automated data governance on the Web, using policy languages to describe data terms of use, and having browsers act on behalf of users to enact policy-based controls. Then, we present an overview of the technical research required to prove that existing policy languages express a sufficient range of concepts for describing cookie policies on the Web today. We view this work as a stepping stone towards a future of semi-automated data governance at Web-scale, which in the long term will also be used by next-generation Web technologies such as Web agents and Solid.
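As a concrete (and deliberately simplified) illustration of a browser enacting policy-based controls on the user’s behalf, the TypeScript sketch below stands in for a real RDF-based policy language such as ODRL; all types and names are hypothetical:

```typescript
// Hypothetical, much-simplified stand-in for a data-terms-of-use policy;
// real proposals use RDF-based policy languages such as ODRL.

type Purpose = 'essential' | 'analytics' | 'advertising';

interface CookieRequest {
  site: string;
  purpose: Purpose;
  retentionDays: number;
}

interface UserPolicy {
  allowedPurposes: Set<Purpose>;
  maxRetentionDays: number;
}

// The browser enacts the user's policy instead of showing a consent banner.
function decide(req: CookieRequest, policy: UserPolicy): 'accept' | 'reject' {
  const ok =
    policy.allowedPurposes.has(req.purpose) &&
    req.retentionDays <= policy.maxRetentionDays;
  return ok ? 'accept' : 'reject';
}

const policy: UserPolicy = {
  allowedPurposes: new Set<Purpose>(['essential', 'analytics']),
  maxRetentionDays: 30,
};

console.log(
  decide({ site: 'example.org', purpose: 'advertising', retentionDays: 365 }, policy),
); // -> "reject"
```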

This paper can be found on arXiv.

by: Jun Zhao

 
04 Aug 2024

The paper “Trouble in Paradise? Understanding Mastodon Admin’s Motivations, Experiences, and Challenges Running Decentralised Social Media” has been accepted for publication at CSCW 2024 and will be presented in November.

Led by our second-year DPhil student Zhilin Zhang, the paper discusses the motivations, experiences, and challenges faced by administrators of the prominent decentralised social media platform, Mastodon.

Decentralised social media platforms are increasingly being recognised as viable alternatives to their centralised counterparts. Among these, Mastodon stands out as a popular alternative, offering a citizen-powered option distinct from larger and centralised platforms like Twitter/X. However, the future path of Mastodon remains uncertain, particularly in terms of its challenges and the long-term viability of a more citizen-powered internet. In this paper, following a pre-study survey, we conducted semi-structured interviews with 16 Mastodon instance administrators, including those who host instances to support marginalised and stigmatised communities, to understand their motivations and lived experiences of running decentralised social media. Our research indicates that while decentralised social media offers significant potential in supporting the safety, identity and privacy needs of marginalised and stigmatised communities, these platforms also face considerable challenges in content moderation, community building and governance. We emphasise the importance of considering the community’s values and diversity when designing future support mechanisms.

A full blog post about the paper is upcoming.

by: Jun Zhao

 
30 Jun 2024

In the academic year 2023-24, the EWADA team supervised two undergraduate students for their final-year projects: one creating a Solid-based fitness tracking application, and the other a social media prototype supporting user autonomy.

SolidFitness allows users to upload their fitness and diet tracking data to their Solid pods (currently supporting Fitbit only). They can then choose from various recommendation algorithms to receive suggestions on how to improve their diet or exercise routines. These range from simple threshold-based algorithms to age and gender-based recommendations, cluster-based algorithms, and personalised recommendations. Through this process, users gain much more transparency and control over the data used by the Fitness app, compared to what is available on the current marketplace.

Solid-based fitness app
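For readers curious about the mechanics, the sketch below shows the general shape of writing such a record to a pod using Inrupt’s solid-client libraries; the pod URL and vocabulary terms are invented for illustration and are not SolidFitness’s actual code:

```typescript
import {
  buildThing, createSolidDataset, createThing, saveSolidDatasetAt, setThing,
} from '@inrupt/solid-client';
import { fetch } from '@inrupt/solid-client-authn-browser';

// Hypothetical pod location and vocabulary; a real app would discover the
// pod from the user's WebID and use an agreed fitness vocabulary.
const DATASET_URL = 'https://alice.example.org/fitness/2024-06-30';
const NS = 'https://example.org/fitness#';

async function saveDailySteps(steps: number): Promise<void> {
  const record = buildThing(createThing({ name: 'daily-steps' }))
    .addInteger(`${NS}stepCount`, steps)
    .addDatetime(`${NS}recordedAt`, new Date())
    .build();
  // Writes go to the user's own pod, authenticated as the user.
  const dataset = setThing(createSolidDataset(), record);
  await saveSolidDatasetAt(DATASET_URL, dataset, { fetch });
}

// Assumes the user already has an authenticated Solid session.
await saveDailySteps(9321);
```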

SolidGram is based on a similar concept to Instagram but allows users to keep their posts in their own Solid pods, along with personal information such as their age, gender, interests, and browsing history. Leveraging this control over personal data, SolidGram lets users choose from various recommendation algorithms to receive social media feeds based on their interests, location, or interactions with the feed (likes or dislikes). Additionally, users can control which data is used by the algorithms to generate recommendations, giving them far greater data autonomy compared to traditional social media platforms.

Solid-based social media app
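The user-selectable recommenders can be thought of as a simple strategy pattern over pod data. The sketch below is illustrative only; the names and data shapes are hypothetical, not SolidGram’s actual code:

```typescript
// Hypothetical sketch of user-selectable recommendation strategies.

interface Post { id: string; topics: string[]; likes: number }
interface Profile { interests: string[] }

type Recommender = (posts: Post[], profile: Profile) => Post[];

const byInterest: Recommender = (posts, profile) =>
  posts.filter((p) => p.topics.some((t) => profile.interests.includes(t)));

const byPopularity: Recommender = (posts) =>
  [...posts].sort((a, b) => b.likes - a.likes);

// The user, not the platform, picks which strategy runs, and the strategy
// only ever sees the data the user agreed to expose from their pod.
const recommenders: Record<string, Recommender> = { byInterest, byPopularity };

const feed = recommenders['byInterest'](
  [
    { id: 'p1', topics: ['cycling'], likes: 3 },
    { id: 'p2', topics: ['cooking'], likes: 9 },
  ],
  { interests: ['cycling'] },
);
console.log(feed.map((p) => p.id)); // -> ["p1"]
```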

It has been amazing to see both projects through from design to end-user evaluations over two academic terms. This demonstrates the flexibility of working with the Solid toolkit and protocol to build ethical applications that align with students’ own interests. It has also been exciting to observe how user studies from both applications have shown a positive perception of better control over personal data and the ability to choose between different recommendation algorithms. We hope to extend both projects for wider deployment and testing in the coming months. Please get in touch if you would like to know more.

by: Jun Zhao

 
06 Jun 2024

On May 21 and 22, Professor Sir Nigel Shadbolt, PI of the EWADA project, gave two public lectures on AI, risks, and regulation.

On May 21, 2024, Professor Shadbolt gave the prestigious Lord Renwick Memorial Lecture, speaking on ‘As If Human: the Regulations, Governance, and Ethics of Artificial Intelligence’.

In this hour-long seminar, Nigel discussed the decades-long history of alternating enthusiasm and disillusionment for AI, as well as its more recent achievements and deployments. As we all know, these recent developments have led to renewed claims about the transformative and disruptive effects of AI. However, there is growing concern about how we regulate and govern AI systems and ensure that such systems align with human values and ethics. In this lecture, Nigel provided a review of the history and current state of the art in AI and considered how we address the challenges of regulation, governance, and ethical alignment of current and imminent AI systems.

On May 22, 2024, Nigel spoke at the Lord Mayor’s Online Lecture, ‘The Achilles’ Heel of AI: How a major tech risk to your business could be one you haven’t heard of—and what you should do’.

In this talk, Nigel discussed the critical challenge posed by the risk of model collapse. This phenomenon, where AI becomes unstable or ceases to function effectively, is a looming threat with profound implications for our reliance on this critical technology.

Model collapse stems from using AI-generated data when training or refining models rather than relying on information directly generated by human beings or devices other than the AI systems themselves. It comes about when AI models create terabytes of new data that contain little of the originality, innovation, or variety possessed by the original information used to “train” them, or when AI models are weaponised to generate misinformation, deep fakes, or “poisoned” data. A downward spiral can result in progressively degraded output, leading to model collapse. The consequences could be far-reaching, potentially resulting in financial setbacks, reputational damage and job losses.
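The downward spiral is easy to illustrate with a toy simulation: if each “generation” of a model is fitted only to samples drawn from the previous generation, the variance (a crude proxy for the variety of the data) tends to decay. The TypeScript sketch below is a toy illustration of this effect, not a claim about any particular AI system:

```typescript
// Toy illustration of model collapse: each "generation" is fitted only to
// samples from the previous generation's model, and diversity (the standard
// deviation) decays over time. Illustrative only.

function gaussian(mean: number, sd: number): number {
  // Box-Muller transform
  const u = Math.random() || 1e-12;
  const v = Math.random();
  return mean + sd * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

let mean = 0;
let sd = 1;   // the "original" human data has sd = 1
const n = 20; // small training sets accelerate the collapse

for (let gen = 0; gen <= 200; gen++) {
  if (gen % 50 === 0) console.log(`generation ${gen}: sd = ${sd.toFixed(4)}`);
  const samples = Array.from({ length: n }, () => gaussian(mean, sd));
  mean = samples.reduce((a, b) => a + b, 0) / n;
  const variance = samples.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
  sd = Math.sqrt(variance); // refit, then sample from the refitted model
}
// Typical output: sd drifts from 1.0 towards 0 as generations accumulate.
```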

In this talk, Nigel dived into this little-known risk, drawing on insights from his research and that of others, exploring how the quality and provenance of data are too often overlooked in business decisions about the implementation and use of AI tools. Yet data plays a pivotal role in determining these systems’ reliability, effectiveness, and value to the bottom line.

At the end of the talk, Nigel also talked about potential solutions for mitigating model collapse and outlined a roadmap for businesses to foster a strong data infrastructure on which to base their AI strategies. These strategies provide powerful knowledge, understanding, and tools for us to navigate the complexities of this new frontier of technology safely and effectively.

by: Jun Zhao

 
17 May 2024

In today’s digital age, social media has emerged as a ubiquitous platform for children worldwide to socialise, be entertained and learn. Recent studies show that 38% of US and 42% of UK kids aged 5-13 use these platforms, despite the common minimum age restriction of 13 set by social media companies for account registration.

However, amidst the plethora of legislative discussions, a crucial concern often remains overlooked: the pervasive data harvesting practices that underpin social media platforms and their potential to undermine children’s autonomy. It is for this reason that Computer Science researchers working on the Oxford Martin Programme on Ethical Web and Data Architectures developed CHAITok, an innovative Android mobile app designed to empower children with greater control and autonomy over their data on social media.

When individuals interact on social media, they produce vast data streams that platform owners harvest. This process, often referred to as “datafication”, involves recording, tracking, aggregating, analysing, and capitalising on users’ data. This is the practice that essentially empowers social media giants to predict and influence children’s personal attributes, behaviours, and preferences. This then shapes their online engagement and content choices, contributing to increased dependence on these platforms and potentially shaping how children view and engage with the world while they are in vital stages of cognitive and emotional development.

The recent UK Online Safety Act is a pioneering move to address this outstanding challenge. However, while we regulate and enforce changes to the current platform-driven digital ecosystem, it is also a critical time to put children’s voices at the heart of our design and innovation, respecting their needs and how they would like to be supported and equipped with better digital resilience and autonomy.

CHAITok’s interface is similar to TikTok’s, but while children browse video recommendations, they have many opportunities to control what data is used by CHAITok and to keep all their data (including interaction data, personal preferences, etc.) safe in their own personal data store.

It offers three distinctive features:

  • Respecting children’s values: CHAITok prioritises children’s values and preferences; we carried out extensive co-design activities with 50 children [1] to inform our design, ensuring that CHAITok reflects children’s values around having better autonomy and agency over their digital footprint.
  • Supporting evolving autonomy: Grounded in our theoretical understanding that children’s autonomy involves cognitive, behavioural and emotional dimensions, and that it develops as an evolving process throughout childhood, CHAITok provides tools and resources for children to develop their sense of autonomy from multiple aspects in an age-appropriate way, supporting their journey towards greater autonomy in navigating the digital landscape.
  • Actively fostering autonomy instead of focusing on minimising harms: CHAITok advocates for children’s digital rights and emphasises the importance of respecting their privacy and autonomy in online interactions. Unlike existing approaches, our design takes a proactive approach that explicitly nudges, prompts and scaffolds children’s critical thinking, action taking and reflection.

Our 27 user study sessions involving 109 children aged 10–13 gave us deep insight into children’s current experiences and perceptions of social media platforms:

  • Almost all of these children felt a lack of autonomy (‘don’t have autonomy at all’) over their data.
  • One in three children described their experience with data on social media platforms as quite passive, and often felt they were ‘being tricked’ by these platforms.
  • About a third found it hard to disengage from these platforms, some reported sleep issues when using phones before bedtime, and many felt ‘helpless’ to resist them.

After interacting with our app prototype in hour-long group sessions at their schools, most children felt safer, more empowered, and more respected. This provides encouraging results for our research into helping children overcome the difficulties associated with feeling unsupported and unconfident.

Our results contrast with the common perception that children are incapable of making autonomous decisions. They provide critical input for reflecting on the current ethics principles for creating AI technologies for children, and highlight an urgent need to further explore wider mechanisms for fostering autonomy in children’s digital lives.

We look forward to continuing our exploration of how we might deploy CHAITok as an app in the wild, to provide an alternative social media experience for children in a safer and more autonomy-respecting environment.

Read the paper, ‘CHAITok: A Proof-of-Concept System Supporting Children’s Sense of Data Autonomy on Social Media’.

For further information, see the Oxford Child-Centred AI (Oxford CCAI) Design Lab.
