Sir Tim Berners-Lee speaks at Web Summit 2024

I invented the web: Here's how to make it better

by: Jun Zhao

 
12 Nov 2024

On November 12, 2024, Sir Tim Berners-Lee joined John Bruce, co-founder and CEO of Inrupt, for a fireside conversation at Web Summit 2024 about how to make the Web better.

Digital wallets are fast becoming the most compelling way to serve customers and citizens. Over 60% of the world’s population is expected to use digital wallets regularly by 2026. Hear from the inventor of the World Wide Web himself, Sir Tim Berners-Lee, on why this moment is a pivotal opportunity for businesses to embrace change, enhance privacy, and help shape the next era of the web.

The recording is available on Vimeo.

by: Jun Zhao

 
17 Oct 2024

We are very excited to share that from October 2024, the ODI will bring Solid into its broader data stewardship activities.

The Solid project and protocol have been a core part of EWADA’s technical development. Under this partnership, the Solid protocol and its community become part of the ODI’s activities to promote secure, ethical data sharing and to build a more transparent, secure, and user-centric data ecosystem.

We are also very pleased that our DPhil student Jesse Wright will act as the Solid Lead for this partnership, bridging the dialogue between academia, community, and innovation.

Read more about this in the ODI’s blog post.

For more information, contact solid@theodi.org

by: Jun Zhao

 
30 Sep 2024

Led by our PI, Professor Sir Nigel Shadbolt, the EWADA team contributed to the Open Data Institute’s landmark “Five-Year Strategy 2023-2028” report on a trusted data infrastructure, and to its newly updated policy manifesto, published in September 2024.

With the rapid advancement of AI technologies and their growing application in critical public sectors in the UK, such as healthcare and education, the need for a scalable, open, and trustworthy data ecosystem has never been greater. EWADA’s core mission is to empower individuals to take control and derive maximum value from all types of data.

This aligns closely with ODI’s latest policy manifesto, which calls for the following six principles:

  • Principle 1: Strong data infrastructure
  • Principle 2: Open data as a foundation
  • Principle 3: Building trust in data
  • Principle 4: Supporting trusted, independent organisations
  • Principle 5: Fostering a diverse, equitable, and inclusive data ecosystem
  • Principle 6: Enhancing data knowledge and skills

The cutting-edge decentralised data infrastructure and privacy-preserving AI computation capabilities developed by EWADA researchers over the past three years hold immense potential to support the new government’s ambition for national renewal. Initiatives like “Citizen-Centric Public Services” can place citizens at the heart of digital service delivery through enhanced data infrastructure for public services and the creation of a new National Data Library. By leveraging innovative technologies and fostering collaboration, EWADA is well positioned to drive transformative change in how public services are delivered. Together, we can ensure that data-driven solutions prioritise citizens’ needs, uphold privacy, and pave the way for a more inclusive and efficient digital future.

EWADA team members receive large UKRI research funding

Children's digital agency in the age of AI

by: Jun Zhao

 
03 Sep 2024

Professor Sir Nigel Shadbolt and Senior Researcher Dr Jun Zhao are to lead a new project, together with UCI and Oxford Philosophy, addressing the pressing issue of fostering children’s digital autonomy in societies where childhood has become intricately intertwined with Artificial Intelligence (AI) systems, for instance through connected toys, apps, voice assistants, and online learning platforms.

The two-year project, CHAILD – Children’s Agency In the age of AI: Leveraging InterDisciplinarity, is funded by the first round of UKRI’s new cross research council responsive mode (CRCRM) pilot scheme. The CRCRM scheme has been developed to support emerging ideas from the research community that transcend, combine or significantly span disciplines, to ensure all forms of interdisciplinary research have a home within UKRI. This provides unique opportunities for interdisciplinary research projects like CHAILD.

Find out more about CHAILD.

Three demo papers accepted by ISWC 2024

Enabling semi-autonomous AI agents

by: Jun Zhao

 
26 Aug 2024

Three poster/demo papers led by our first-year DPhil student Jesse Wright were accepted at ISWC 2024. Many congratulations to Jesse and his collaborators!

Jesse Wright. Here’s Charlie! Realising the Semantic Web vision of Agents in the age of LLMs

This paper presents our research towards a near-term future in which legal entities, such as individuals and organisations, can entrust semi-autonomous AI-driven agents to carry out online interactions on their behalf. The author’s research concerns the development of semi-autonomous Web agents, which consult users if and only if the system does not have sufficient context or confidence to proceed autonomously. This creates a user-agent dialogue that allows the user to teach the agent about the information sources they trust, their data-sharing preferences, and their decision-making preferences. Ultimately, this enables the user to maximise control over their data and decisions while retaining the convenience of using agents, including those driven by LLMs.

With a view to developing near-term solutions, the research seeks to answer the question: “How do we build a trustworthy and reliable network of semi-autonomous agents which represent individuals and organisations on the Web?”. After identifying key requirements, the paper presents a demo for a sample use case of a generic personal assistant. This is implemented using Notation3 (N3) rules to enforce safety guarantees around belief, data sharing and data usage, and LLMs to allow natural language interaction with users and serendipitous dialogues between software agents.
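
To illustrate the consult-only-when-uncertain pattern described above, here is a minimal TypeScript sketch; the types, threshold, and function names are our own illustrative assumptions, not the paper’s implementation:

```typescript
// Hypothetical sketch of a semi-autonomous agent loop: act autonomously when
// a proposed action passes policy checks with high confidence, otherwise
// pause and ask the user. Names and thresholds are illustrative only.

interface ProposedAction {
  description: string;
  confidence: number;        // model's confidence in [0, 1]
  policyApproved: boolean;   // e.g. the outcome of rule-based safety checks
}

type Decision = { kind: 'execute' } | { kind: 'ask-user'; question: string };

const CONFIDENCE_THRESHOLD = 0.9; // assumed cut-off, tunable per user

function decide(action: ProposedAction): Decision {
  // Safety rules always win: a policy violation never executes silently.
  if (!action.policyApproved) {
    return { kind: 'ask-user', question: `May I: ${action.description}?` };
  }
  // Consult the user if and only if confidence is insufficient.
  return action.confidence >= CONFIDENCE_THRESHOLD
    ? { kind: 'execute' }
    : { kind: 'ask-user', question: `I'm unsure about: ${action.description}. Proceed?` };
}

// Example: sharing a calendar entry with a third party at moderate confidence.
console.log(decide({
  description: 'share calendar with alice.example',
  confidence: 0.72,
  policyApproved: true,
})); // { kind: 'ask-user', ... }
```

Each user answer can then be fed back as a new rule or preference, which is how the user-agent dialogue gradually teaches the agent.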

“Here’s Charlie!” can be found on arXiv.

Jesse Wright, Jos De Roo and Ieben Smessaert. EYE JS: A client-side reasoning engine supporting Notation3, RDF Surfaces and RDF Lingua

The Web is transitioning away from centralised services to a re-emergent decentralised platform. This movement generates demand for infrastructure that hides the complexities of decentralisation so that Web developers can easily create rich applications for the next generation of the internet.

This paper introduces EYE JS, an RDFJS-compliant TypeScript library that supports reasoning using Notation3 and RDF Surfaces from browsers and NodeJS.

By developing EYE JS, we fill a gap in existing research and infrastructure, creating a reasoning engine for the Resource Description Framework (RDF) that can reason over decentralised documents in a Web client.
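
For a flavour of how this looks in practice, here is a minimal usage sketch assuming the eyereasoner npm package (the published form of EYE JS) and its documented n3reasoner entry point; check the package README before relying on the exact signature:

```typescript
// Sketch assuming the eyereasoner npm package (EYE JS). The n3reasoner
// entry point and its (data, query) signature follow the package's
// documented usage; verify against the README before relying on them.
import { n3reasoner } from 'eyereasoner';

const data = `
@prefix : <http://example.org/#>.
:Socrates a :Human.
{ ?s a :Human } => { ?s a :Mortal }.
`;

// In EYE, a query is a rule whose conclusion selects what to output.
const query = `
@prefix : <http://example.org/#>.
{ ?s a :Mortal } => { ?s a :Mortal }.
`;

async function main(): Promise<void> {
  // Runs entirely client-side (browser or Node); no reasoning server needed.
  const result = await n3reasoner(data, query);
  console.log(result); // expected to include ":Socrates a :Mortal."
}

main().catch(console.error);
```

Because the engine runs in the Web client, applications can reason over documents fetched from many decentralised sources without shipping personal data to a server.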

Jesse Wright. N3.js Reasoner: Implementing reasoning in N3.js

In addition, Jesse had the following paper accepted by the NeXt-generation Data Governance workshop at SEMANTICS 2024.

This paper presents a sociotechnical vision for managing personal data, including cookies, within Web browsers. We first present our vision for a future of semi-automated data governance on the Web, using policy languages to describe data terms of use, and having browsers act on behalf of users to enact policy-based controls. Then, we present an overview of the technical research required to prove that existing policy languages express a sufficient range of concepts for describing cookie policies on the Web today. We view this work as a stepping stone towards a future of semi-automated data governance at Web-scale, which in the long term will also be used by next-generation Web technologies such as Web agents and Solid.
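
As a hedged illustration of what “browsers acting on behalf of users” could mean in code, the TypeScript sketch below checks a cookie’s declared terms of use against a user’s standing policy; the vocabulary and types are invented for exposition and are not the paper’s proposal:

```typescript
// Hypothetical browser-side check: compare a cookie's declared terms of use
// against the user's standing policy before allowing it to be set.
// The purpose vocabulary and policy shape below are invented for illustration.

type Purpose = 'strictly-necessary' | 'analytics' | 'advertising' | 'personalisation';

interface CookieTerms {
  name: string;
  purpose: Purpose;
  retentionDays: number;
}

interface UserPolicy {
  allowedPurposes: Purpose[];
  maxRetentionDays: number;
}

type Verdict = 'allow' | 'deny' | 'ask-user';

function evaluateCookie(terms: CookieTerms, policy: UserPolicy): Verdict {
  if (!policy.allowedPurposes.includes(terms.purpose)) return 'deny';
  // Borderline cases are escalated to the user rather than silently decided.
  if (terms.retentionDays > policy.maxRetentionDays) return 'ask-user';
  return 'allow';
}

// Example: an analytics cookie kept for a year, against a privacy-leaning policy.
const verdict = evaluateCookie(
  { name: '_metrics', purpose: 'analytics', retentionDays: 365 },
  { allowedPurposes: ['strictly-necessary', 'analytics'], maxRetentionDays: 90 },
);
console.log(verdict); // "ask-user"
```

A real deployment would express the policy in a standard policy language rather than ad-hoc types, which is exactly the expressiveness question the paper investigates.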

This paper can be found on arXiv.

by: Jun Zhao

 
04 Aug 2024

The paper “Trouble in Paradise? Understanding Mastodon Admin’s Motivations, Experiences, and Challenges Running Decentralised Social Media” has been accepted for publication at CSCW 2024 and will be presented in November.

Led by our second-year DPhil student Zhilin Zhang, the paper discusses the motivations, experiences, and challenges faced by administrators of the prominent decentralised social media platform Mastodon.

Decentralised social media platforms are increasingly being recognised as viable alternatives to their centralised counterparts. Among these, Mastodon stands out as a popular alternative, offering a citizen-powered option distinct from larger and centralised platforms like Twitter/X. However, the future path of Mastodon remains uncertain, particularly in terms of its challenges and the long-term viability of a more citizen-powered internet. In this paper, following a pre-study survey, we conducted semi-structured interviews with 16 Mastodon instance administrators, including those who host instances to support marginalised and stigmatised communities, to understand their motivations and lived experiences of running decentralised social media. Our research indicates that while decentralised social media offers significant potential in supporting the safety, identity and privacy needs of marginalised and stigmatised communities, they also face considerable challenges in content moderation, community building and governance. We emphasise the importance of considering the community’s values and diversity when designing future support mechanisms.

A full blog post about the paper is upcoming.

by: Jun Zhao

 
30 Jun 2024

In the academic year 2023-24, the EWADA team supervised two undergraduate students for their final-year projects: one creating a Solid-based fitness tracking application, and the other an autonomous social media prototype.

SolidFitness allows users to upload their fitness and diet tracking data to their Solid pods (currently supporting Fitbit only). They can then choose from various recommendation algorithms to receive suggestions on how to improve their diet or exercise routines. These range from simple threshold-based algorithms to age- and gender-based recommendations, cluster-based algorithms, and personalised recommendations. Through this process, users gain far more transparency and control over the data used by the fitness app than is available in the current marketplace.
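
As a rough sketch of the underlying pattern, the snippet below reads fitness records from a user’s pod using Inrupt’s solid-client libraries and applies a simple threshold rule; the pod resource URL and the step-count predicate are hypothetical, and the real application’s data model may differ:

```typescript
// Minimal sketch of the read-and-recommend pattern behind SolidFitness,
// using Inrupt's solid-client libraries. The step-count predicate is a
// hypothetical vocabulary term chosen for illustration.
import { getSolidDataset, getThingAll, getInteger } from '@inrupt/solid-client';
import { fetch } from '@inrupt/solid-client-authn-browser';

const STEPS = 'https://example.org/fitness#stepCount'; // hypothetical predicate

async function thresholdRecommendation(podResource: string): Promise<string> {
  // Read the user's fitness records from their own pod, with their consent.
  const dataset = await getSolidDataset(podResource, { fetch });
  const days = getThingAll(dataset);
  const avgSteps =
    days.reduce((sum, day) => sum + (getInteger(day, STEPS) ?? 0), 0) /
    Math.max(days.length, 1);

  // Simple threshold-based algorithm; the app also offers age/gender-based,
  // cluster-based, and personalised alternatives the user can switch between.
  return avgSteps < 8000
    ? 'Try adding a short daily walk to reach 8,000 steps.'
    : 'Nice work: keep up your current activity level.';
}

// Usage: thresholdRecommendation('https://alice.example/pod/fitness/2024');
```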

Solid-based fitness app

SolidGram is based on a similar concept to Instagram but allows users to keep their posts in their own Solid pods, along with personal information such as their age, gender, interests, and browsing history. Leveraging this control over personal data, SolidGram lets users choose from various recommendation algorithms to receive social media feeds based on their interests, location, or interactions with the feed (likes or dislikes). Additionally, users can control which data is used by the algorithms to generate recommendations, giving them far greater data autonomy compared to traditional social media platforms.
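
The algorithm-switching idea can be captured by a small pluggable interface, sketched below; all names are illustrative assumptions rather than SolidGram’s actual code:

```typescript
// Illustrative plug-in interface for user-selectable feed ranking. The user
// decides both which recommender runs and which of their pod-held signals
// it may read; signals the user withholds simply never reach the algorithm.

interface Post { id: string; tags: string[]; timestamp: number }

interface FeedSignals {
  interests?: string[];   // only present if the user chooses to share them
  likedTags?: string[];
}

interface Recommender {
  name: string;
  rank(posts: Post[], signals: FeedSignals): Post[];
}

const byInterest: Recommender = {
  name: 'interest-based',
  rank: (posts, signals) => {
    const score = (p: Post) =>
      p.tags.filter(t => (signals.interests ?? []).includes(t)).length;
    return [...posts].sort((a, b) => score(b) - score(a));
  },
};

const chronological: Recommender = {
  name: 'chronological',
  rank: posts => [...posts].sort((a, b) => b.timestamp - a.timestamp), // uses no personal data
};

function buildFeed(posts: Post[], choice: Recommender, shared: FeedSignals): Post[] {
  return choice.rank(posts, shared);
}

// Example: the user opts into interest-based ranking, sharing only interests.
const feed = buildFeed(
  [{ id: 'p1', tags: ['cycling'], timestamp: 2 }, { id: 'p2', tags: ['cooking'], timestamp: 1 }],
  byInterest,
  { interests: ['cooking'] },
);
console.log(feed.map(p => p.id)); // ["p2", "p1"]
```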

Solid-based social media app

It has been amazing to see both projects through from design to end-user evaluations over two academic terms. This demonstrates the flexibility of working with the Solid toolkit and protocol to build ethical applications that align with students’ own interests. It has also been exciting to observe how user studies from both applications have shown a positive perception of better control over personal data and the ability to choose between different recommendation algorithms. We hope to extend both projects for wider deployment and testing in the coming months. Please get in touch if you would like to know more.

by: Jun Zhao

 
06 Jun 2024

On May 21 and 22, Professor Sir Nigel Shadbolt, PI of the EWADA project, gave two public lectures about AI, risks and regulation.

On May 21, 2024, Nigel gave the prestigious Lord Renwick Memorial Lecture, speaking on ‘As If Human: the Regulations, Governance, and Ethics of Artificial Intelligence’.

In this hour-long seminar, Nigel discussed the decades-long history of alternating enthusiasm and disillusionment for AI, as well as its more recent achievements and deployments. As we all know, these recent developments have led to renewed claims about the transformative and disruptive effects of AI. However, there is growing concern about how we regulate and govern AI systems and ensure that such systems align with human values and ethics. In this lecture, Nigel provided a review of the history and current state of the art in AI and considered how we address the challenges of regulation, governance, and ethical alignment of current and imminent AI systems.

On May 22, 2024, Nigel gave the Lord Mayor’s Online Lecture, ‘The Achilles’ Heel of AI: How a major tech risk to your business could be one you haven’t heard of—and what you should do’.

In this talk, Nigel discussed the critical risk of model collapse. This phenomenon, where AI becomes unstable or ceases to function effectively, is a looming threat with profound implications for our reliance on this critical technology.

Model collapse stems from using AI-generated data when training or refining models, rather than relying on information generated directly by human beings or by devices other than the AI systems themselves. It comes about when AI models create terabytes of new data that contain little of the originality, innovation, or variety possessed by the original information used to “train” them, or when AI models are weaponised to generate misinformation, deep fakes, or “poisoned” data. A downward spiral can result in progressively degraded output, leading to model collapse. The consequences could be far-reaching, potentially resulting in financial setbacks, reputational damage and job losses.
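
The feedback loop behind this downward spiral can be seen in a toy simulation: repeatedly refit a simple statistical model to samples drawn from the previous generation of itself, and watch the spread of its output decay. This is a didactic sketch of the general mechanism, not an analysis of any particular system:

```typescript
// Toy illustration of model collapse: each "generation" fits a Gaussian to a
// finite sample of the previous generation's output. With no fresh human data,
// estimation error compounds and the distribution's spread tends to decay.

function gaussianSample(mean: number, std: number): number {
  // Box-Muller transform; (1 - random) avoids log(0).
  const u = 1 - Math.random();
  const v = Math.random();
  return mean + std * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

let mean = 0;
let std = 1; // generation 0: the original "human" data distribution
for (let gen = 1; gen <= 20; gen++) {
  // Each generation is trained only on 50 samples of its predecessor's output.
  const samples = Array.from({ length: 50 }, () => gaussianSample(mean, std));
  mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const variance =
    samples.reduce((a, b) => a + (b - mean) ** 2, 0) / samples.length;
  std = Math.sqrt(variance);
  console.log(`gen ${gen}: mean=${mean.toFixed(3)} std=${std.toFixed(3)}`);
}
// Over many generations the estimated spread drifts downward on average,
// a toy analogue of models losing the variety of the original data.
```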

In this talk, Nigel dived into this little-known risk, drawing on insights from his research and that of others, and explored how the quality and provenance of data are too often overlooked in business decisions about the implementation and use of AI tools. Yet data plays a pivotal role in determining these systems’ reliability, effectiveness, and value to the bottom line.

At the end of the talk, Nigel also talked about potential solutions for mitigating model collapse and outlined a roadmap for businesses to foster a strong data infrastructure on which to base their AI strategies. These strategies provide powerful knowledge, understanding, and tools for us to navigate the complexities of this new frontier of technology safely and effectively.

by: Jun Zhao

 
17 May 2024

In today’s digital age, social media has emerged as a ubiquitous platform for children worldwide to socialise, be entertained and learn. Recent studies show that 38% of US and 42% of UK kids aged 5-13 use these platforms, despite the common minimum age restriction of 13 set by social media companies for account registration.

However, amidst the plethora of legislative discussions, a crucial concern often remains overlooked: the pervasive data-harvesting practices that underpin social media platforms and their potential to undermine children’s autonomy. It is for this reason that Computer Science researchers working on the Oxford Martin Programme on Ethical Web and Data Architectures developed CHAITok, an innovative Android mobile app designed to empower children with greater control and autonomy over their data on social media.

When individuals interact on social media, they produce vast data streams that platform owners harvest. This process, often referred to as “datafication”, involves recording, tracking, aggregating, analysing, and capitalising on users’ data. This is the practice that essentially empowers social media giants to predict and influence children’s personal attributes, behaviours, and preferences. This then shapes their online engagement and content choices, contributing to increased dependence on these platforms and potentially shaping how children view and engage with the world while they are in vital stages of cognitive and emotional development.

The recent UK Online Safety Act is a pioneering move to address this outstanding challenge. However, while we regulate and enforce changes to the current platform-driven digital ecosystem, it is crucial that we also put children’s voices at the heart of our design and innovation, respecting their needs and how they would like to be supported and equipped with better digital resilience and autonomy.

CHAITok’s interface is similar to TikTok’s, but while children browse video recommendations, they have many opportunities to control what data is used by CHAITok and to keep all their data (including interaction data, personal preferences, etc.) safe in their own personal data store.
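
One way to picture this architecture is as a consent gate consulted before any interaction data is stored, with the settings living in the child’s own data store; the schema below is our illustrative assumption, not CHAITok’s actual design:

```typescript
// Illustrative consent gate: interaction data is recorded to the child's
// personal data store only for categories the child has switched on.
// The category names and types are invented for exposition.

type DataCategory = 'watch-history' | 'likes' | 'search-terms' | 'session-times';

interface DataUseSettings {
  // Read from (and stored in) the child's own personal data store.
  enabled: Record<DataCategory, boolean>;
}

interface InteractionEvent {
  category: DataCategory;
  payload: unknown;
}

function recordIfPermitted(
  event: InteractionEvent,
  settings: DataUseSettings,
  store: (e: InteractionEvent) => void,
): boolean {
  if (!settings.enabled[event.category]) return false; // dropped, never stored
  store(event); // persisted under the child's control, not the platform's
  return true;
}
```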

It offers three distinctive features:

  • Respecting children’s values: CHAITok prioritises children’s values and preferences; we carried out extensive co-design activities with 50 children [1] to inform our design, ensuring that CHAITok reflects children’s values for better autonomy and agency over their digital footprint.
  • Supporting evolving autonomy: Grounded in our theoretical understanding that children’s autonomy involves cognitive, behavioural and emotional dimensions, and that its development is an evolving process throughout childhood, CHAITok provides tools and resources for children to develop their sense of autonomy from multiple aspects in an age-appropriate way, supporting their journey towards greater autonomy in navigating the digital landscape.
  • Actively fostering autonomy instead of focusing on minimising harms: CHAITok advocates for children’s digital rights and emphasises the importance of respecting their privacy and autonomy in online interactions. Unlike existing approaches, we took a proactive approach in our design to explicitly nudge, prompt and scaffold children’s critical thinking, action taking and reflection.

Our 27 user study sessions involving 109 children aged 10-13 gave us deep insight into children’s current experiences and perceptions of social media platforms:

  • Almost all of these children felt a lack of autonomy over their data (‘don’t have autonomy at all’).
  • One in three children described their experience with data on social media platforms as quite passive, and often felt they were ‘being tricked’ by these platforms.
  • About a third found it hard to disengage from these platforms, and some even reported sleep issues when using phones before bedtime; many of them felt ‘helpless’ to resist these platforms.

After interacting with our app prototype in groups for about one hour at their schools, most children felt safer, more empowered, and more respected. This is an encouraging result for our research into helping children overcome the difficulties associated with feeling unsupported and unconfident.

Our results contrast with the common perception that children are incapable of making autonomous decisions. They provide critical input for reflecting on the current ethics principles for creating AI technologies for children, and highlight an urgent need to further explore wider mechanisms for incorporating autonomy fostering into children’s digital lives.

We look forward to continuing our exploration of how we may deploy CHAITok as an app in the wild, to provide an alternative social media experience for children in a safer and more autonomy-respecting environment.

Read the paper, ‘CHAITok: A Proof-of-Concept System Supporting Children’s Sense of Data Autonomy on Social Media’.

For further information, see the Oxford Child-Centred AI (Oxford CCAI) Design Lab.


by: Jun Zhao

 
16 May 2024

In today’s digital age, children are growing up surrounded by technology, with their online activities often tracked, analysed, and monetised. While the digital landscape offers countless opportunities for learning and exploration, it also exposes children to a myriad of datafication risks, including harmful profiling, micro-targeting, and behavioural manipulation.

It is for this reason that Computer Science researchers working on the Oxford Martin Programme on Ethical Web and Data Architectures developed the KOALA Hero Toolkit. It has been co-developed with families and children by Oxford researchers over several years, in response to increasing concerns from families about the risks associated with extensive use of digital technologies.

Digital monitoring-based technologies, which enable parents to restrict, monitor or track children’s online activities, dominate the market. Popular apps such as Life 360, Google Family Link, Apple Maps, Qustodio, and Apple Screen Time are widespread. According to an Ofcom report, 70% of UK parents with children aged 3-17 have used technology to control their child’s access to online content. A similar picture emerges in the US, where 86% of parents with children aged 5-11 report restricting when and for how long kids can use screens, and 72% use parental controls to restrict how much their child uses screens.

Research has shown that such approaches have limited efficacy in keeping children within safe boundaries online or reducing screen time. At the same time, the risks associated with these approaches are much less discussed, such as their potential to undermine trust within families or to hinder the development of children’s self-regulation skills. With modern families increasingly struggling with their children’s relationship with digital technologies, and lacking clear and effective guidance, new approaches are urgently needed.

The KOALA Hero toolkit has several key features:

  • Promote family awareness: By giving families insight into datafication risks, i.e. how children’s data may be collected, processed, and used to affect what they see online, the toolkit empowers families to make informed decisions about their online activities.
  • Support interactive learning: Through both digital and physical components, and the provision of interactive activities and discussion sheets, the toolkit facilitates meaningful conversations between children and parents, fostering a deeper understanding of digital privacy and ethics.
  • Encourage family engagement: By providing worksheets that guide conversations and interactions with the toolkit, with both children and parents involved in the learning process, the toolkit strengthens familial bonds and promotes collaborative problem-solving.

We assessed the toolkit with 17 families, involving 23 children aged 10-14. We found that families developed better awareness of the implications of datafication, compared with their prior understanding. The toolkit also enabled families to feel more equipped to discuss datafication risks and to have more balanced, joint family conversations.

These findings provide positive indications for our approach of encouraging proactive family engagement, instead of focusing on controls and monitoring. We hope to improve the toolkit and work with a larger sample through a longer-term study before sharing the toolkit on popular app stores.

Read the paper, ‘KOALA Hero Toolkit: A New Approach to Inform Families of Mobile Datafication Risks’.

For further information, see the Oxford Child-Centred AI (Oxford CCAI) Design Lab.
