[NEW PAPER] Three demo papers accepted by ISWC 2024

Enabling semi-autonomous AI agents

by: Jun Zhao

 
26 Aug 2024

Three poster/demo papers, led by our first-year DPhil student Jesse Wright, have been accepted by ISWC 2024. Many congratulations to Jesse and his collaborators!

Jesse Wright. Here’s Charlie! Realising the Semantic Web vision of Agents in the age of LLMs

This paper presents our research towards a near-term future in which legal entities, such as individuals and organisations, can entrust semi-autonomous AI-driven agents to carry out online interactions on their behalf. The author’s research concerns the development of semi-autonomous Web agents, which consult users if and only if the system does not have sufficient context or confidence to proceed autonomously. This creates a user-agent dialogue that allows the user to teach the agent about the information sources they trust, their data-sharing preferences, and their decision-making preferences. Ultimately, this enables the user to maximise control over their data and decisions while retaining the convenience of using agents, including those driven by LLMs.

With a view to developing near-term solutions, the research seeks to answer the question: “How do we build a trustworthy and reliable network of semi-autonomous agents which represent individuals and organisations on the Web?”. After identifying key requirements, the paper presents a demo for a sample use case of a generic personal assistant. This is implemented using Notation3 rules to enforce safety guarantees around belief, data sharing and data usage, and LLMs to allow natural language interaction with users and serendipitous dialogues between software agents.
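To make the agent’s “consult the user only when necessary” behaviour concrete, here is a minimal TypeScript sketch of such a decision loop. All names are invented for illustration; this is not code from the paper:

```typescript
// All names here are illustrative, not taken from the paper's implementation.
interface Decision {
  action: string;
  confidence: number; // the agent's confidence in this proposal, in [0, 1]
}

interface UserPreferences {
  confidenceThreshold: number; // below this, the agent defers to the user
}

// Consult the user if and only if the agent lacks sufficient confidence.
async function act(
  proposeAction: () => Promise<Decision>,
  askUser: (proposal: Decision) => Promise<Decision>,
  prefs: UserPreferences
): Promise<Decision> {
  const proposal = await proposeAction();
  if (proposal.confidence >= prefs.confidenceThreshold) {
    return proposal; // sufficient confidence: proceed autonomously
  }
  return askUser(proposal); // otherwise, open a user-agent dialogue
}
```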

The Here’s Charlie! paper can be found on arXiv.

Jesse Wright, Jos De Roo and Ieben Smessaert. EYE JS: A client-side reasoning engine supporting Notation3, RDF Surfaces and RDF Lingua

The Web is transitioning away from centralised services to a re-emergent decentralised platform. This movement generates demand for infrastructure that hides the complexities of decentralisation so that Web developers can easily create rich applications for the next generation of the internet.

This paper introduces EYE JS, an RDFJS-compliant TypeScript library that supports reasoning using Notation3 and RDF Surfaces from browsers and Node.js.

By developing EYE JS, we fill a gap in existing research and infrastructure, creating a reasoning engine for the Resource Description Framework (RDF) that can reason over decentralised documents in a Web client.
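For a flavour of what client-side reasoning looks like, the sketch below assumes the n3reasoner entry point of EYE JS’s npm distribution (the eyereasoner package); check the package documentation for the exact signature:

```typescript
import { n3reasoner } from 'eyereasoner';

// Facts plus a Notation3 rule: every :Person is also an :Agent.
const data = `
@prefix : <http://example.org/#>.
:Alice a :Person.
{ ?x a :Person } => { ?x a :Agent }.
`;

// A query pattern asking for the derived :Agent triples.
const query = `
@prefix : <http://example.org/#>.
{ ?x a :Agent } => { ?x a :Agent }.
`;

const result = await n3reasoner(data, query);
console.log(result); // expected to include ":Alice a :Agent."
```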

Jesse Wright. N3.js Reasoner: Implementing reasoning in N3.js

In addition, Jesse had the following paper accepted at the NeXt-generation Data Governance workshop at SEMANTiCS 2024.

This paper presents a sociotechnical vision for managing personal data, including cookies, within Web browsers. We first present our vision for a future of semi-automated data governance on the Web, using policy languages to describe data terms of use, and having browsers act on behalf of users to enact policy-based controls. Then, we present an overview of the technical research required to prove that existing policy languages express a sufficient range of concepts for describing cookie policies on the Web today. We view this work as a stepping stone towards a future of semi-automated data governance at Web-scale, which in the long term will also be used by next-generation Web technologies such as Web agents and Solid.

This paper can be found on arXiv.

by: Jun Zhao

 
04 Aug 2024

The paper “Trouble in Paradise? Understanding Mastodon Admin’s Motivations, Experiences, and Challenges Running Decentralised Social Media” has been accepted for publication by CSCW 2024 and will be presented in November.

Led by our second-year DPhil student Zhilin Zhang, the paper discusses the motivations, experiences, and challenges faced by administrators of the prominent decentralised social media platform, Mastodon.

Decentralised social media platforms are increasingly being recognised as viable alternatives to their centralised counterparts. Among these, Mastodon stands out as a popular alternative, offering a citizen-powered option distinct from larger, centralised platforms like Twitter/X. However, the future path of Mastodon remains uncertain, particularly in terms of its challenges and the long-term viability of a more citizen-powered internet. In this paper, following a pre-study survey, we conducted semi-structured interviews with 16 Mastodon instance administrators, including those who host instances to support marginalised and stigmatised communities, to understand their motivations and lived experiences of running decentralised social media. Our research indicates that while decentralised social media offers significant potential in supporting the safety, identity and privacy needs of marginalised and stigmatised communities, it also faces considerable challenges in content moderation, community building and governance. We emphasise the importance of considering the community’s values and diversity when designing future support mechanisms.

A full blog post about the paper is upcoming.

by: Jun Zhao

 
30 Jun 2024

In the academic year 2023-24, the EWADA team supervised two undergraduate students for their final-year projects: one creating a Solid-based fitness tracking application, and the other an autonomous social media prototype.

SolidFitness allows users to upload their fitness and diet tracking data to their Solid pods (currently supporting Fitbit only). They can then choose from various recommendation algorithms to receive suggestions on how to improve their diet or exercise routines. These range from simple threshold-based algorithms to age and gender-based recommendations, cluster-based algorithms, and personalised recommendations. Through this process, users gain much more transparency and control over the data used by the fitness app, compared to what is available on the current marketplace.

Solid-based fitness app
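As an illustration of the simplest end of that spectrum of algorithms, a threshold-based recommender can be a few lines of code. The sketch below is purely illustrative; the field names and thresholds are ours, not the project’s:

```typescript
// Hypothetical shape of one day's activity data read from the user's Solid pod.
interface DailyStats {
  steps: number;
  activeMinutes: number;
}

// A simple threshold-based recommender: compare the day's numbers against
// fixed targets and suggest improvements.
function recommend(stats: DailyStats): string[] {
  const tips: string[] = [];
  if (stats.steps < 8000) tips.push('Take a short walk to reach 8,000 steps.');
  if (stats.activeMinutes < 30) tips.push('Aim for at least 30 active minutes.');
  return tips;
}

console.log(recommend({ steps: 5200, activeMinutes: 12 }));
```

What the Solid architecture adds is that the user, not the platform, chooses which algorithm runs and which slice of their pod data it may read.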

SolidGram is based on a similar concept to Instagram but allows users to keep their posts in their own Solid pods, along with personal information such as their age, gender, interests, and browsing history. Leveraging this control over personal data, SolidGram lets users choose from various recommendation algorithms to receive social media feeds based on their interests, location, or interactions with the feeds (likes or dislikes). Additionally, users can control which data is used by the algorithms to generate recommendations, giving them far greater data autonomy compared to traditional social media platforms.

Solid-based social media app

It has been amazing to see both projects through from design to end-user evaluations over two academic terms. This demonstrates the flexibility of working with the Solid toolkit and protocol to build ethical applications that align with students’ own interests. It has also been exciting to observe how user studies from both applications have shown a positive perception of better control over personal data and the ability to choose between different recommendation algorithms. We hope to extend both projects for wider deployment and testing in the coming months. Please get in touch if you would like to know more.

by: Jun Zhao

 
06 Jun 2024

On May 21 and 22, Professor Sir Nigel Shadbolt, PI of the EWADA project, gave two public lectures about AI, risks and regulation.

On May 21, 2024, Sir Nigel delivered the prestigious Lord Renwick Memorial Lecture, speaking on ‘As If Human: the Regulations, Governance, and Ethics of Artificial Intelligence’.

In this hour-long seminar, Nigel discussed the decades-long history of alternating enthusiasm and disillusionment for AI, as well as its more recent achievements and deployments. As we all know, these recent developments have led to renewed claims about the transformative and disruptive effects of AI. However, there is growing concern about how we regulate and govern AI systems and ensure that such systems align with human values and ethics. In this lecture, Nigel provided a review of the history and current state of the art in AI and considered how we address the challenges of regulation, governance, and ethical alignment of current and imminent AI systems.

On May 22, 2024, Nigel gave the Lord Mayor’s Online Lecture, ‘The Achilles’ Heel of AI: How a major tech risk to your business could be one you haven’t heard of—and what you should do’.

In this talk, Nigel discussed the critical challenge of model collapse. This phenomenon, where AI becomes unstable or ceases to function effectively, is a looming threat with profound implications for our reliance on this critical technology.

Model collapse stems from training or refining models on AI-generated data rather than on information produced directly by human beings or by devices other than the AI systems themselves. It comes about when AI models create terabytes of new data that contain little of the originality, innovation, or variety possessed by the original information used to “train” them, or when AI models are weaponised to generate misinformation, deep fakes, or “poison” data. The result can be a downward spiral of progressively degraded output, leading to model collapse. The consequences could be far-reaching, potentially resulting in financial setbacks, reputational damage and job losses.
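The dynamic is easy to caricature in a few lines of code. The toy simulation below is our own illustration, not material from the lecture: each generation of a trivial ‘model’ (a fitted mean and spread) is trained only on samples drawn from the previous generation, and with small samples the fitted spread tends to drift towards zero, losing the variety of the original distribution:

```typescript
// Draw one sample from a normal distribution via the Box-Muller transform.
function sampleNormal(mean: number, std: number): number {
  const u = 1 - Math.random(); // in (0, 1], so Math.log(u) is finite
  const v = Math.random();
  return mean + std * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Generation 0: the "real" data distribution.
let mean = 0;
let std = 1;

// Each subsequent generation is fitted only to data produced by the previous one.
for (let gen = 1; gen <= 50; gen++) {
  const samples = Array.from({ length: 20 }, () => sampleNormal(mean, std));
  mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const variance =
    samples.reduce((a, b) => a + (b - mean) ** 2, 0) / samples.length;
  std = Math.sqrt(variance);
  if (gen % 10 === 0) console.log(`generation ${gen}: std ≈ ${std.toFixed(3)}`);
}
```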

In this talk, Nigel dived into this little-known risk, drawing on insights from his research and that of others to explore how the quality and provenance of data are too often overlooked in business decisions about the implementation and use of AI tools. Yet data plays a pivotal role in determining these systems’ reliability, effectiveness, and value to the bottom line.

At the end of the talk, Nigel also discussed potential solutions for mitigating model collapse and outlined a roadmap for businesses to foster a strong data infrastructure on which to base their AI strategies. These strategies provide powerful knowledge, understanding, and tools for us to navigate the complexities of this new frontier of technology safely and effectively.

by: Jun Zhao

 
17 May 2024

In today’s digital age, social media has emerged as a ubiquitous platform for children worldwide to socialise, be entertained and learn. Recent studies show that 38% of US and 42% of UK kids aged 5-13 are using these platforms, despite the common minimum age restriction of 13 set by social media companies for account registration.

However, amid the plethora of legislative discussions, a crucial concern often remains overlooked: the pervasive data harvesting practices that underpin social media platforms and their potential to undermine children’s autonomy. It is for this reason that Computer Science researchers working on the Oxford Martin Programme on Ethical Web and Data Architectures developed CHAITok, an innovative Android mobile app designed to empower children with greater control and autonomy over their data on social media.

When individuals interact on social media, they produce vast data streams that platform owners harvest. This process, often referred to as “datafication”, involves recording, tracking, aggregating, analysing, and capitalising on users’ data. It is this practice that empowers social media giants to predict and influence children’s personal attributes, behaviours, and preferences, which in turn shapes children’s online engagement and content choices, contributing to increased dependence on these platforms and potentially shaping how children view and engage with the world during vital stages of cognitive and emotional development.

The recent UK Online Safety Act is a pioneering move towards addressing this outstanding challenge. However, while we regulate and enforce changes in the current platform-driven digital ecosystem, it is also a critical time to put children’s voices at the heart of our designs and innovations, respecting their needs and how they would like to be supported and equipped with better digital resilience and autonomy.

CHAITok’s interface is similar to TikTok’s, but while children browse video recommendations, they have many opportunities to control what data is used by CHAITok and to keep all their data (including interaction data, personal preferences, etc.) safe in their own personal data store.

It offers three distinctive features:

  • Respecting children’s values: CHAITok prioritises the preservation of children’s values and preferences. We carried out extensive co-design activities with 50 children [1] to inform our design, ensuring that CHAITok reflects children’s values of having better autonomy and agency over their digital footprint.
  • Supporting evolving autonomy: Grounded in our theoretical understanding that children’s autonomy has cognitive, behavioural and emotional dimensions, and that the development of autonomy is an evolving process throughout childhood, CHAITok provides tools and resources for children to develop their sense of autonomy from multiple aspects in an age-appropriate way, supporting their journey towards greater autonomy in navigating the digital landscape.
  • Actively fostering autonomy instead of focusing on minimising harms: CHAITok advocates for children’s digital rights and emphasises the importance of respecting their privacy and autonomy in online interactions. Unlike existing approaches, we took a proactive approach in our design to explicitly nudge, prompt and scaffold children’s critical thinking, action taking and reflection.

Our 27 user study sessions involving 109 children aged 10–13 gave us deep insight into children’s current experiences and perceptions of social media platforms:

  • Almost all of these children felt a lack of autonomy over their data (‘don’t have autonomy at all’).
  • One in three children described their experience with data on social media platforms as quite passive, and often felt ‘tricked’ by these platforms.
  • About a third found it hard to disengage from these platforms, and some even reported sleep issues when using phones before bedtime; many felt ‘helpless’ to resist these platforms.

After interacting with our app prototype in groups for about one hour at their schools, most children felt safer, more empowered, and more respected. These are encouraging results for our research into helping children overcome the difficulties associated with feeling unsupported and unconfident.

Our results contrast with the common perception that children are incapable of making autonomous decisions. They provide critical input for reflecting on the current ethics principles for creating AI technologies for children, and point to an urgent need to further explore wider mechanisms for fostering autonomy in children’s digital lives.

We look forward to continuing our exploration of how we may deploy CHAITok as an app in the wild, to provide an alternative social media experience for children in a safer and more autonomy-respecting environment.

Read the paper, ‘CHAITok: A Proof-of-Concept System Supporting Children’s Sense of Data Autonomy on Social Media’.

For further information, see the Oxford Child-Centred AI (Oxford CCAI) Design Lab.

Repost of link.

by: Jun Zhao

 
16 May 2024

In today’s digital age, children are growing up surrounded by technology, with their online activities often tracked, analysed, and monetised. While the digital landscape offers countless opportunities for learning and exploration, it also exposes children to a myriad of datafication risks, including harmful profiling, micro-targeting, and behavioural manipulation.

It is for this reason that Computer Science researchers working on the Oxford Martin Programme on Ethical Web and Data Architectures developed the KOALA Hero Toolkit. It has been co-developed with families and children by Oxford researchers over several years, in response to increasing concerns from families about the risks associated with extensive use of digital technologies.

Digital monitoring-based technologies, which enable parents to restrict, monitor or track children’s online activities, dominate the market. Popular apps such as Life 360, Google Family Link, Apple Maps, Qustodio, and Apple Screen Time are widespread. According to an Ofcom report, 70% of UK parents with children aged 3-17 have used technology to control their child’s access to online content. A similar picture is found in the US, where 86% of parents with children aged 5-11 report restricting when and for how long their kids can use screens, and 72% use parental controls to restrict how much their child uses screens.

Research has shown that such approaches have limited efficacy in keeping children safe within the digital space or in reducing screen time. At the same time, the risks associated with these approaches are much less discussed, such as their potential to undermine trust within families or to prevent the development of children’s self-regulation skills. With modern families increasingly struggling with their children’s relationship with digital technologies, and lacking effective and clear guidance, new approaches are urgently needed.

The KOALA Hero toolkit has several key features:

  • Promote family awareness: By providing families with insights into datafication risks, i.e. how children’s data may be collected, processed, and used to affect what they see online, the toolkit empowers families to make informed decisions about their online activities.
  • Support interactive learning: Through both digital and physical components, and the provision of interactive activities and discussion sheets, the toolkit facilitates meaningful conversations between children and parents, fostering a deeper understanding of digital privacy and ethics.
  • Encourage family engagement: By providing worksheets that guide conversations and interactions with the toolkit among families, with both children and parents involved in the learning process, the toolkit strengthens familial bonds and promotes collaborative problem-solving.

We assessed the toolkit with 17 families, involving 23 children aged 10-14. We found that families developed better awareness of the implications of datafication compared with their prior understanding. The toolkit also enabled families to feel better equipped to discuss datafication risks and to have more balanced and joint family conversations.

These findings provide positive indications for our approach of encouraging proactive family engagement, instead of focusing on controls and monitoring. We hope to improve the toolkit and work with a larger sample through a longer-term study before sharing the toolkit on popular app stores.

Read the paper, ‘KOALA Hero Toolkit: A New Approach to Inform Families of Mobile Datafication Risks’.

For further information, see the Oxford Child-Centred AI (Oxford CCAI) Design Lab.

Repost of link.

EWADA third-year project meeting


by: Jun Zhao

 
26 Apr 2024

We had our third annual project meeting on 23 April, attended by 16 project members and affiliates. During the meeting, we had an exciting set of discussions about our recent research progress over the last year, including:

  • SocialGenPod: a privacy-friendly generative AI social web application
  • Various ongoing work related to fairness in decentralised ML
  • The latest perennial Data Terms of Use vocabulary and protocols
  • The ongoing Digital Autonomy Machine Experiment, and
  • Demos of the SolidFitness app and Solid-based social app

While this does not represent all the work carried out by EWADA last year, the discussions reflected a balanced investigation of both the technical and social aspects by the team members. Particularly notable were the two demos, given by our fourth-year students, which showed great promise for the power of data autonomy that can be enabled by a Solid-like architecture.

Some of this work is already featured in the papers shared on our website, and we will publish technical notes for the two demos.

In the next few months, we look forward to welcoming several summer interns to join the team and continue to build up our strength in enabling a privacy-preserving, autonomous, decentralised web architecture.

[NEW NATURE PAPER] AI ethics are ignoring children, say Oxford Martin researchers

A report of the Nature Machine Intelligence publication

by: Jun Zhao

 
20 Mar 2024

In a perspective paper published in Nature Machine Intelligence, the authors highlight that although there is a growing consensus around what high-level AI ethical principles should look like, too little is known about how to effectively apply them in practice for children. The study mapped the global landscape of existing ethics guidelines for AI and identified four main challenges in adapting such principles for children’s benefit:

  • A lack of consideration for the developmental side of childhood, especially the complex and individual needs of children, age ranges, development stages, backgrounds, and characters.
  • Minimal consideration for the role of guardians (e.g. parents) in childhood. For example, parents are often portrayed as having superior experience to children, whereas the digital world may need to rethink this traditional role of parents.
  • Too few child-centred evaluations that consider children’s best interests and rights. Quantitative assessments are the norm when assessing issues like safety and safeguarding in AI systems, but these tend to fall short when considering factors like the developmental needs and long-term wellbeing of children.
  • Absence of a coordinated, cross-sectoral, and cross-disciplinary approach to formulating ethical AI principles for children that are necessary to effect impactful practice changes.

The researchers also drew on real-life examples and experiences when identifying these challenges. They found that although AI is being used to keep children safe, typically by identifying inappropriate content online, there has been a lack of initiative to incorporate safeguarding principles into AI innovations, including those supported by Large Language Models (LLMs). Such integration is crucial to prevent children from being exposed to biased content based on factors such as ethnicity, or to harmful content, especially for vulnerable groups, and the evaluation of such methods should go beyond mere quantitative metrics such as accuracy or precision. Through their partnership with the University of Bristol, the researchers are also designing tools to help children with ADHD, carefully considering their needs and designing interfaces to support their sharing of data with AI-related algorithms, in ways that are aligned with their daily routines, digital literacy skills and need for simple yet effective interfaces.

In response to these challenges, the researchers recommended:

  • increasing the involvement of key stakeholders, including parents and guardians, AI developers, and children themselves;
  • providing more direct support for industry designers and developers of AI systems, especially by involving them more in the implementation of ethical AI principles;
  • establishing legal and professional accountability mechanisms that are child-centred; and
  • increasing multidisciplinary collaboration around a child-centred approach involving stakeholders in areas such as human-computer interaction, design, algorithms, policy guidance, data protection law and education.

Dr Jun Zhao, Oxford Martin Fellow, Senior Researcher at the University’s Department of Computer Science, and lead author of the paper, said:

‘The incorporation of AI in children’s lives and our society is inevitable. While there are increased debates about who should ensure technologies are responsible and ethical, a substantial proportion of such burdens falls on parents and children to navigate this complex landscape.

‘This perspective article examined existing global AI ethics principles and identified crucial gaps and future development directions. These insights are critical for guiding our industries and policymakers. We hope this research will serve as a significant starting point for cross-sectoral collaborations in creating ethical AI technologies for children and global policy development in this space.’

The authors outlined several ethical AI principles that would especially need to be considered for children. They include ensuring fair, equal, and inclusive digital access, delivering transparency and accountability when developing AI systems, safeguarding privacy and preventing manipulation and exploitation, guaranteeing the safety of children, and creating age-appropriate systems while actively involving children in their development.

Professor Sir Nigel Shadbolt, co-author, Director of the EWADA Programme, Principal of Jesus College Oxford and a Professor of Computing Science at the Department of Computer Science, said:

‘In an era of AI powered algorithms, children deserve systems that meet their social, emotional, and cognitive needs. Our AI systems must be ethical and respectful at all stages of development, but this is especially critical during childhood.’

Read ‘Challenges and opportunities in translating ethical AI principles into practice for children’ in Nature Machine Intelligence.

Repost of link.

by: Jun Zhao

 
19 Mar 2024

Through an expansive research project, EWADA researchers are trying to better understand people’s values around who should manage the sharing of their personal information online.

The Digital Autonomy Machine Experiment aims to explore how the public would like to exercise their autonomy when it comes to managing their data. It is specifically investigating whether people would like to manage their personal information independently, through a trusted organisation (data trust), or through a semi- or fully-automated system.

Dr Samantha-Kaye Johnston, research lead of the Digital Autonomy Machine Experiment and Research Associate at EWADA, said of the research’s importance: ‘True empowerment starts with awareness, especially in the digital age where critical thinking about personal data management is crucial. Digital autonomy is about giving people a choice in the consent mechanisms that underpin the sharing of their data in digital spaces. At the heart of the Digital Autonomy Machine Experiment is our commitment to providing the public with opportunities to shape how their data is handled in the age of AI.’

Underpinning the Digital Autonomy Machine Experiment is the concept that an individual’s scattered data can be gathered and consolidated in a secure space called a Personal Online Data Store, or Solid Pod. Developed by Sir Tim Berners-Lee – inventor of the World Wide Web, Professorial Research Fellow and director of EWADA – a Solid Pod can accommodate various bits of data such as contacts, files, photos, and everything else about a particular person. The individual can then decide who has access to that data and even what information gets shared. In other words, they have absolute autonomy over what to share, with whom, and what to receive, and they can retract such permissions at any time.
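For readers curious what ‘deciding who has access’ looks like in code, here is a sketch using the universalAccess API of the open-source @inrupt/solid-client library, one of several Solid client libraries. The URLs are placeholders, and the exact call signatures should be checked against the library’s documentation:

```typescript
import { universalAccess } from '@inrupt/solid-client';
import { fetch } from '@inrupt/solid-client-authn-browser';

// Placeholder URLs: a resource in Alice's Pod and a friend's WebID.
const resource = 'https://alice.example.org/contacts/';
const friend = 'https://bob.example.org/profile/card#me';

// Grant Bob read-only access to the resource...
await universalAccess.setAgentAccess(resource, friend, { read: true }, { fetch });

// ...and retract that permission whenever Alice changes her mind.
await universalAccess.setAgentAccess(resource, friend, { read: false }, { fetch });
```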

Solid Pods could provide a secure space for individuals to consolidate their various data and decide who can access it. Image credit: napong rattanaraktiya, Getty Images.

However, the researchers also understand that autonomy can mean different things to different people, which is why the Digital Autonomy Machine Experiment was launched. ‘We’re thrilled to invite public opinions worldwide to influence the development of Solid Pods, aligning with our goal of fostering digital autonomy,’ said Dr Samantha-Kaye Johnston.

Sir Tim Berners-Lee, Professorial Research Fellow and director of EWADA, said of the Solid Pods that aim to help create a better internet as part of his Solid protocol: ‘Solid Pods re-organise the global data infrastructure by placing individuals at the centre of their data storage, shifting control away from both applications and centralised data monopolies. With Solid Pods, each individual has a personal data repository, enabling them to dictate access and reverse the current power dynamic. This new model not only fosters cross-platform collaboration but also grants individuals the autonomy to leverage their data for personal insights and benefits.’

‘EWADA is uniquely positioned to produce ground-breaking technologies to empower everyone’s data autonomy. However, it’s essential to recognise that preferences regarding the exercise of data autonomy can vary significantly based on cultural contexts. This global experiment will provide the critical insights needed to inform the design of our technologies and ensure the inclusivity and equality that are central to the vision of EWADA,’ said Dr Jun Zhao, research lead of the EWADA project, Oxford Martin Fellow and Senior Researcher at Oxford University’s Department of Computer Science.

The project is hoping to engage with up to 1 million adults (aged 18 and above) across the world to take a carefully designed 10-minute survey. Participants are being invited to thoughtfully consider their values regarding what process is used to manage personal information in each fictional scenario presented in the survey.

The results of the research will provide critical inputs to inform the development of technology in EWADA that respects people’s data autonomy preferences in digital environments and ultimately ensures the internet is a safer, more empowered place.

Take the survey on the Digital Autonomy Machine Experiment website.

Repost of link.

Four major research papers from EWADA accepted for publication

Nature Machine Intelligence, CHI 2024 and WWW 2024

by: Jun Zhao

 
23 Jan 2024

We are thrilled to announce that the EWADA team has achieved significant success, with four major research papers accepted for publication by Nature Machine Intelligence, CHI 2024, and WWW 2024. These prestigious academic venues are highly competitive, and our researchers have put in tremendous effort to achieve these outstanding results. The papers cover a diverse range of topics, including the research agenda for supporting child-centred AI, the development and assessment of new ways to enhance families’ critical thinking regarding datafication, children’s data autonomy, and users’ ability to navigate data terms of use in decentralised settings.

Ge Wang, Jun Zhao, Max Van Kleek and Nigel Shadbolt. Challenges and opportunities in translating ethical AI principles into practice for children. Nature Machine Intelligence. To appear

Led by Tiffany Ge and Dr Jun Zhao, this perspective paper discusses the current global landscape of ethics guidelines for AI and their relevance to children. The article critically assesses the strategies and recommendations proposed by current AI ethics initiatives, identifying the critical challenges in translating such ethical AI principles into practice for children. The article provides timely and crucial recommendations for embedding ethics into the development and governance of AI for children.

Ge Wang, Jun Zhao, Max Van Kleek and Nigel Shadbolt. KOALA Hero Toolkit: A New Approach to Inform Families of Mobile Datafication Risks. CHI 2024. Overall acceptance rate 26.3%. To appear

This is the final evaluation study of the KOALA Hero research project, led by Dr Jun Zhao and partially supported by EWADA. In this work, we present a new hybrid toolkit, KOALA Hero, designed to help children and parents jointly understand the datafication risks posed by their mobile apps. Through user studies involving 17 families, we assessed how the toolkit influenced families’ thought processes, perceptions and decision-making regarding mobile datafication risks. Our findings show that KOALA Hero supports families’ critical thinking and promotes family engagement, providing timely input to global efforts aimed at addressing datafication risks and underscoring the importance of strengthening legislative and policy enforcement of ethical data governance.

This work has also contributed to Dr Zhao’s discussion paper to be published by the British Academy, jointly authored with Dr Ekaterina Hertog from the Oxford Internet Institute and the Institute for Ethics in AI, and Professor Netta Weinstein from the University of Reading.

Ge Wang, Jun Zhao, Max Van Kleek and Nigel Shadbolt. CHAITok: A Proof-of-Concept System Supporting Children’s Sense of Data Autonomy. CHI 2024. Overall acceptance rate 26.3%. To appear

A core part of EWADA’s mission, CHAITok explores children’s ‘sense of data autonomy’. In this paper, we present CHAITok, a Solid-based Android mobile application designed to enhance children’s sense of autonomy over their data on social media. Through 27 user study sessions with 109 children aged 10–13, we offer insights into the current lack of data autonomy among children regarding their online information and how we can foster children’s sense of data autonomy through a socio-technical journey. Our findings provide crucial insights into children’s values, how we can better support children’s evolving autonomy, and how to design for children’s digital rights. We emphasise data autonomy as a fundamental right for children, and call for further research, design innovation, and policy change on this critical issue.

Rui Zhao and Jun Zhao. Perennial Semantic Data Terms of Use for Decentralized Web. WWW 2024. Overall acceptance rate 20.2%. To appear.

Our latest research article addresses a significant challenge in decentralised Web architectures, such as Solid, specifically focusing on how to help users navigate numerous applications and decide which applications can be trusted with access to their data Pods.

Currently, this process often involves reading lengthy and complex Terms of Use agreements, which users often find daunting or simply ignore. This compromises user autonomy and impedes the detection of data misuse. To address this issue, EWADA researchers have developed a novel formal description of Data Terms of Use (DToU), along with a DToU reasoner. Users and applications can specify their own parts of the DToU policy with local knowledge, covering permissions, requirements, prohibitions and obligations. Automated reasoning verifies compliance, and also derives policies for output data. This constitutes a perennial DToU language, where policy authoring occurs only once, allowing ongoing automated checks across users, applications and activity cycles. Our solution has been successfully integrated into the Solid framework with promising performance results. We believe this work demonstrates the practicality of a perennial DToU language and the potential for a paradigm shift in how users interact with data and applications in a decentralised Web, offering both improved privacy and usability.
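Purely as an illustration of the kinds of statements such a policy carries, the snippet below bundles permissions, prohibitions and obligations into an ad-hoc structure. The actual DToU language in the paper is a formal, machine-reasonable vocabulary; every field name here is invented:

```typescript
// Invented field names, for illustration only.
const dataTermsOfUse = {
  permissions: [
    { action: 'read', purpose: 'fitness-recommendation' },
  ],
  prohibitions: [
    { action: 'share', recipientClass: 'third-party-advertiser' },
  ],
  obligations: [
    { action: 'delete', within: 'P30D' }, // ISO 8601 duration: 30 days
  ],
};
```

Because such a policy is machine-readable, a reasoner can automatically check an application’s declared behaviour against it and derive the policy that should accompany any output data.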

All papers are currently in preparation for the camera-ready stage. Once finalised, you can find them on our publication page. We welcome your feedback and any follow-up questions.