by: Jun Zhao

 
06 Jun 2024

On May 21 and 22, Professor Sir Nigel Shadbolt, PI of the EWADA project, gave two public lectures on AI, its risks, and its regulation.

On May 21, 2024, Nigel delivered the prestigious Lord Renwick Memorial Lecture, speaking on ‘As If Human: the Regulations, Governance, and Ethics of Artificial Intelligence’.

In this hour-long seminar, Nigel discussed the decades-long history of alternating enthusiasm and disillusionment for AI, as well as its more recent achievements and deployments. These recent developments have led to renewed claims about the transformative and disruptive effects of AI, but also to growing concern about how we regulate and govern AI systems and ensure that they align with human values and ethics. Nigel reviewed the history and current state of the art in AI and considered how we might address the challenges of regulation, governance, and ethical alignment of current and imminent AI systems.

On May 22, 2024, Nigel gave the Lord Mayor’s Online Lecture, ‘The Achilles’ Heel of AI: How a major tech risk to your business could be one you haven’t heard of—and what you should do’.

In this talk, Nigel discussed the critical risk of model collapse. This phenomenon, in which AI becomes unstable or ceases to function effectively, is a looming threat with profound implications for our reliance on this critical technology.

Model collapse stems from using AI-generated data to train or refine models, rather than relying on information generated directly by human beings or by devices other than the AI systems themselves. It arises when AI models create terabytes of new data that contain little of the originality, innovation, or variety of the information originally used to train them, or when AI models are weaponised to generate misinformation, deep fakes, or “poisoned” data. A downward spiral of progressively degraded output can result, ending in model collapse. The consequences could be far-reaching, potentially resulting in financial setbacks, reputational damage and job losses.
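
A toy simulation makes this downward spiral concrete. The sketch below is our own illustration rather than anything from the lecture, and it assumes the simplest possible “model”: one that reproduces its training data by resampling it. The variety in the data, measured as the number of distinct values, shrinks every generation.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=1000)  # generation 0: diverse "human" data

for generation in range(1, 11):
    # Each generation is trained only on the previous generation's output:
    # here, a resample (with replacement) of the existing data.
    data = rng.choice(data, size=data.size, replace=True)
    print(f"generation {generation:2d}: "
          f"{np.unique(data).size} distinct values remain")
```

Because each resample can only keep values that already exist, originality is never replenished. Real model collapse is more subtle, but the one-way loss of variety is the same.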

Nigel dived into this little-known risk, drawing on insights from his research and that of others, and explored how the quality and provenance of data are too often overlooked in business decisions about the implementation and use of AI tools. Yet data plays a pivotal role in determining these systems’ reliability, effectiveness, and value to the bottom line.

Nigel closed the talk with potential solutions for mitigating model collapse and outlined a roadmap for businesses to build a strong data infrastructure on which to base their AI strategies. Such strategies provide powerful knowledge, understanding, and tools for navigating the complexities of this new frontier of technology safely and effectively.

by: Jun Zhao

 
17 May 2024

In today’s digital age, social media has emerged as a ubiquitous platform where children worldwide socialise, are entertained, and learn. Recent studies show that 38% of US and 42% of UK children aged 5-13 use these platforms, despite the minimum age of 13 that social media companies commonly set for account registration.

However, amid the many legislative discussions, a crucial concern often remains overlooked: the pervasive data-harvesting practices that underpin social media platforms and their potential to undermine children’s autonomy. It is for this reason that Computer Science researchers working on the Oxford Martin Programme on Ethical Web and Data Architectures developed CHAITok, an innovative Android mobile app designed to give children greater control and autonomy over their data on social media.

When individuals interact on social media, they produce vast data streams that platform owners harvest. This process, often referred to as “datafication”, involves recording, tracking, aggregating, analysing, and capitalising on users’ data. It is this practice that enables social media giants to predict and influence children’s personal attributes, behaviours, and preferences, which in turn shapes children’s online engagement and content choices, deepens their dependence on these platforms, and potentially shapes how they view and engage with the world during vital stages of cognitive and emotional development.

The recent UK Online Safety Act is a pioneering step towards addressing this challenge. However, while we regulate and enforce changes in the current platform-driven digital ecosystem, it is also a critical time to put children’s voices at the heart of our design and innovation, respecting their needs and how they would like to be supported and equipped for better digital resilience and autonomy.

CHAITok’s interface is similar to TikTok’s, but while children browse video recommendations, they have many opportunities to control what data CHAITok uses, and all their data (including interaction data, personal preferences, etc.) is kept safe in their own personal data store.

It offers three distinctive features:

  • Respecting children’s values: CHAITok prioritises children’s values and preferences. We carried out extensive co-design activities with 50 children [1] to inform our design, ensuring that CHAITok reflects children’s desire for greater autonomy and agency over their digital footprint.
  • Supporting evolving autonomy: Grounded in our theoretical understanding that children’s autonomy has cognitive, behavioural and emotional dimensions, and that autonomy develops throughout childhood, CHAITok provides tools and resources for children to build their sense of autonomy from multiple aspects in an age-appropriate way, supporting their journey towards greater autonomy in navigating the digital landscape.
  • Actively fostering autonomy rather than merely minimising harms: CHAITok advocates for children’s digital rights and emphasises the importance of respecting their privacy and autonomy in online interactions. Unlike existing approaches, our design takes a proactive approach, explicitly nudging, prompting and scaffolding children’s critical thinking, action-taking and reflection.

Our 27 user study sessions involving 109 children aged 10–13 gave us deep insight into children’s current experiences and perceptions of social media platforms:

  • Almost all of these children felt a lack of autonomy over their data (‘don’t have autonomy at all’).
  • One in three children described their experience with data on social media platforms as quite passive, and often felt ‘tricked’ by these platforms.
  • About a third found it hard to disengage from these platforms; some even reported sleep issues when using phones before bedtime, and many felt ‘helpless’ to resist these platforms.

After interacting with our app prototype in group sessions of about one hour at their schools, most children reported feeling safer, more empowered, and more respected. These are encouraging results for our research goal of helping children overcome the feeling of being unsupported and unconfident.

Our results contrast with the common perception that children are incapable of making autonomous decisions. They provide critical input for reflecting on current ethics principles for creating AI technologies for children, and point to an urgent need to explore wider mechanisms for fostering autonomy in children’s digital lives.

We look forward to continuing our exploration of how we may deploy CHAITok as an app in the wild, to provide an alternative social media experience for children in a safer and more autonomy-respecting environment.

Read the paper, ‘CHAITok: A Proof-of-Concept System Supporting Children’s Sense of Data Autonomy on Social Media’.

For further information, see the Oxford Child-Centred AI (Oxford CCAI) Design Lab.

Repost of link.

by: Jun Zhao

 
16 May 2024

In today’s digital age, children are growing up surrounded by technology, with their online activities often tracked, analysed, and monetised. While the digital landscape offers countless opportunities for learning and exploration, it also exposes children to a myriad of datafication risks, including harmful profiling, micro-targeting, and behavioural manipulation.

It is for this reason that Computer Science researchers working on the Oxford Martin Programme on Ethical Web and Data Architectures developed the KOALA Hero Toolkit. It has been co-developed with families and children by Oxford researchers over several years, in response to families’ increasing concerns about the risks associated with extensive use of digital technologies.

Digital monitoring-based technologies, which enable parents to restrict, monitor or track children’s online activities, dominate the market. Popular apps such as Life 360, Google Family Link, Apple Maps, Qustodio, and Apple Screen Time are in widespread use. According to an Ofcom report, 70% of UK parents with children aged 3-17 have used technology to control their child’s access to online content. The picture is similar in the US, where 86% of parents with children aged 5-11 report restricting when and for how long their kids can use screens, and 72% use parental controls to restrict how much their child uses screens.

Research has shown that such approaches have limited efficacy in keeping children safe within the digital space or reducing screen time. At the same time, the risks associated with these approaches are much less discussed, such as their potential to undermine family trust relationships or to hinder the development of children’s self-regulation skills. With modern families increasingly struggling with their children’s relationship with digital technologies, and lacking clear and effective guidance, new approaches are urgently needed.

The KOALA Hero toolkit has several key features:

  • Promoting family awareness: By providing families with insights into datafication risks (i.e. how children’s data may be collected, processed, and used to affect what they see online), the toolkit empowers families to make informed decisions about their online activities.
  • Supporting interactive learning: Through both a digital and a physical component, and the provision of interactive activities and discussion sheets, the toolkit facilitates meaningful conversations between children and parents, fostering a deeper understanding of digital privacy and ethics.
  • Encouraging family engagement: By providing worksheets that guide families’ conversations and interactions with the toolkit, involving both children and parents in the learning process, the toolkit strengthens familial bonds and promotes collaborative problem-solving.

We assessed the toolkit with 17 families, involving 23 children aged 10-14. We found that families developed a better awareness of the implications of datafication compared with their prior understanding. The toolkit also left families feeling better equipped to discuss datafication risks and to have more balanced, joint family conversations.

These findings provide positive indications for our approach of encouraging proactive family engagement instead of focusing on control and monitoring. We hope to improve the toolkit and to work with a larger sample in a longer-term study before sharing the toolkit on popular app stores.

Read the paper, ‘KOALA Hero Toolkit: A New Approach to Inform Families of Mobile Datafication Risks’.

For further information, see the Oxford Child-Centred AI (Oxford CCAI) Design Lab.

Repost of link.

AI ethics are ignoring children, say Oxford Martin researchers

A report of the Nature Machine Intelligence publication

by: Jun Zhao

 
20 Mar 2024

In a perspective paper published in Nature Machine Intelligence, the authors highlight that although there is a growing consensus around what high-level AI ethical principles should look like, too little is known about how to apply them effectively in practice for children. The study mapped the global landscape of existing ethics guidelines for AI and identified four main challenges in adapting such principles for children’s benefit:

  • A lack of consideration for the developmental side of childhood, especially the complex and individual needs of children, age ranges, developmental stages, backgrounds, and characters.
  • Minimal consideration for the role of guardians (e.g. parents) in childhood. For example, parents are often portrayed as having superior experience to children, whereas the digital world may require this traditional role to be rethought.
  • Too few child-centred evaluations that consider children’s best interests and rights. Quantitative assessments are the norm when assessing issues like safety and safeguarding in AI systems, but these tend to fall short when considering factors like the developmental needs and long-term wellbeing of children.
  • Absence of a coordinated, cross-sectoral, and cross-disciplinary approach to formulating ethical AI principles for children that are necessary to effect impactful practice changes.

The researchers also drew on real-life examples and experiences when identifying these challenges. They found that although AI is being used to keep children safe, typically by identifying inappropriate content online, there has been little initiative to incorporate safeguarding principles into AI innovations, including those supported by Large Language Models (LLMs). Such integration is crucial to prevent children from being exposed to content that is biased on factors such as ethnicity, or to harmful content, especially for vulnerable groups, and the evaluation of such methods should go beyond mere quantitative metrics such as accuracy or precision. Through their partnership with the University of Bristol, the researchers are also designing tools to help children with ADHD, carefully considering their needs and designing interfaces that support their sharing of data with AI-related algorithms in ways that are aligned with their daily routines, digital literacy skills, and need for simple yet effective interfaces.

In response to these challenges, the researchers recommended:

  • increasing the involvement of key stakeholders, including parents and guardians, AI developers, and children themselves;
  • providing more direct support for industry designers and developers of AI systems, especially by involving them more in the implementation of ethical AI principles;
  • establishing legal and professional accountability mechanisms that are child-centred; and
  • increasing multidisciplinary collaboration around a child-centred approach involving stakeholders in areas such as human-computer interaction, design, algorithms, policy guidance, data protection law and education.

Dr Jun Zhao, Oxford Martin Fellow, Senior Researcher at the University’s Department of Computer Science, and lead author of the paper, said:

‘The incorporation of AI in children’s lives and our society is inevitable. While there are increased debates about who should ensure technologies are responsible and ethical, a substantial proportion of such burdens falls on parents and children to navigate this complex landscape.

‘This perspective article examined existing global AI ethics principles and identified crucial gaps and future development directions. These insights are critical for guiding our industries and policymakers. We hope this research will serve as a significant starting point for cross-sectoral collaborations in creating ethical AI technologies for children and global policy development in this space.’

The authors outlined several ethical AI principles that would especially need to be considered for children. They include ensuring fair, equal, and inclusive digital access, delivering transparency and accountability when developing AI systems, safeguarding privacy and preventing manipulation and exploitation, guaranteeing the safety of children, and creating age-appropriate systems while actively involving children in their development.

Professor Sir Nigel Shadbolt, co-author, Director of the EWADA Programme, Principal of Jesus College Oxford and a Professor of Computing Science at the Department of Computer Science, said:

‘In an era of AI powered algorithms, children deserve systems that meet their social, emotional, and cognitive needs. Our AI systems must be ethical and respectful at all stages of development, but this is especially critical during childhood.’

Read ‘Challenges and opportunities in translating ethical AI principles into practice for children’ in Nature Machine Intelligence.

Repost of link.

by: Jun Zhao

 
19 Mar 2024

Through an expansive research project, EWADA researchers are trying to better understand people’s values over who manages the sharing of their personal information online.

The Digital Autonomy Machine Experiment aims to explore how the public would like to exercise their autonomy when it comes to managing their data. It is specifically investigating whether people would like to manage their personal information independently, through a trusted organisation (data trust), or through a semi- or fully-automated system.

Dr Samantha-Kaye Johnston, research lead of the Digital Autonomy Machine Experiment and Research Associate at EWADA, said of the research’s importance: ‘True empowerment starts with awareness, especially in the digital age where critical thinking about personal data management is crucial. Digital autonomy is about giving people a choice in the consent mechanisms that underpin the sharing of their data in digital spaces. At the heart of the Digital Autonomy Machine Experiment is our commitment to providing the public with opportunities to shape how their data is handled in the age of AI.’

Underpinning the Digital Autonomy Machine Experiment is the concept that an individual’s scattered data can be gathered and consolidated in a secure space called a Personal Online Data Store, or Solid Pod. Developed by Sir Tim Berners-Lee – inventor of the World Wide Web, Professorial Research Fellow and director of EWADA – a Solid Pod can accommodate various bits of data, such as contacts, files, photos, and everything else about a particular person. The individual can then decide who has access to that data and even what information gets shared. In other words, they have absolute autonomy over what to share and with whom, what to receive, and can retract such permissions at any time.
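
At the protocol level, a Solid Pod is essentially a web server holding resources (typically RDF) at URLs the owner controls. The sketch below, with a hypothetical pod URL, illustrates the basic read/write interaction; real requests must also carry a Solid-OIDC access token tied to the user’s WebID, which is omitted here for brevity.

```python
import requests

POD = "https://alice.example.org"          # hypothetical pod URL
headers = {"Content-Type": "text/turtle"}  # Solid resources are typically RDF

contact = """
@prefix vcard: <http://www.w3.org/2006/vcard/ns#> .
<#bob> a vcard:Individual ; vcard:fn "Bob" .
"""

# Write a resource into the pod. The owner's access-control settings
# determine which other agents (people or apps) may later read it.
requests.put(f"{POD}/contacts/bob.ttl", data=contact, headers=headers)

# Read it back; this succeeds only if the pod grants this agent access.
print(requests.get(f"{POD}/contacts/bob.ttl").text)
```

Because access is decided at the pod rather than inside any application, permissions can be granted or retracted at any time without an application’s cooperation.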

[Image: Solid Pods could provide a secure space for individuals to consolidate their various data and decide who can access it. Image credit: napong rattanaraktiya, Getty Images.]

However, the researchers also understand that autonomy can mean different things to different people, which is why the Digital Autonomy Machine Experiment was launched. ‘We’re thrilled to invite public opinions worldwide to influence the development of Solid Pods, aligning with our goal of fostering digital autonomy,’ said Dr Samantha-Kaye Johnston.

Sir Tim Berners-Lee, Professorial Research Fellow and director of EWADA, said of Solid Pods, which aim to help create a better internet as part of the Solid protocol: ‘Solid Pods re-organise the global data infrastructure by placing individuals at the centre of their data storage, shifting control away from both applications and centralised data monopolies. With Solid Pods, each individual has a personal data repository, enabling them to dictate access and reverse the current power dynamic. This new model not only fosters cross-platform collaboration but also grants individuals the autonomy to leverage their data for personal insights and benefits.’

‘EWADA is uniquely positioned to produce ground-breaking technologies to empower everyone’s data autonomy. However, it’s essential to recognise that preferences regarding the exercise of data autonomy can vary significantly based on cultural contexts. This global experiment will provide critical insights to inform the design of our technologies and ensure the inclusivity and equality that are central to the vision of EWADA,’ said Dr Jun Zhao, research lead of the EWADA project, Oxford Martin Fellow and Senior Researcher at Oxford University’s Department of Computer Science.

The project hopes to engage up to 1 million adults (aged 18 and above) across the world in a carefully designed 10-minute survey. Participants are invited to consider thoughtfully what process they would value for managing their personal information in each fictional scenario presented in the survey.

The results of the research will provide critical inputs to inform the development of technology in EWADA that respects people’s data autonomy preferences in digital environments and ultimately ensures the internet is a safer, more empowered place.

Take the survey on the Digital Autonomy Machine Experiment website.

Repost of link.

Four major research papers from EWADA accepted for publication

Nature Machine Intelligence, CHI2024 and WWW2024

by: Jun Zhao

 
23 Jan 2024

We are thrilled to announce that the EWADA team has achieved significant success, with four major research papers accepted for publication by Nature Machine Intelligence, CHI2024, and WWW2024. These prestigious academic venues are highly competitive, and our researchers have put in tremendous effort to achieve these outstanding results. The papers cover a diverse range of topics, including the research agenda for supporting child-centred AI, the development and assessment of new ways to enhance families’ critical thinking regarding datafication, children’s data autonomy, and users’ ability to navigate data terms of use in decentralised settings.

Ge Wang, Jun Zhao, Max Van Kleek and Nigel Shadbolt. Challenges and opportunities in translating ethical AI principles into practice for children. Nature Machine Intelligence. To appear.

Led by Tiffany Ge and Dr Jun Zhao, this perspective paper discusses the current global landscape of ethics guidelines for AI and their relevance to children. The article critically assesses the strategies and recommendations proposed by current AI ethics initiatives, identifying the critical challenges in translating such ethical AI principles into practice for children. It provides timely and crucial recommendations for embedding ethics into the development and governance of AI for children.

Ge Wang, Jun Zhao, Max Van Kleek and Nigel Shadbolt. KOALA Hero Toolkit: A New Approach to Inform Families of Mobile Datafication Risks. CHI 2024. Overall acceptance rate 26.3%. To appear.

This is the final evaluation study of the KOALA Hero research project, led by Dr Jun Zhao and partially supported by EWADA. In this work we present a new hybrid toolkit, KOALA Hero, designed to help children and parents jointly understand the datafication risks posed by their mobile apps. Through user studies involving 17 families, we assess how the toolkit influenced families’ thought processes, perceptions and decision-making regarding mobile datafication risks. Our findings show that KOALA Hero supports families’ critical thinking and promotes family engagement, providing timely input to global efforts aimed at addressing datafication risks and underscoring the importance of strengthening legislative and policy enforcement of ethical data governance.

This work has also contributed to Dr Zhao’s discussion paper, to be published by the British Academy, jointly authored with Dr Ekaterina Hertog from the Oxford Internet Institute and the Ethics in AI Institute, and Professor Netta Weinstein from the University of Reading.

Ge Wang, Jun Zhao, Max Van Kleek and Nigel Shadbolt. CHAITok: A Proof-of-Concept System Supporting Children’s Sense of Data Autonomy. CHI 2024. Overall acceptance rate 26.3%. To appear.

A core part of EWADA’s mission, CHAITok explores children’s ‘sense of data autonomy’. In this paper, we present CHAITok, a Solid-based Android mobile application designed to enhance children’s sense of autonomy over their data on social media. Through 27 user study sessions with 109 children aged 10–13, we offer insights into children’s current lack of data autonomy over their online information, and into how we can foster children’s sense of data autonomy through a socio-technical journey. Our findings provide crucial insights into children’s values, how we can better support children’s evolving autonomy, and how to design for children’s digital rights. We emphasise data autonomy as a fundamental right for children, and call for further research, design innovation, and policy changes on this critical issue.

Rui Zhao and Jun Zhao. Perennial Semantic Data Terms of Use for Decentralized Web. WWW 2024. Overall acceptance rate 20.2%. To appear.

Our latest research article addresses a significant challenge in decentralised Web architectures such as Solid: how to help users navigate numerous applications and decide which applications can be trusted with access to their data Pods.

Currently, this process often involves reading lengthy and complex Terms of Use agreements, which users often find daunting or simply ignore. This compromises user autonomy and impedes the detection of data misuse. To address this issue, EWADA researchers have developed a novel formal description of Data Terms of Use (DToU), along with a DToU reasoner. Users and applications can specify their own parts of the DToU policy with local knowledge, covering permissions, requirements, prohibitions and obligations. Automated reasoning verifies compliance and also derives policies for output data. This constitutes a perennial DToU language, where policy authoring occurs only once, allowing ongoing automated checks across users, applications and activity cycles. Our solution has been successfully integrated into the Solid framework with promising performance results. We believe this work demonstrates the practicality of a perennial DToU language and the potential for a paradigm shift in how users interact with data and applications on a decentralised Web, offering both improved privacy and usability.
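
To give a flavour of the idea, a DToU policy can be thought of as a machine-checkable structure that travels with the data: reasoning verifies an application’s declared behaviour against it and derives the policy governing any output data. The paper defines a formal semantic policy language; the Python below is only a simplified stand-in with hypothetical policy fields.

```python
from dataclasses import dataclass, field

@dataclass
class DToUPolicy:
    permissions: set = field(default_factory=set)   # e.g. {"read", "aggregate"}
    prohibitions: set = field(default_factory=set)  # e.g. {"advertising"}
    obligations: set = field(default_factory=set)   # e.g. {"delete-after-30d"}

def compliant(data: DToUPolicy, app: dict) -> bool:
    """Check an application's declared behaviour against the data's policy."""
    uses = set(app["uses"])
    return (uses <= data.permissions                      # every use permitted
            and not uses & data.prohibitions              # nothing prohibited
            and data.obligations <= set(app["accepts"]))  # obligations accepted

def derive_output_policy(inputs: list[DToUPolicy]) -> DToUPolicy:
    # Output data inherits the strictest combination of its inputs' terms,
    # so the policy persists across activity cycles ("perennial").
    return DToUPolicy(
        permissions=set.intersection(*(p.permissions for p in inputs)),
        prohibitions=set.union(*(p.prohibitions for p in inputs)),
        obligations=set.union(*(p.obligations for p in inputs)),
    )

photos = DToUPolicy({"read", "aggregate"}, {"advertising"}, {"delete-after-30d"})
app = {"uses": ["read"], "accepts": ["delete-after-30d"]}
print(compliant(photos, app))  # True: authored once, checked automatically
```

The key property is that the user authors the policy once; every subsequent application, and every piece of derived data, is then checked automatically.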

All papers are currently in preparation for the camera-ready stage. Once finalised, you can find them on our publication page. We welcome your feedback and any follow-up questions.

EWADA Summer 2023 Internship Report

A summary of the four projects carried out

by: Jun Zhao

 
05 Dec 2023

Summer 2023 marked the third year of our highly successful internship programme. We were delighted to host four outstanding interns, along with a master’s student who conducted their graduate project with us. Each student made significant contributions to EWADA, and this report summarises the key outcomes of these projects.

Overview of the projects

The four projects addressed various challenges aligned with EWADA’s core vision, including:

  • Developing a Solid-based application to assist families in managing children’s health data
  • Extending our previous research on privacy-preserving computation with an ability to generate privacy-preserving synthetic data
  • Extending our earlier work on decentralised recommendation algorithms with an ability to generate privacy-preserving movie recommendations
  • Extending our prior research on supporting gig workers with a Solid-based approach to help workers manage their data

A Solid-based application to assist families in managing children’s health data

The project aimed to ensure that children, especially those with ADHD, can exercise better control over the sharing of their data within an ecosystem involving parents/guardians, teachers, the broader school community, and clinicians or hospital staff. This is a crucial challenge because, at present, parents/guardians are the sole stakeholders with access to children’s information, determining how the data is accessed and shared. The project therefore explores a new model in which children are equipped with smartwatches and parents/guardians can examine the data through smartphones.

The project focused on building an architecture on top of Solid to collect, store and synchronise data generated by children’s smartwatches. It provides a web interface that allows a child with ADHD to share data and control the extent of the information shared with requesting stakeholders. Different types of data can be collected, including emotional dysregulation, medication usage, food intake, sleep, heart rate, step count, and location. A primary objective is to build a more empowered ecosystem of communication within schools regarding how health data may be shared with clinicians.

The approach is grounded in extending the experience sampling method (ESM), a research technique used in psychology and other fields to study individuals’ experiences, behaviours, and thoughts in real time, as they occur in their natural environment.

For this project, location serves as the primary use case due to its personal and sensitive nature. We want to explore whether visualisation of data sharing could help children decide the extent to which they want to share their location data, or any other data.
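
As one illustration of how graded sharing might work (a sketch of our own, not the project’s actual mechanism), location data could be coarsened before it leaves the child’s data store, trading precision for privacy in steps the child can see and choose between:

```python
def redact_location(lat: float, lon: float, level: str) -> tuple[float, float]:
    # Fewer decimal places give a coarser location: roughly 1 m at 5 d.p.,
    # ~110 m at 3 d.p., ~11 km at 1 d.p., ~110 km at 0 d.p.
    precision = {"exact": 5, "street": 3, "town": 1, "region": 0}[level]
    return round(lat, precision), round(lon, precision)

print(redact_location(51.7520, -1.2577, "street"))  # (51.752, -1.258)
print(redact_location(51.7520, -1.2577, "region"))  # (52.0, -1.0)
```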

Privacy-preserving Decentralised Information Filtering

This work is based on our SolidFlix project, a Solid-based application that allows friends to share movie interests by storing this information in their individual pods. The movie recommendation algorithm currently used by SolidFlix is content-based, whereas a collaborative filter could provide more personalised recommendations by suggesting movies based on what a user’s friends are interested in watching.

However, conventionally, this kind of recommendation algorithm requires centralised access to all users’ data. The challenge lies in supporting collaborative recommendations without compromising the decentralised architecture and our commitment to preserving users’ data privacy.

The approach taken by the project team was to first compute similarities between users’ movie lists, and then generate recommendations. In the first step, a MinHash signature is created for each user’s movie list and stored locally in their Solid pod. Using these hashes, users can then be categorised into distinct buckets; individuals within the same bucket are considered similar and thus receive identical recommendations.

In the context of movie recommendations, when a user, Bob, seeks a recommendation, he fetches the min hashes from all his friends’ pods, which are used to deliver personalised recommendations. Bob can then request access to these movies from his friends.
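
The sketch below illustrates the MinHash idea under simplified assumptions (the movie lists are hypothetical, and the actual SolidFlix implementation differs in its details): the fraction of matching signature entries between two users estimates the Jaccard similarity of their movie lists, without either raw list leaving its owner’s pod.

```python
import hashlib

def minhash(items, num_hashes=64):
    # One signature entry per hash function: the minimum hash value
    # observed across the user's movie list.
    return [
        min(int(hashlib.sha256(f"{seed}:{item}".encode()).hexdigest(), 16)
            for item in items)
        for seed in range(num_hashes)
    ]

def estimated_similarity(sig_a, sig_b):
    # The fraction of agreeing entries approximates Jaccard similarity.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

alice = minhash({"Dune", "Arrival", "Interstellar"})
bob = minhash({"Dune", "Arrival", "Gravity"})
print(estimated_similarity(alice, bob))  # close to the true Jaccard score, 0.5
```

Bucketing users whose signatures largely agree is then a form of locality-sensitive hashing, which is what allows similar users to receive identical recommendations without comparing raw lists pairwise.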

There are several advantages to this approach. To begin with, collaborative filtering may be more feasible here because it does not rely on movie metadata, which is not always provided. The approach is also more scalable because it is built on pre-computed hashes, although it depends on users sharing their min hashes.

A more detailed technical description and a recorded presentation can be found in Dr Goel’s blog post.

Decentralised Scalable and Privacy Preserving Synthetic Data Generation

AI model development requires diverse datasets, but sharing real data can be problematic because of privacy concerns. Synthetic data offers a way to address this.

The objective of this project is to take a holistic approach to working with synthetic data, which requires organising how that data is curated. Various models for curating synthetic data exist, including a central differential privacy approach and a local differential privacy approach. The central approach assumes a trusted curator who collects individuals’ data and then generates the synthetic dataset; the local approach assumes that everyone adds noise locally before sending their data to the central curator. The disadvantage of the central approach is that it might be compromised, allowing someone to gain control of the collected datasets and breach their privacy; the disadvantages of the local approach are the potentially significant amount of noise introduced and the substantial local computational capability required.
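
The trade-off between the two models can be seen in a small numerical sketch (our own illustration, not the project’s protocol), estimating a simple count over 1,000 users:

```python
import numpy as np

rng = np.random.default_rng(42)
values = rng.integers(0, 2, size=1000)  # each user's private yes/no answer
epsilon = 1.0

# Central DP: a trusted curator sees the raw data and adds noise once.
central_estimate = values.sum() + rng.laplace(scale=1 / epsilon)

# Local DP: each user randomises their own answer before sharing it
# (randomised response), so the curator never sees raw data.
p = np.exp(epsilon) / (np.exp(epsilon) + 1)  # keep the true answer w.p. p
reported = np.where(rng.random(values.size) < p, values, 1 - values)
local_estimate = (reported.sum() - values.size * (1 - p)) / (2 * p - 1)

print(f"true count: {values.sum()}")
print(f"central DP: {central_estimate:.1f}")  # error on the order of 1
print(f"local DP:   {local_estimate:.1f}")    # error on the order of tens
```

Broadly, multi-party computation aims to combine the best of both: accuracy close to the central model without any single party ever holding the raw data.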

The approach explored in this project involves curating data from Solid users, who can determine whether they participate in the synthetic data generation process. Importantly, the architecture is based on Solid pods enhanced with a multi-party computation protocol to preserve the security and privacy of this process. Initial results show promising performance, and further details about the approach can be found in the arXiv paper.

A Solid-based approach to help workers manage their data

This project continues last year’s efforts, and its key objective is to determine how we can better manage incompatible datasets across different gig-work contexts and platforms. This is a crucial challenge because gig workers regularly face the task of managing data from different, incompatible platforms. To address this issue, we propose a solution called “Frankenstein drivers”. The goal is to experiment with different methods of managing gig-worker data across diverse contexts using the Solid protocol. Central to this solution is the use of an embedding model that matches semantic information, with an LLM serving as a data-wrangling tool to extract information from different sources and create meaningful visualisations. This transformation has significantly increased the productivity of a previously manual process, and the team is exploring the possibility of establishing a direct integration between LLMs and Solid pods.
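
The report does not specify the models used, but the general idea of embedding-based schema matching can be sketched as follows, with hypothetical field names from two platforms; the sentence-transformers model named here is just one common choice.

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical field names from two gig platforms with incompatible schemas.
platform_a = ["trip_earnings", "pickup_time", "miles_driven"]
platform_b = ["payout", "start_timestamp", "distance_km", "tip_amount"]

model = SentenceTransformer("all-MiniLM-L6-v2")
emb_a = model.encode(platform_a, convert_to_tensor=True)
emb_b = model.encode(platform_b, convert_to_tensor=True)

# For each field on platform A, find the semantically closest field on B.
scores = util.cos_sim(emb_a, emb_b)
for i, name in enumerate(platform_a):
    j = scores[i].argmax().item()
    print(f"{name} -> {platform_b[j]} (cosine {float(scores[i][j]):.2f})")
```

Once fields are aligned, an LLM can handle the messier wrangling steps, such as reconciling units or free-text values.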

This wide range of summer projects produced rich results, and we hope the work will continue, with the aim of building a community around these topic areas and integrating the work into the core EWADA pipeline. We thank Sydney C., Yushi Y., Vishal R., and Vid V. for their contributions, and Jake Stein, Rui Zhao, Naman Goel and Jun Zhao for their supervision.

Welcome our new EWADA DPhil


by: Jun Zhao

 
01 Oct 2023

We are really excited to welcome our new full-time EWADA DPhil student joining the project: Jesse Wright.

Jesse was previously a software engineer at Inrupt, a forward-thinking start-up creating data infrastructure software that enables enterprises and governments to deploy and manage Solid-compliant solutions.

Jesse is fully funded by the prestigious Oxford Computer Science Departmental Studentship. His research will explore how to enable trust reasoning in decentralised settings in order to empower true data autonomy for users.

Jesse is co-supervised by Professor Nigel Shadbolt and Dr Jun Zhao.

EWADA second year project meeting


by: Jun Zhao

 
22 May 2023

On 22 May 2023, EWADA held its second annual project meeting, attended by 14 project members and affiliates.

We had an exciting set of discussions about our research progress over the last year, covering (1) privacy-preserving computation with Solid; (2) decentralised data governance structures for gig workers; (3) design considerations for supporting the expression of data terms of use; (4) social-behavioural challenges in empowering users’ digital autonomy and self-determination; and finally (5) the integration of more advanced AI computations with Solid.

Some of these investigations deepen or extend work we started last year, while others are new directions and perspectives into which we are expanding, built on the foundational understanding and technical capabilities we created last year.

We look forward to welcoming several summer interns to the team in the next few months, to further explore some of the open challenges above (particularly items 3-5). We also hope to share some ongoing investigations via public blog posts or reports, to bootstrap community building.

If you are interested in learning more about any of these activities, please do not hesitate to get in touch with the EWADA team.

Welcome our new EWADA researcher


by: Jun Zhao

 
24 Apr 2023

We are really excited to welcome a new full-time EWADA Research Associate joining the project: Dr Samantha-Kaye Johnston.

Dr Johnston comes from a psychology and education science background. She is currently a Supernumerary Fellow in Education at Jesus College, and her extensive experience in qualitative and quantitative research in the context of EdTech will undoubtedly be a great asset to the EWADA project.

Further details about Sam can be found on her college web page.