EWADA third-year project meeting

by: Jun Zhao

 
26 Apr 2024

We had our third annual project meeting on 23 April, attended by 16 project members and affiliates. During the meeting, we had an exciting set of discussions about our research progress over the past year, including

  • SocialGenPod: a privacy-friendly generative AI social web application
  • Various ongoing work related to fairness in decentralised ML
  • The latest perennial data terms of use vocabulary and protocols
  • The ongoing digital autonomy machine experiment, and
  • Demos of the SolidFitness app and Solid-based social app

While this is not a representation of all the work carried out by EWADA last year, the discussions reflected balanced investigations by the team into both the technical and social aspects. Particularly notable were the two demos, given by our fourth-year students, which showed great promise of the power of data autonomy that a Solid-like architecture can enable.

Some of this work is already featured in the papers shared on our website, and we will publish technical notes for the two demos.

In the next few months, we look forward to welcoming several summer interns to join the team and continue to build up our strength in enabling a privacy-preserving, autonomous, decentralised web architecture.

[NEW NATURE PAPER] AI ethics are ignoring children, say Oxford Martin researchers

A report of the Nature Machine Intelligence publication

by: Jun Zhao

 
20 Mar 2024

In a perspective paper published in Nature Machine Intelligence, the authors highlight that although there is a growing consensus around what high-level AI ethical principles should look like, too little is known about how to effectively apply them in practice for children. The study mapped the global landscape of existing ethics guidelines for AI and identified four main challenges in adapting such principles for children’s benefit:

  • A lack of consideration for the developmental side of childhood, especially the complex and individual needs of children, age ranges, development stages, backgrounds, and characters.
  • Minimal consideration for the role of guardians (e.g. parents) in childhood. For example, parents are often portrayed as having more experience than children, when the digital world may require rethinking this traditional role of parents.
  • Too few child-centred evaluations that consider children’s best interests and rights. Quantitative assessments are the norm when assessing issues like safety and safeguarding in AI systems, but these tend to fall short when considering factors like the developmental needs and long-term wellbeing of children.
  • Absence of a coordinated, cross-sectoral, and cross-disciplinary approach to formulating ethical AI principles for children that are necessary to effect impactful practice changes.

The researchers also drew on real-life examples and experiences when identifying these challenges. They found that although AI is being used to keep children safe, typically by identifying inappropriate content online, there has been a lack of initiative to incorporate safeguarding principles into AI innovations, including those supported by Large Language Models (LLMs). Such integration is crucial to prevent children from being exposed to biased content based on factors such as ethnicity, or to harmful content, especially for vulnerable groups, and the evaluation of such methods should go beyond mere quantitative metrics such as accuracy or precision. Through their partnership with the University of Bristol, the researchers are also designing tools to help children with ADHD, carefully considering their needs and designing interfaces to support their sharing of data with AI-related algorithms, in ways that are aligned with their daily routines, digital literacy skills and need for simple yet effective interfaces.

In response to these challenges, the researchers recommended:

  • increasing the involvement of key stakeholders, including parents and guardians, AI developers, and children themselves;
  • providing more direct support for industry designers and developers of AI systems, especially by involving them more in the implementation of ethical AI principles;
  • establishing legal and professional accountability mechanisms that are child-centred; and
  • increasing multidisciplinary collaboration around a child-centred approach involving stakeholders in areas such as human-computer interaction, design, algorithms, policy guidance, data protection law and education.

Dr Jun Zhao, Oxford Martin Fellow, Senior Researcher at the University’s Department of Computer Science, and lead author of the paper, said:

‘The incorporation of AI in children’s lives and our society is inevitable. While there are increased debates about who should ensure technologies are responsible and ethical, a substantial proportion of such burdens falls on parents and children to navigate this complex landscape.

‘This perspective article examined existing global AI ethics principles and identified crucial gaps and future development directions. These insights are critical for guiding our industries and policymakers. We hope this research will serve as a significant starting point for cross-sectoral collaborations in creating ethical AI technologies for children and global policy development in this space.’

The authors outlined several ethical AI principles that would especially need to be considered for children. They include ensuring fair, equal, and inclusive digital access, delivering transparency and accountability when developing AI systems, safeguarding privacy and preventing manipulation and exploitation, guaranteeing the safety of children, and creating age-appropriate systems while actively involving children in their development.

Professor Sir Nigel Shadbolt, co-author, Director of the EWADA Programme, Principal of Jesus College Oxford and a Professor of Computing Science at the Department of Computer Science, said:

‘In an era of AI powered algorithms, children deserve systems that meet their social, emotional, and cognitive needs. Our AI systems must be ethical and respectful at all stages of development, but this is especially critical during childhood.’

Read ‘Challenges and opportunities in translating ethical AI principles into practice for children’ in Nature Machine Intelligence

Repost of link.

by: Jun Zhao

 
19 Mar 2024

EWADA researchers are trying to better understand people’s values over who manages the sharing of their personal information online through an expansive research project.

The Digital Autonomy Machine Experiment aims to explore how the public would like to exercise their autonomy when it comes to managing their data. It is specifically investigating whether people would like to manage their personal information independently, through a trusted organisation (data trust), or through a semi- or fully-automated system.

Dr Samantha-Kaye Johnston, research lead of the Digital Autonomy Machine Experiment and Research Associate at EWADA, said of the research’s importance: ‘True empowerment starts with awareness, especially in the digital age where critical thinking about personal data management is crucial. Digital autonomy is about giving people a choice in the consent mechanisms that underpin the sharing of their data in digital spaces. At the heart of the Digital Autonomy Machine Experiment is our commitment to providing the public with opportunities to shape how their data is handled in the age of AI.’

Underpinning the Digital Autonomy Machine Experiment is the concept that an individual’s scattered data can be gathered and consolidated in a secure space called a Personal Online Data Store, or Solid Pod. Developed by Sir Tim Berners-Lee – inventor of the World Wide Web, Professorial Research Fellow and director of EWADA – a Solid Pod can accommodate various pieces of data, such as contacts, files, photos, and everything else about a particular person. The individual can then decide who has access to that data and even what information gets shared. In other words, they have absolute autonomy over what to share, with whom, and what to receive, and they can retract such permissions at any time.
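
The grant-and-retract model described above can be sketched roughly as follows. This is only an illustrative toy (real Solid Pods use Web Access Control or ACP documents; the `Pod` class and its methods here are hypothetical):

```python
class Pod:
    """Toy model of per-resource access grants in a personal data store.
    Real Solid Pods express these permissions in Web Access Control /
    ACP documents; this class only illustrates the grant-and-retract idea."""

    def __init__(self):
        self.resources = {}   # resource name -> stored data
        self.grants = {}      # resource name -> set of agents allowed to read

    def store(self, name, data):
        self.resources[name] = data
        self.grants.setdefault(name, set())

    def grant(self, name, agent):
        self.grants[name].add(agent)          # owner allows an agent to read

    def retract(self, name, agent):
        self.grants[name].discard(agent)      # owner can revoke at any time

    def read(self, name, agent):
        if agent in self.grants.get(name, set()):
            return self.resources[name]
        raise PermissionError(f"{agent} may not read {name}")


pod = Pod()
pod.store("contacts", ["alice", "bob"])
pod.grant("contacts", "photo-app")
data = pod.read("contacts", "photo-app")   # allowed while the grant stands
pod.retract("contacts", "photo-app")       # permission withdrawn by the owner
```

The key point the sketch captures is that the owner, not the application, holds the grant table, so revocation takes effect immediately for every future read.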

Image: Solid Pods could provide a secure space for individuals to consolidate their various data and decide who can access it. Image credit: napong rattanaraktiya, Getty Images.

However, the researchers also understand that autonomy can mean different things to different people, which is why the Digital Autonomy Machine Experiment was launched. ‘We’re thrilled to invite public opinions worldwide to influence the development of Solid Pods, aligning with our goal of fostering digital autonomy,’ said Dr Samantha-Kaye Johnston.

Sir Tim Berners-Lee, Professorial Research Fellow and director of EWADA, said of Solid Pods, which aim to help create a better internet as part of his Solid protocol: ‘Solid Pods re-organise the global data infrastructure by placing individuals at the centre of their data storage, shifting control away from both applications and centralised data monopolies. With Solid Pods, each individual has a personal data repository, enabling them to dictate access and reverse the current power dynamic. This new model not only fosters cross-platform collaboration but also grants individuals the autonomy to leverage their data for personal insights and benefits.’

‘EWADA is uniquely positioned to produce ground-breaking technologies to empower everyone’s data autonomy. However, it’s essential to recognise that preferences regarding the exercise of data autonomy can vary significantly based on cultural contexts. This global experiment will provide the critical insights to inform the design of our technologies and ensure the inclusivity and equality that is central to the vision of EWADA’, said Dr Jun Zhao, research lead of the EWADA project, Oxford Martin Fellow and Senior Researcher at Oxford University’s Department of Computer Science.

The project hopes to engage up to 1 million adults (aged 18 and above) across the world in taking a carefully designed 10-minute survey. Participants are invited to thoughtfully consider their values regarding the process used to manage personal information in each fictional scenario presented in the survey.

The results of the research will provide critical inputs to inform the development of technology in EWADA that respects people’s data autonomy preferences in digital environments and ultimately ensures the internet is a safer, more empowered place.

Take the survey on the Digital Autonomy Machine Experiment website.

Repost of link.

Four major research papers from EWADA accepted for publication

Nature Machine Intelligence, CHI2024 and WWW2024

by: Jun Zhao

 
23 Jan 2024

We are thrilled to announce that the EWADA Team has achieved significant success, with four major research papers accepted for publication by Nature Machine Intelligence, CHI2024, and WWW2024. These prestigious academic venues are highly competitive, and our researchers have put in tremendous effort to achieve these outstanding results. The papers cover a diverse range of topics, including the research agenda for supporting child-centered AI, the development and assessment of new ways to enhance families’ critical thinking regarding datafication, children’s data autonomy, and users’ ability to navigate data terms of use in decentralized settings.

Ge Wang, Jun Zhao, Max Van Kleek and Nigel Shadbolt. Challenges and opportunities in translating ethical AI principles into practice for children. Nature Machine Intelligence. To appear

Led by Tiffany Ge and Dr Jun Zhao, the perspective paper discusses the current global landscape of ethics guidelines for AI and their correlation with children. The article critically assesses the strategies and recommendations proposed by current AI ethics initiatives, identifying the critical challenges in translating such ethical AI principles into practice for children. The article provides timely and crucial recommendations regarding embedding ethics into the development and governance of AI for children.

Ge Wang, Jun Zhao, Max Van Kleek and Nigel Shadbolt. KOALA Hero Toolkit: A New Approach to Inform Families of Mobile Datafication Risks. CHI 2024. Overall acceptance rate 26.3%. To appear

This is the final evaluation study of the KOALA Hero research project, led by Dr Jun Zhao and partially supported by EWADA. In this work we present a new hybrid toolkit, KOALA Hero, designed to help children and parents jointly understand the datafication risks posed by their mobile apps. Through user studies involving 17 families, we assess how the toolkit influenced families’ thought processes, perceptions and decision-making regarding mobile datafication risks. Our findings show that KOALA supports families’ critical thinking and promotes family engagement, providing timely inputs on global efforts aimed at addressing datafication risks and underscoring the importance of strengthening legislative and policy enforcement of ethical data governance.

This work has also contributed to Dr Zhao’s discussion paper to be published by the British Academy, jointly authored with Dr Ekaterina Hertog from the Oxford Internet Institute and Ethics in AI Institute and Professor Netta Weinstein from the University of Reading.

Ge Wang, Jun Zhao, Max Van Kleek and Nigel Shadbolt. CHAITok: A Proof-of-Concept System Supporting Children’s Sense of Data Autonomy. CHI 2024. Overall acceptance rate 26.3%. To appear

A core part of EWADA’s mission, CHAITok explores children’s ‘sense of data autonomy’. In this paper, we present CHAITok, a Solid-based Android mobile application designed to enhance children’s sense of autonomy over their data on social media. Through 27 user study sessions with 109 children aged 10–13, we offer insights into the current lack of data autonomy among children regarding their online information, and into how we can foster children’s sense of data autonomy through a socio-technical journey. Our findings provide crucial insights into children’s values, how we can better support children’s evolving autonomy, and how to design for children’s digital rights. We emphasize data autonomy as a fundamental right for children, and call for further research, design innovation, and policy changes on this critical issue.

Rui Zhao and Jun Zhao. Perennial Semantic Data Terms of Use for Decentralized Web. WWW 2024. Overall acceptance rate 20.2%. To appear.

Our latest research article addresses a significant challenge in decentralized Web architectures, such as Solid, specifically focusing on how to help users navigate numerous applications and decide which applications can be trusted with access to their data Pods.

Currently, this process often involves reading lengthy and complex Terms of Use agreements, which users often find daunting or simply ignore. This compromises user autonomy and impedes the detection of data misuse. To address this issue, EWADA researchers have developed a novel formal description of Data Terms of Use (DToU), along with a DToU reasoner. Users and applications can specify their own parts of the DToU policy with local knowledge, covering permissions, requirements, prohibitions and obligations. Automated reasoning verifies compliance and also derives policies for output data. This constitutes a perennial DToU language, where policy authoring occurs only once, allowing ongoing automated checks across users, applications and activity cycles. Our solution has been successfully integrated into the Solid framework with promising performance results. We believe this work demonstrates the practicality of a perennial DToU language and the potential for a paradigm shift in how users interact with data and applications in a decentralized Web, offering both improved privacy and usability.
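
To give a loose flavour of the idea (this is not the formal DToU language from the paper; the `Policy` structure and function names below are hypothetical simplifications), a policy can be thought of as machine-readable sets of permissions, prohibitions and obligations that a reasoner checks automatically, with derived data inheriting the user’s terms:

```python
from dataclasses import dataclass, field

# Hypothetical, heavily simplified stand-in for a Data Terms of Use policy.
# The real DToU language is a formal semantic vocabulary with automated
# reasoning over users' and applications' policy fragments.
@dataclass
class Policy:
    permissions: set = field(default_factory=set)   # purposes the user allows
    prohibitions: set = field(default_factory=set)  # purposes the user forbids
    obligations: set = field(default_factory=set)   # duties the app must accept

def complies(user_policy: Policy, app_request: set, app_accepts: set) -> bool:
    """Check an app's requested purposes against a user's policy."""
    if app_request & user_policy.prohibitions:
        return False                                  # asks for a forbidden purpose
    if not app_request <= user_policy.permissions:
        return False                                  # asks beyond what is permitted
    return user_policy.obligations <= app_accepts     # app must accept all duties

def derive_output_policy(user_policy: Policy) -> Policy:
    """Data derived from the Pod inherits the same terms (the 'perennial' idea:
    the policy is authored once and travels with the data)."""
    return Policy(set(user_policy.permissions),
                  set(user_policy.prohibitions),
                  set(user_policy.obligations))


policy = Policy(permissions={"research", "recommendation"},
                prohibitions={"advertising"},
                obligations={"delete-after-30-days"})
ok = complies(policy, {"recommendation"}, {"delete-after-30-days"})   # permitted
bad = complies(policy, {"advertising"}, {"delete-after-30-days"})     # prohibited
```

The point of the check being automated is exactly what the paper argues: the user authors the terms once, and compliance is verified mechanically for every new application and activity cycle.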

All papers are currently in preparation for the camera-ready stage. Once finalised, you can find them on our publication page. We welcome your feedback and any follow-up questions.

EWADA Summer 2023 Internship Report

A summary of the four projects carried out

by: Jun Zhao

 
05 Dec 2023

Summer 2023 marked the third year of our highly successful internship programme. We were delighted to host four outstanding interns, along with a master’s student who conducted their graduate project with us. Each student made significant contributions to EWADA, and this report provides a summary of the key outcomes from these projects.

Overview of the projects

The four projects addressed various challenges aligned with EWADA’s core vision, including:

  • A Solid-based application designed to assist families in managing children’s health data
  • Extending our previous research on privacy-preserving computation with an ability to generate privacy-preserving synthetic data
  • Extending our earlier work on decentralised recommendation algorithms with an ability to generate privacy-preserving movie recommendations
  • Extending our prior research on supporting gig workers with a Solid-based approach to help workers manage their data

A Solid-based application to assist families in managing children’s health data

The project aimed to ensure that children, especially those with ADHD, can exercise better control over the sharing of their data within an ecosystem involving parents/guardians, teachers, the broader school community, and clinicians or hospital staff. This is a crucial challenge because, currently, parents/guardians are the sole stakeholders with access to children’s information, determining how the data is accessed and shared. The project therefore sought to explore a new model, in which children are equipped with smartwatches and parents/guardians can examine the data through smartphones.

The project focused on building an architecture on top of Solid to collect, store and synchronise data generated by children’s smartwatches. It provides a web interface that allows a child with ADHD to share data and control the extent of the information shared with requesting stakeholders. Different types of data can be collected, including emotional dysregulation, medication usage, food intake, sleep, heart rate, step count, and location. A primary objective is to build a more empowered ecosystem of communication within schools regarding how health data may be shared with clinicians.

The approach is grounded in extending the experience sampling method (ESM), a research technique used in psychology and other fields to study individuals’ experiences, behaviours, and thoughts in real-time, as they occur in their natural environment.

For this project, location serves as the primary use case data due to its personal and sensitive nature. We want to explore whether visualization of data sharing could help children decide the extent to which they want to share their location data or any other data.

Privacy-preserving Decentralised Information Filtering

This work is based on our SolidFlix project, which is a Solid-based application allowing friends to share movie interests by storing this information in their individual pods. The movie recommendation algorithm used by SolidFlix is content-based, whereas a collaborative filter could provide more personalised recommendations by suggesting movies based on what a user’s friends are interested in watching.

However, conventionally, this kind of recommendation algorithm requires centralised access to all users’ data. The challenge lies in supporting collaborative recommendations without compromising the decentralised architecture and our commitment to preserve users’ data privacy.

The approach taken by the project team was to first compute similarities between users’ movie lists, and then generate recommendations. In the first step, a hash is created for each user’s movie list and stored locally in their Solid pod. Using these hash codes, users can be categorised into distinct buckets; individuals within the same bucket are considered similar and thus receive identical recommendations.

In the context of movie recommendations, when a user, Bob, seeks a recommendation, he fetches the min hashes from all his friends’ pods, which will trigger the delivery of personalised recommendations. Bob can then request access to these movies from friends.

There are several advantages to this approach. To begin with, collaborative filtering may be more feasible here because it does not rely on movie metadata, which is not always provided. The approach is also more scalable because it is built on pre-computed hashes, although it depends on users sharing their min hashes.
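
The bucketing idea above can be sketched as follows. This is an illustrative reconstruction, not the project’s actual implementation: a small MinHash signature is computed locally over each user’s movie list, and users whose signatures collide fall into the same bucket:

```python
import hashlib

def minhash_signature(movie_ids, num_hashes=4):
    """Compute a small MinHash signature for a set of movie IDs.
    Each component is the minimum of a salted hash over the set, so
    similar sets are likely to share signature components."""
    sig = []
    for salt in range(num_hashes):
        sig.append(min(
            int(hashlib.sha256(f"{salt}:{m}".encode()).hexdigest(), 16)
            for m in movie_ids))
    return tuple(sig)

def bucket_users(user_lists, num_hashes=4):
    """Group users whose full signatures match into the same bucket;
    everyone in a bucket would receive identical recommendations."""
    buckets = {}
    for user, movies in user_lists.items():
        key = minhash_signature(movies, num_hashes)
        buckets.setdefault(key, []).append(user)
    return buckets


users = {
    "alice": {"m1", "m2", "m3"},
    "bob":   {"m1", "m2", "m3"},   # identical list -> same bucket as alice
    "carol": {"m9"},               # different taste -> different bucket
}
buckets = bucket_users(users)
```

Because each signature is computed locally and only the signature needs to be shared from the pod, a friend like Bob can fetch signatures rather than raw movie lists, which matches the privacy goal described above.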

A more detailed technical description and a recorded presentation can be found in Dr Goel’s blog post.

Decentralised Scalable and Privacy Preserving Synthetic Data Generation

For AI model development, we require more diverse datasets. However, sharing real data can be problematic because of privacy concerns. Synthetic data offers a way to address this.

The objective of this project is to take a holistic approach to working with synthetic data; however, the curation of this data needs to be organised. Various models for curating synthetic data exist, including central and local differential privacy approaches. The central approach assumes a trusted curator who collects individual data and then generates the synthetic dataset; the local approach assumes that everyone adds noise locally before sending their data to the central curator. The disadvantage of the central approach is that the curator might be compromised, since anyone who gains control of the collected datasets can compromise their privacy; the disadvantage of the local approach is that it can introduce a significant amount of noise and requires substantial local computational capability.
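
A minimal sketch of the local approach described above (illustrative only; the project’s actual protocol combines Solid pods with multi-party computation) is for each user to perturb a value with Laplace noise before it leaves their device, so the curator only ever sees noisy contributions:

```python
import math
import random

def local_dp_release(value, epsilon=0.5, sensitivity=1.0):
    """Add Laplace noise locally before sending a value to the curator.
    Smaller epsilon means stronger privacy but more noise."""
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) via the inverse-CDF method.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return value + noise


# Each user perturbs their own value; the curator averages the noisy
# reports, and the noise roughly cancels as the population grows.
random.seed(0)
true_values = [1.0] * 10_000
noisy = [local_dp_release(v, epsilon=1.0) for v in true_values]
estimate = sum(noisy) / len(noisy)   # close to 1.0 for large n
```

The sketch also makes the stated trade-off concrete: with a small population or a small epsilon, the variance of the added noise dominates the aggregate, which is exactly why local approaches need either many participants or more local computation.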

The approach explored in this project involves curating data from Solid users, with users having the ability to determine their participation in the synthetic data generation process. Importantly, the architecture is based on Solid pods enhanced with a multi-party computation protocol to preserve the security and privacy of this process. Initial results show promising performance, and further details about the approach can be found in the arxiv paper.

A Solid-based approach to help workers manage their data

This project continues last year’s efforts, and its key objective is to determine how we can better manage incompatible datasets across different gig-worker contexts and platforms. This is a crucial challenge because gig workers regularly face the task of managing data from different, incompatible platforms. To address this issue, we propose a solution called “Frankenstein drivers”. The goal is to experiment with different methods of managing gig-worker data across diverse contexts using the Solid protocols. Central to this solution is the use of an embedding model that matches semantic information, and an LLM as a data-wrangling tool to extract information from different sources and create meaningful visualisations. This transformation has significantly increased the productivity of a previously manual process, and the team is exploring the possibility of establishing a direct integration between LLMs and Solid pods.

This wide range of summer projects produced rich results, and we hope the work will continue, with the aim of building a community around these topic areas and integrating this work into the core EWADA pipeline. We thank Sydney C., Yushi Y., Vishal R. and Vid V. for their contributions, and Jake Stein, Rui Zhao, Naman Goel and Jun Zhao for their supervision.

Welcome our new EWADA DPhil

by: Jun Zhao

 
01 Oct 2023

We are really excited to welcome our new full-time EWADA DPhil joining the project - Jesse Wright.

Jesse was previously a software engineer at Inrupt, a forward-thinking start-up creating data infrastructure software that enables enterprises and governments to deploy and manage Solid-compliant solutions.

Jesse is fully-funded by the prestigious Oxford Computer Science Departmental Studentship. His research will explore how to enable trust reasoning in the decentralised setting in order to empower true data autonomy for the users.

Jesse is co-supervised by Professor Nigel Shadbolt and Dr Jun Zhao.

EWADA second-year project meeting

by: Jun Zhao

 
22 May 2023

On 22 May 2023, EWADA had our second annual project meeting, attended by 14 project members and affiliates.

We had an exciting list of discussions about our recent research progress over the last year, related to (1) privacy-preserving computation with Solid; (2) decentralised data governance structure for gig workers; (3) design considerations for supporting the expression of data terms of use; (4) social-behavioural challenges for empowering users’ digital autonomy and self-determination; and finally (5) integration of more advanced AI computations with Solid.

Some of these research investigations deepen or extend work we started last year, while others are new directions and perspectives that we are expanding into, built on the foundational understanding and technical capabilities we created last year.

In the next few months, we look forward to welcoming several summer interns to join the team and further explore some of the open challenges above (particularly items 3-5). We also hope to share some of these ongoing investigations via public blog posts or reports to bootstrap community building.

If you are interested in learning more about any of these activities, please do not hesitate to get in touch with the EWADA team.

Welcome our new EWADA researcher

by: Jun Zhao

 
24 Apr 2023

We are really excited to welcome a new full-time EWADA RA joining the project - Dr Samantha-Kaye Johnston.

Dr Johnston comes from a psychology and education science background. She is currently a Supernumerary Fellow in Education at Jesus College, and her extensive experience in qualitative and quantitative research in the context of EdTech will undoubtedly be a great asset to the EWADA project.

Further details about Sam can be found on her college web page.

EWADA project summer internships

EWADA project offers three summer internship positions in 2023

by: Jun Zhao

 
11 Apr 2023

We are very excited to announce three summer internships in 2023!

Please join us if you want to develop privacy-friendly AI, with a group of world-leading computer scientists!

Detailed job description

You will be working as part of the Oxford Martin School programme EWADA (Ethical Web and Data Infrastructure in the Age of AI) [1]. Data-driven algorithms are positively changing every walk of life. However, from simple data aggregation algorithms for drawing collective insights to more advanced machine learning algorithms, all involve computations that are currently performed using centralised access to users’ data. During the internship, you will be responsible for building scalable systems to perform privacy-preserving artificial intelligence (AI) computations in decentralized personal data architectures, contributing to the creation of a more ethical AI ecosystem. Specifically, you will build such AI systems and algorithms on top of the Solid (Social Linked Data) architecture [2]. Interns will demonstrate the practical significance of their work in application use cases, and we have a range of these for the internship, including but not limited to:

  • Personalised Recommender Systems
  • Large Language Models like GPT
  • Open (Health) Data
  • Algorithmic Fairness and Transparency

Background of the project

EWADA is an ambitious 3-year programme that aims to reform the concentration of power on the Web by developing and deploying new forms of technical and legal infrastructure. The project is led by Prof Sir Nigel Shadbolt and Prof Sir Tim Berners-Lee and aims to investigate novel re-decentralisation architectures and develop privacy-preserving AI methods to re-establish citizens’ self-autonomy on the Web.

Selection criteria

You must have hands-on programming experience with machine learning, strong problem-solving skills and a demonstrated passion for building large-scale systems and performing comprehensive empirical evaluations. You either have prior experience or are interested and willing to learn quickly about privacy-preserving techniques like multi-party computation and Solid ecosystems. Successful candidates are also expected to be able to work independently.

Application

The post is expected to be full-time (36.5 hours) for 12 weeks, starting mid-July 2023 and ending in September 2023, £14.09 - £15.66 (Grade 3.8 - 4.7) per hour, depending on experience. If you are a student holding a Tier 4 visa, then you are permitted to work full-time for 8 weeks, plus 4 weeks part-time (max 20 hrs per week).

The post does not have to be based in Oxford but is subject to the right to work in the UK. We cannot sponsor visa applications due to the short duration of the project.

Applications should be submitted to the Human Resources Department at hr@cs.ox.ac.uk with a resume or CV. A short paragraph on your background, interests and motivation to apply will also be helpful.

The subject of the email should be: “Internship Application for Privacy-Preserving AI in Decentralized Personal Data Architectures”

The closing date for applications is noon on Friday 16th June 2023. Candidates will be shortlisted and invited for an interview in late June.

Selection criteria

Essential

  • Fundamental understanding and hands-on experience with implementing machine learning.
  • Proficiency in Python and ability to work with Linux-based Operating Systems.
  • The ability and desire to learn about Solid, to quickly acquire domain expertise needed for effectively developing new systems on top of Solid.
  • The ability to communicate information clearly, including technical content.
  • The ability to work independently and think creatively.
  • The ability to effectively manage time, to complete projects efficiently.

Desirable

  • Experience with deep learning.
  • Experience in privacy-friendly techniques like multi-party computation, homomorphic encryption, differential privacy, federated learning etc.
  • Experience with distributed and decentralized systems.
  • Familiarity with basic cryptographic techniques.
  • Excellent writing and presentation skills.

[1] https://www.oxfordmartin.ox.ac.uk/ethical-web-and-data-architectures/

[2] https://solidproject.org

First Solid Workshop at Oxford CS

An introduction by Tim about the web and solid

by: Jun Zhao

 
04 Mar 2023

On 4 March 2023, we were excited to have Sir Tim Berners-Lee host a short introductory workshop about Solid for a small group of undergraduate and postgraduate students in Oxford CS who will be doing Solid-related student projects in Trinity Term or the 2023/24 academic year.

This is the first year we have offered Solid-related projects at our department for undergraduate and MSc students, and they have been extremely well received. We have received a great deal of interest from our students, who have shown a strong passion for exploring and building an alternative to the current platform-centric data ecosystem. However, during our initial tutorials, we realised that running an introductory workshop on Solid, its vision and its history would benefit all students and lay a good foundation for their projects. This first Solid hands-on workshop was created for that purpose.

We were very fortunate to have Tim open the workshop and provide the introduction to Solid. It was amazing to see Tim walk through the beginnings of Web 1.0, the journey from Web 2.0 (the social web) to the so-called Web 3.0 (the decentralised web), and how the stack of standard protocols underpinning these technologies has made it possible for us to have an open and interoperable World Wide Web.

Even for many of the EWADA researchers in the room, it was exciting to see how Solid is perceived to sit alongside existing standard protocols and be the nucleus of a Solid ecosystem, with all the possibilities of enabling data autonomy and creating a wave of new, ethical, and open data applications for governments, companies and individuals.

Following this informative introduction, we continued the workshop with some demonstrations of Solid-based web applications by both Prof Ruben Verborgh from Ghent University and the EWADA team.

In the second half of the workshop, we undertook a productive one-hour hands-on exercise with Solid. Everyone managed to create a WebID, log into the SolidFlix application built by EWADA, and create and share movie data with everyone in the room. Most excitingly, Tim suggested that we also take this opportunity to create a Solid chat room, to ease communication amongst this exciting team and relieve us from proprietary platforms.

We will run another public Solid workshop at Oxford CS in Trinity Term. All resources used in this workshop can be found below: