
EarComp 2025

6th International Workshop on Earable Computing
In conjunction with UbiComp 2025, Espoo, Finland
October 13, 2025

(EarComp 2024)
(EarComp 2023)
(EarComp 2022)
(EarComp 2021)
(EarComp 2019)

Overview

Sensory earables are increasingly becoming a mainstream compute platform, with the promise of fundamentally transforming personal-scale human sensing applications. Over the past few years, a number of research efforts in the ubiquitous computing domain have sought to achieve useful, engaging, and sometimes ambitious behavioural analytics with sensory earables, including studies of the human face; of emotion and stress; continuous monitoring of cardiovascular function, oxygen consumption, and blood flow; and tracking of eating episodes as well as dietary and swallowing activities. At the same time, we have started seeing commercial efforts such as Bragi's The Dash, Bose SoundSport, Jabra Elite Sport, and Sony Xperia Ear offering a music experience augmented with sensory services, including fitness tracking, real-time translation, and conversational agents. Naturally, earables are becoming an intensely interdisciplinary area of study, with many diverse applications spanning HCI, empathic communication, behavioural science, health and wellbeing, entertainment, education, and security.

However, as of today, earable computing lacks an academic forum to bring together researchers, practitioners, and design experts from academia and industry to discuss, share, and shape this exciting new area of research. To this end, we are organising the sixth workshop on Earable Computing, with the hope that it will serve as a catalyst for advancements in sensory earable technology as well as provide a clear sense of direction for the research community in this space.

Keynotes

Wearables 101: What Changes (and Doesn’t) as Your Data Scales


Heli Koskimäki
Abstract: Every startup has many stories. This talk tells Oura’s through its data and the algorithms built on it, as seen through my lens. I’ll show how scientific and product-facing approaches meet, and how evidence thresholds, ground truth, and validation shift when the target is user value rather than publication. We’ll look at what changes as signals, users, and expectations grow, and what doesn’t. The aim is a practical lens on scaling ring-based data, with algorithm highlights grounded in real user stories. I’ll also share what I wouldn’t change, from principles that held up at scale to the collaboration that makes it possible.

Bio: Heli Koskimäki, PhD, works as Senior Director, Future Physiology at Oura. Since joining the Oura Science Team in 2016, she has contributed to several core feature development projects, from nocturnal heart rate (HR) and heart rate variability (HRV) studies to sleep staging, chronotype detection, and period prediction. Currently, she is responsible for Oura's long-term roadmap planning from a physiological features perspective.

Earables to Systems: From Open Platforms to Real-World Impact


Michael Beigl
Abstract: Earable devices are emerging as the most pervasive wearable form factor, yet translating promising prototypes into robust systems remains a challenge. This keynote highlights how open, rigorously engineered platforms—exemplified by OpenEarable 2.0—together with transparent machine-learning pipelines can accelerate credible and reproducible progress in the field. We first survey the sensing landscape and available tooling, before turning to three application vignettes: health monitoring, novel interaction through tensor-tympani input, and multisensory audio with ultrasonic spheres. These examples illustrate both the opportunities and the gaps that arise when systems transition from controlled laboratory settings to everyday use. We further discuss the influence of the human body on sensor performance, the importance of privacy-preserving on-device inference, and the evidence needed for both consumer and clinical contexts. Finally, the talk outlines a community roadmap based on open datasets, open technology, and collaborative efforts to move earables from bespoke prototypes toward scalable, trustworthy systems.

Bio: Michael Beigl is Professor of Pervasive Computing Systems at the Karlsruhe Institute of Technology (KIT), Head of the TECO Research Laboratory, and Vice Dean of the Department of Computer Science. He received his M.Sc. and Ph.D. degrees from the University of Karlsruhe (now KIT). Previously, he was Professor at TU Braunschweig from 2006 to 2010, Visiting Associate Professor at Hide Tokuda Labs, Keio University, Japan in 2005, and Research Director of TECO, University of Karlsruhe, Germany from 2001 to 2005. Since 2014, he has led a national competence centre for big data and AI, the Smart Data Innovation Lab (SDIL), and the state competence centre for big data and AI in Baden-Württemberg, the Smart Data Solution Center (SDSC-BW), and since 2023 he has been co-spokesman of the HealthTech Centre at KIT. His research interests revolve around the fusion of humans and computers, with a special interest in wearable sensor/actuator systems, human-computer interaction, and the fusion of artificial and human intelligence with computer systems. Contact him at Michael.Beigl@kit.edu.

Program

Venue: TBA


08:45 - 09:00, Opening

09:00 - 10:00, Keynote 1: Heli Koskimäki, Oura - Wearables 101: What Changes (and Doesn’t) as Your Data Scales

10:00 - 10:30, Coffee Break

Session 1: Earables and HCI

10:30 - 10:45, Take a Seat: Stand-to-sit Analysis with Earables

Terry Fawden et al.

10:45 - 11:00, EarMag: In-Ear Magnetosensing for Jaw and Head Gesture-Based Human-Computer Interaction

Max van Ort et al.

Session 2: Earables for Sports

11:00 - 11:15, Narrative Feedback via Earable Interaction to Support Embodied Running Experiences

Yihan Dong et al.

11:15 - 11:30, Head Movement-Based Visual Distraction Detection in Cyclists

Sidhharth Balakrishnan et al.

11:30 - 11:45, Beyond Scores: Earables as Active Recovery Boosters for Longevity and Performance

Andrea Ferlini et al.

Session 3: Physiological Sensing

11:45 - 12:00, Earable-based Continuous Blood Pressure Monitoring via a Single-Point Flexible Sensor

Jiao Li et al.

12:00 - 14:00, Lunch and Networking

14:00 - 15:00, Keynote 2: Michael Beigl, KIT - Earables to Systems: From Open Platforms to Real-World Impact

Session 4: Novel Sensor Modalities in Earables

15:00 - 15:15, Automatic Sleep Staging with Wearable Single Channel In-Ear ExG

Philipp Lepold et al.

15:15 - 15:30, Synthesis of Ear-EEG from Scalp EEG Using Deep Learning Architectures

Tanuja Jayas et al.

15:30 - 15:45, Award Ceremony and Closing Remarks



All papers will be presented in 15-minute slots (10 minutes for the presentation and 5 minutes for Q&A).

Call For Papers

We will solicit three categories of papers.
  • Full papers (up to 6 pages including references) should report reasonably mature work with earables and are expected to demonstrate concrete and reproducible results, albeit possibly at limited scale.
  • Experience papers (up to 4 pages including references) should present extensive experiences with the implementation, deployment, and operation of earable-based systems. Desirable papers contain real data as well as descriptions of the practical lessons learned.
  • Short papers (up to 2 pages including references) are encouraged to report novel and creative ideas that have yet to produce concrete research results but are at a stage where community feedback would be useful.
  • Moreover, we will have a special submission category - "Dataset Paper" - soliciting a 1-2 page document describing a well-curated and labelled dataset collected with earables, ideally accompanied by the dataset itself.

    All papers must use the two-column ACM sigconf template, and all accepted papers (regardless of category) will be included in the ACM Digital Library. All papers will be digitally available through the workshop website and the UbiComp/ISWC 2025 Adjunct Proceedings. For each paper category, we will offer a "Best Paper" award, as well as a "Best Dataset" award, all sponsored by Nokia Bell Labs.


    Topics of interest (NOT an exhaustive list):
  • Acoustic Sensing with Earables
  • Kinetic Sensing with Earables
  • Multi-Modal Learning with Earables
  • Multi-Task Learning with Earables
  • Active Learning with Earables
  • Low-Power Sensing Systems for Earables
  • Authentication & Trust Mechanisms for Earables
  • Quality-Aware Data Collection with Earables
  • Experience Sampling with Earables
  • Crowd Sourcing with Earables
  • Novel UI and UX for Earables
  • Auditory Augmented Reality Applications with Earables
  • Lightweight Deep Learning on Earables
  • Tiny Machine Learning on Earables
  • Health and Wellbeing Applications of Earables
  • Emerging applications of Earables




    Submission & Timeline

    While the workshop will accept papers describing completed work as well as work-in-progress, the emphasis is on early discussion of novel and radical ideas (potentially of a controversial nature) rather than detailed description and evaluation of incremental advances.

    Submissions must be no longer than 6 pages (including references) for Full Papers, 4 pages (including references) for Experience Papers, and 2 pages (including references) for Short Papers and Dataset Papers, and must be in PDF format. Reviews will be double-blind: no names or affiliations should be included in the submission.

    The submission template can be downloaded from the ACM site.
    Alternatively, the Overleaf version can be found here.

    LaTeX documents should use the “sigconf” template style. Word users should use the interim template downloadable from the ACM site.

    Submission Site: https://new.precisionconference.com/submissions

    Submission Instructions: to select the appropriate track, choose "SIGCHI" as the Society, "Ubicomp/ISWC 2025" as the Conference, and, finally, "Ubicomp/ISWC 2025 EarComp" as the Track.

    • Submission Deadline: July 18, 2025 (extended from July 11)
    • Acceptance Notification: July 25, 2025
    • Camera Ready Deadline: July 29, 2025
    For any questions or concerns, get in touch at earcomp@esense.io.

    Organisers

    General Chairs
    Alessandro Montanari, Nokia Bell Labs Cambridge
    Andrea Ferlini, Nokia Bell Labs Cambridge

    Program Chairs
    Mathias Ciliberto, University of Cambridge
    Longfei Shangguan, University of Pittsburgh

    Steering Committee
    Fahim Kawsar, Nokia Bell Labs Cambridge
    Alessandro Montanari, Nokia Bell Labs Cambridge
    Andrea Ferlini, Nokia Bell Labs Cambridge

    Web, Publicity and Publication
    Jake Stuchbury-Wass, University of Cambridge

    Program Committee
    Tao Chen, Samsung Research America
    Van Fan, Google Research
    Marios Constantinides, CYENS Centre of Excellence
    Dong Ma, Singapore Management University
    Yang Liu, University of Cambridge
    Qiang Yang, University of Cambridge
    Yang Liu, Nokia Bell Labs Cambridge
    Khaldoon Al-Naimi, Nokia Bell Labs Cambridge
    Ashok Thangarajan, Nokia Bell Labs Cambridge
    Jay Prakash, Silence Laboratories, Singapore
    Wen Hu, UNSW Sydney
    Zhenyu Yan, Chinese University of Hong Kong
    Ananta Balaji, Nokia Bell Labs Cambridge
    Tobias Röddiger, Karlsruhe Institute of Technology
    Jake Stuchbury-Wass, University of Cambridge
    Mathias Ciliberto, University of Cambridge
    Shubham Jain, Stony Brook University