EarComp 2023

4th International Workshop on Earable Computing
In conjunction with UbiComp 2023, October 9, 2023

(EarComp 2022)
(EarComp 2021)
(EarComp 2019)


Sensory earables are increasingly becoming a mainstream compute platform, with the promise of fundamentally transforming personal-scale human sensing applications. Over the past few years, a number of research efforts in the ubiquitous computing domain have sought to achieve useful, engaging, and sometimes ambitious behavioural analytics with sensory earables, including studies of the human face; of emotion and stress; continuous monitoring of cardiovascular function, oxygen consumption, and blood flow; and tracking of eating episodes as well as dietary and swallowing activities. At the same time, commercial efforts such as Bragi's The Dash, Bose SoundSport, Jabra Elite Sport, and Sony Xperia offer a music experience augmented with sensory services, including fitness tracking, real-time translation, and conversational agents. Naturally, earables are becoming an intense interdisciplinary area of study with many diverse applications, spanning HCI, empathic communication, behavioural science, health and wellbeing, entertainment, education, and security.

However, as of today, earable computing lacks an academic forum that brings together researchers, practitioners, and design experts from academia and industry to discuss, share, and shape this exciting new area of research. To this end, we are organising the fourth workshop on Earable Computing in the hope that it will serve as a catalyst for advancements in sensory earable technology and present a clear sense of direction for the research community in this space.


EarSwitch: From Dream to Hypothesis; Boot-strapping to Production

Dr. Nick Gompertz
Abstract: EarSwitch® Ltd was founded by Dr Nick Gompertz (a UK NHS doctor of 30 years) to improve the lives of people who are non-verbal or “locked in” due to severe neuro-disabilities. Nick conceived the concept as a medical student, for people with MND/ALS, but did not pursue it until 30 years later, after watching a children’s TV documentary about an inspirational boy with severe cerebral palsy: Jonathan Bryan, at age 13 and despite being non-verbal, had written a book by simply gazing at letters stuck on a board while his mother watched and wrote down the letters he looked at. Nick will discuss the journey from concept to product, including the current and future potential of EarSwitch technologies to improve people’s lives, ranging from general consumer applications to accessibility control and racially inclusive, medical-grade real-world data.

Dr Nick Gompertz (a UK NHS doctor of 30 years) founded EarSwitch® Ltd to improve the lives of people who are non-verbal or “locked in” due to severe neuro-disabilities. Having proven the concept of the EarSwitch® for hands-free control from voluntary eardrum movement, Nick realised the unique characteristics of the ear for racially inclusive core biometric monitoring (EarMetrics®) and for complex control (EarControl™). Nick is driven to improve people's lives through ubiquitous sensing from earables and hearing aids, ranging from general consumer applications to accessibility control and racially inclusive, medical-grade real-world data.

OmniBuds: Your Music, Your Health, Your AI Companion

Dr. Alessandro Montanari
Abstract: Imagine earables that effortlessly monitor vital signs while you listen to music, recognise the activities and contexts you are immersed in, and put advanced on-device AI features at your fingertips. These are OmniBuds.

Alessandro is a principal research scientist in the Pervasive Systems Department at Nokia Bell Labs, leading the Device Forms team. He works on the architectural and algorithmic challenges of building next-generation wearable systems for human sensing, exploiting ultra-low-power machine learning and advanced signal processing.


Venue: Cozumel 3

09:00 - 09:05, Opening

09:05 - 10:00, Keynote 1: Dr. Nick Gompertz - EarSwitch: From Dream to Hypothesis; Boot-strapping to Production

10:00 - 10:30, Coffee Break

10:30 - 10:42, Design Earable Sensing Systems: Perspectives and Lessons Learned from Industry

Xiaoran Fan, Trausti Thormundsson

10:42 - 10:50, Earables as Medical Devices: Opportunities and Challenges

Abdul Aziz et al.

10:50 - 11:02, Ear-canal Characterisation for Optimum In-Ear Headset User Experience

Khaldoon Al-naimi et al.

11:02 - 11:10, Vertical Jump Test Using an Earable Accelerometer

Dennis Moschina, Tobias Röddiger, Michael Beigl

11:10 - 11:25, EarBender: Enabling Rich IMU-based Natural Hand-to-Ear Interaction in Commodity Earables

Khaled Alkiek et al.

11:25 - 11:40, EarPass: Continuous User Authentication with In-ear PPG

Yang Liu et al.

11:45 - 12:45, Keynote 2: Dr. Alessandro Montanari - OmniBuds: Your Music, Your Health, Your AI Companion

12:45 - 12:50, Closing Remarks and Best Paper Award announcement

12:50 - 13:30, Lunch and Networking

Call For Papers

We solicit three categories of papers:
  • Full papers (up to 6 pages including references) should report reasonably mature work with earables and are expected to demonstrate concrete and reproducible results, albeit possibly at limited scale.
  • Experience papers (up to 4 pages including references) should present extensive experience with the implementation, deployment, and operation of earable-based systems. Desirable papers will contain real data as well as descriptions of the practical lessons learned.
  • Short papers (up to 2 pages including references) are encouraged to report novel and creative ideas that have yet to produce concrete research results but are at a stage where community feedback would be useful.
  • Moreover, we will have a special submission category - "Dataset Paper" - soliciting a 1-2 page document describing a well-curated and labelled dataset collected with earables (ideally accompanied by the dataset itself).

    All papers must use the ACM sigconf template with 2 columns, and all accepted papers (regardless of category) will be included in the ACM Digital Library. All papers will be digitally available through the workshop website and the UbiComp/ISWC 2023 Adjunct Proceedings. We will offer "Best Paper" and "Best Dataset" awards, sponsored by Nokia Bell Labs.

    Topics of interest (NOT an exhaustive list):
  • Acoustic Sensing with Earables
  • Kinetic Sensing with Earables
  • Multi-Modal Learning with Earables
  • Multi-Task Learning with Earables
  • Active Learning with Earables
  • Low-Power Sensing Systems for Earables
  • Authentication & Trust mechanisms for Earables
  • Quality-Aware Data Collection with Earables
  • Experience Sampling with Earables
  • Crowd Sourcing with Earables
  • Novel UI and UX for Earables
  • Auditory Augmented Reality Applications with Earables
  • Lightweight Deep Learning on Earables
  • Health and Wellbeing Applications of Earables
  • Emerging Applications of Earables
  • Tiny Machine Learning on Earables

Submission & Timeline

    While the workshop will accept papers describing completed work as well as work-in-progress, the emphasis is on early discussion of novel and radical ideas (potentially of a controversial nature) rather than detailed description and evaluation of incremental advances.

    Submissions must be no longer than 6 pages (including references) for Full Papers, 4 pages (including references) for Experience Papers, and 2 pages (including references) for Short Papers and Dataset Papers, and must be in PDF format. Reviews will be double-blind: no names or affiliations should be included in the submission.

    The submission template can be downloaded from the ACM site.
    Alternatively, the Overleaf version can be found here.

    LaTeX documents should use the “sigconf” template style. Word users should use the interim template downloadable from the ACM site.

    Submission Site: https://new.precisionconference.com/submissions

    Submission Instructions: to select the appropriate track choose "SIGCHI" in the field Society, "Ubicomp/ISWC 2023" as Conference, and, finally, pick "Ubicomp/ISWC 2023 EarComp" as Track.

    • Submission Deadline: July 14, 2023 (AoE)
    • Acceptance Notification: July 21, 2023 (on a rolling basis)
    • Camera Ready Deadline: July 31, 2023 (AoE) (strict deadline)
    For any questions or concerns, get in touch with earcomp@esense.io.


    General Chairs
    Alessandro Montanari, Nokia Bell Labs Cambridge
    Katayoun (Kate) Farrahi, University of Southampton

    Program Chairs
    Andrea Ferlini, Nokia Bell Labs Cambridge
    Nirupam Roy, University of Maryland, College Park (UMD)

    Steering Committee
    Michael Beigl, Karlsruhe Institute of Technology
    Fahim Kawsar, Nokia Bell Labs, Cambridge
    Cecilia Mascolo, University of Cambridge

    Web, Publicity and Publication
    Kayla-Jade Butkow, University of Cambridge

    Program Committee
    Xiaoran "Van" Fan, Google
    Dong Ma, Singapore Management University
    Marios Costantinides, Nokia Bell Labs Cambridge
    Longfei Shangguan, University of Pittsburgh
    Yang Liu, University of Cambridge
    Yang Liu, Nokia Bell Labs Cambridge
    Ting Dang, Nokia Bell Labs Cambridge
    Khaldoon Al-Naimi, Nokia Bell Labs Cambridge
    Ashok Thangarajan, Nokia Bell Labs Cambridge
    Shijia Pan, UC Merced
    Jagmohan Chauhan, University of Southampton
    Jay Prakash, Silence Laboratories, Singapore
    Jun Han, Yonsei University
    Wen Hu, UNSW Sydney
    Zhenyu Yan, Chinese University of Hong Kong
    Mi Zhang, Michigan State University
    Rajalakshmi Nandakumar, Cornell Tech
    Bashima Islam, Worcester Polytechnic Institute
    VP Nguyen, University of Texas at Arlington/UMass Amherst
    Ho-Hsiang Wu, Bosch, USA