EarComp 2022

3rd International Workshop on Earable Computing
In conjunction with UbiComp 2022, September 15th, 2022



Sensory earables are increasingly becoming a mainstream compute platform, with the promise of fundamentally transforming personal-scale human sensing applications. Over the past few years, a number of research efforts in the ubiquitous computing domain have sought to achieve useful, engaging, and sometimes ambitious behavioural analytics with sensory earables, including studies of the human face; of emotion and stress; continuous monitoring of cardiovascular function, oxygen consumption, and blood flow; and tracking of eating episodes as well as dietary and swallowing activities. At the same time, we have started to see commercial efforts such as Bragi's The Dash, Bose SoundSport, Jabra Elite Sport, and Sony Xperia Ear offer a music experience augmented with sensory services, including fitness tracking, real-time translation, and conversational agents. Naturally, earables are becoming an intense interdisciplinary area of study, with many diverse applications spanning HCI, empathic communication, behavioural science, health and wellbeing, entertainment, education, and security.

However, as of today, earable computing lacks an academic forum to bring together researchers, practitioners, and design experts from academia and industry to discuss, share, and shape this exciting new area of research. To this end, we are organising the third workshop on Earable Computing, with the hope that it will serve as a catalyst for advancements in sensory earable technology and give the research community a clear sense of direction in this space.


Using wearables and theatre to study the social brain.

Dr Jamie A Ward
Abstract: Measuring detailed information on how people move, see, and think during realistic social situations can be a powerful method in studying social behaviour and cognition. However, measurement-driven research can be limited by the available technology, with bulky equipment and rigid constraints often confining such work to the laboratory, thus limiting the ecological validity of any findings. Together with colleagues at Goldsmiths, UCL, and Keio University, I have been working on several projects that use wearable sensing to take this research out of the laboratory and into the real world -- while on the way, stopping off at the theatre. In this talk, I will give a brief overview of some of our work, and try to show how the paradigm of 'theatre as a laboratory' might provide a way forward, both for research in social cognition and in wearable sensing.

Jamie is a lecturer at the Department of Computing, Goldsmiths, University of London. He received his Ph.D. in electronics from ETH Zurich, where he developed some of the first uses of multi-modal, wearable sensing for human activity recognition. He continued this work as a Marie Curie Research Fellow in Computing at Lancaster University, and later as a postdoc at DFKI Germany, and the UCL Institute for Cognitive Neuroscience. In between these posts he retrained and worked as an actor. His work centres around the convergence of technology, neuroscience, theatre, and autism. For more details, see www.jamieward.net.

Hearing and Well-Being: Hearables as Game-Changer

Andrew Bellavia
Abstract: Connectivity and sensor fusion promise to make hearable use increasingly personal, taking into account location, mood, and overall health. As advanced features proliferate, a key function of any hearable will always be to deliver sound to the user. Understanding the relationship of sound to well-being is critical to anyone developing devices for applications such as remote meetings, health and wellness, or hearing augmentation, as well as music. At EarComp 2022 I'll share more about the surprising variability in people's hearing, how that affects their well-being in unexpected ways, and how hearables of the future will account for hearing variability to deliver the best possible experience.

Andrew has been deeply engaged with the latest in hearing health and the intersection of hearing and well-being. He recently founded a new company, AuraFuturity, to provide go-to-market, branding, and content services to companies in the hearing health and hearable markets. Previously, Andrew was the Director of Market Development for Knowles Corp, a leading acoustic solutions provider to the hearing health, hearable, mobile, and IoT industries. His experience goes back to the beginnings of hearable development, with companies such as Bragi, Doppler Labs, Amazon, and many more since. Connect and learn more on LinkedIn and Twitter. Learn more at aurafuturity.com.


Program [draft]:

Date: Thursday, September 15th, 2022
Venue: William Gates Building, room FW26

13:00 – 13:15 (UK TIME), Opening
13:15 – 14:15 (UK TIME), Keynote 1: Dr Jamie A Ward - Using wearables and theatre to study the social brain.
14:15 – 15:15 (UK TIME), Technical Session 1
  • enVolve: Are You Listening? Inertial Sensing to Monitor the Involvement of Silent Listeners during an Online Interaction.

    Garvit Chugh, Suchetana Chakraborty, Ravi Bhandari, Sandip Chakraborty

  • Excerpt of ToothSonic: Earable Authentication via Acoustic Toothprint.

    Zi Wang, Yili Ren, Yingying Chen, Jie Yang

  • A Taxonomy of Noise in Voice Self-reports while Running.

    Tao Bi, Temitayo Olugbade, Akhil Mathur, Catherine Holloway, Aneesha Singh, Enrico Costanza, Nadia Berthouze

  • Multimodal Attention Networks for Human Activity Recognition From Earable Devices.

    Jake Stuchbury-Wass, Andrea Ferlini, Cecilia Mascolo

  • Excerpt of Auritus: An Open-Source Optimization Toolkit for Training and Development of Human Movement Models and Filters Using Earables.

    Swapnil Sayan Saha, Sandeep Singh Sandha, Siyou Pei, Vivek Jain, Ziqi Wang, Yuchen Li, Ankur Sarker, Mani Srivastava

  • Excerpt of PPGface: Like What You Are Watching? Earphones Can “Feel” Your Facial Expressions.

    Seokmin Choi, Yang Gao, Yincheng Jin, Se Jun Kim, Jiyang Li, Wenyao Xu, Zhanpeng Jin

15:15 – 15:30 (UK TIME), Break
15:30 – 16:30 (UK TIME), Keynote 2: Andrew Bellavia - Hearing and Well-Being: Hearables as Game-Changer.
16:30 – 17:30 (UK TIME), Technical Session 2
  • Excerpt from "Sensing with Earables: A Systematic Literature Review and Taxonomy of Phenomena".

    Tobias Röddiger, Christopher Clarke, Paula Breitling, Tim Schneegans, Haibin Zhao, Hans Gellersen, Michael Beigl

  • Using Earables Platforms to Study Verbal Communication.

    Alejandro Perez, Matthew H. Davis

  • OpenEarable: Open Hardware Earable Sensing Platform.

    Tobias Röddiger, Tobias King, Dylan Ray Roodt, Christopher Clarke, Michael Beigl

  • A Preliminary Study for Detecting Visual Search Behaviors During Street Walking Using Earable Device.

    Kazuki Shimojo, Zengyi Han, Yuuki Nishiyama, Kaoru Sezaki

  • Designing and evaluating a system for studying EarEEG signals.

    A Adarsh, Meghana S, Kartik Muralidharan, Jayavardhana Gubbi, Ramesh Kumar Ramakrishnan, Arpan Pal

17:30 – 17:45 (UK TIME), Closing and Best Paper Award announcement

Presentation time will be 13 minutes for long papers and experience papers (10 minutes for presentation + 3 minutes for Q&A) and 8 minutes for short papers and excerpt papers (5 minutes for presentation + 3 minutes for Q&A).

Call For Papers

We solicit four categories of papers.
  • Full papers (up to 6 pages including references) should report reasonably mature work with earables and are expected to demonstrate concrete, reproducible results, even if limited in scale.
  • Experience papers (up to 4 pages including references) should present extensive experience with the implementation, deployment, and operation of earable-based systems. Desirable papers will contain real data as well as descriptions of the practical lessons learned.
  • Short papers (up to 2 pages including references) are encouraged to report novel and creative ideas that have yet to produce concrete research results but are at a stage where community feedback would be useful.
  • Short versions (up to 2 pages including references) of papers that have been presented at IMWUT are welcome, to obtain feedback from the dedicated earable community. These should be titled “Excerpt of IMWUT PAPER TITLE”.
  • Moreover, we will have a special submission category - "Dataset Paper" - soliciting a 1-2 page document describing a well-curated and labelled dataset collected with earables (possibly accompanied by the dataset itself).

    All papers must use the two-column ACM sigconf template, and all accepted papers (regardless of category) will be included in the ACM Digital Library. All papers will be digitally available through the workshop website and the UbiComp adjunct proceedings. We will offer "Best Paper" and "Best Dataset" awards sponsored by Nokia Bell Labs.

    Topics of interest (NOT an exhaustive list):
  • Acoustic Sensing with Earables
  • Kinetic Sensing with Earables
  • Multi-Modal Learning with Earables
  • Multi-Task Learning with Earables
  • Active Learning with Earables
  • Low-Power Sensing Systems for Earables
  • Authentication & Trust mechanisms for Earables
  • Quality-Aware Data Collection with Earables
  • Experience Sampling with Earables
  • Crowd Sourcing with Earables
  • Novel UI and UX for Earables
  • Auditory Augmented Reality Application with Earables
  • Health and Wellbeing Applications of Earables
  • Emerging Applications of Earables
  • Embedded Machine Learning

    Presentations will be in person, but we will accommodate both live and pre-recorded remote presentations.

    Submission & Timeline

    While the workshop will accept papers describing completed work as well as work-in-progress, the emphasis is on early discussion of novel and radical ideas (potentially of a controversial nature) rather than detailed description and evaluation of incremental advances.

    Submissions must be no longer than 6 pages (including references) for Full Papers, 4 pages (including references) for Experience Papers, and 2 pages (including references) for Short Papers and Excerpts, and must be in PDF format. Reviews will be double-blind: no names or affiliations should appear in the submission.

    The submission template can be downloaded from the ACM site; an Overleaf version is also available.

    LaTeX documents should use the “sigconf” template style. Word users should use the interim template downloadable from the ACM site.

    Submission Site: https://new.precisionconference.com/submissions

    Submission Instructions: to select the appropriate track choose "SIGCHI" in the field Society, "Ubicomp/ISWC 2022" as Conference, and, finally, pick "Ubicomp/ISWC 2022 Workshop: EarComp" as Track.

    • Submission Deadline: August 31, 2022 (11:59 PM PDT)
    • Acceptance Notification: September 7, 2022
    • Camera Ready Deadline: September 11, 2022
    • Workshop: September 15, 2022
    For any question/concern, get in touch with earcomp@esense.io.


    General Chairs
    Alessandro Montanari, Nokia Bell Labs Cambridge
    Dong Ma, Singapore Management University (SMU)
    Fahim Kawsar, Nokia Bell Labs Cambridge

    Program Chairs
    Andrea Ferlini, Nokia Bell Labs Cambridge
    Xiaoran "Van" Fan, Google, Technology Directions Office (TDO)

    Web, Publicity and Publication
    Kayla-Jade Butkow, University of Cambridge

    Program Committee
    Cecilia Mascolo, University of Cambridge
    Marios Costantinides, Nokia Bell Labs Cambridge
    Michael Beigl, Karlsruhe Institute of Technology
    Ting Dang, University of Cambridge
    Longfei Shangguan, Microsoft Research
    Yang Liu, University of Cambridge
    Shubham Jain, Stony Brook University
    Anran Wang, University of Washington
    Octavio Ponce, Google
    Przemysław Pawełczak, TU Delft