The following ISWC notes and briefs will be presented at the UbiComp / ISWC 2020 virtual conference:
ISWC Notes & Briefs
Obtaining a signal useful for continuous pointing input is still an open problem for wearables. While magnetic field sensing is one promising approach, it has significant limitations. Our key contribution in this work is a simulation of a system that tracks a magnet in 3D while also accounting for the ambient magnetic field. The simulated sensor data are processed, and the position and rotation are determined using magnetic field equations, a particle filter, and a kinematic model of the hand.
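As a rough illustration of the kind of pipeline this abstract describes, the following sketch combines a point-dipole field model with one particle-filter cycle over candidate magnet positions, with the ambient field added to each prediction. All function names, noise settings, and parameter values are our own assumptions, not the authors' implementation.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def dipole_field(pos, moment, sensor=np.zeros(3)):
    """Flux density [T] of a point dipole at `pos` measured at `sensor`."""
    r = sensor - pos
    d = np.linalg.norm(r)
    return MU0 / (4 * np.pi) * (3 * r * np.dot(moment, r) / d**5 - moment / d**3)

def particle_filter_step(particles, weights, measured_b, moment, ambient_b,
                         noise=1e-4, jitter=1e-3, seed=0):
    """One predict/update/resample cycle estimating the magnet position."""
    rng = np.random.default_rng(seed)
    particles = particles + rng.normal(0.0, jitter, particles.shape)  # motion model
    for i, p in enumerate(particles):
        predicted = dipole_field(p, moment) + ambient_b  # model includes ambient field
        err = np.linalg.norm(measured_b - predicted)
        weights[i] = np.exp(-0.5 * (err / noise) ** 2)   # Gaussian likelihood
    weights = weights + 1e-300                # guard against degenerate weights
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)  # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

In practice one such update would run per sensor sample, with the kinematic hand model constraining where particles may move.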
Eyelid stickers are thin strips that temporarily create a crease when attached to the eyelid. The direct contact with the crease, which increases and decreases the pressure on the eyelid sticker, provides a novel opportunity for sensing blinking. We present Eslucent, an on-skin wearable capacitive sensing device that affords blink detection, building on the form factor of eyelid stickers. It consists of an art layer, conductive thread, and fiber eyelid stickers coated with conductive liquid; the device is applied onto the eyelid crease with adhesive temporary tattoo paper. In a user study with 14 participants, Eslucent detected blinks during intentional blinking and four involuntary activities using a falling-edge detection algorithm. The average precision was 82% and recall was 70%, with precision and recall exceeding 90% for intentional blinking. By embedding interactive technology into a daily beauty product, Eslucent explores a novel wearable form factor for blink detection.
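A falling-edge detector of the kind mentioned above can be sketched in a few lines: flag samples where the capacitance drops faster than a threshold, with a refractory period so one blink is not counted twice. The threshold, sampling rate, and refractory period below are illustrative assumptions, not Eslucent's actual parameters.

```python
import math

def detect_blinks(signal, fs, drop_thresh=50.0, refractory=0.2):
    """Return blink timestamps: sample-to-sample falls steeper than
    `drop_thresh` (capacitance units per second), at most one event
    per `refractory` seconds."""
    blinks = []
    last = -math.inf
    for i in range(1, len(signal)):
        slope = (signal[i] - signal[i - 1]) * fs  # discrete derivative
        t = i / fs
        if slope < -drop_thresh and t - last >= refractory:
            blinks.append(t)
            last = t
    return blinks
```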
Glasses are a suitable platform for embedding sensors and displays around our heads to support our daily lives. Furthermore, aesthetic features, durability, and portability are essential properties of glasses. However, designing such smart glasses is challenging, because connecting the different parts of the frame both mechanically and electrically results in smart glasses with bulky hinges. To overcome this challenge, we propose a new design that embeds inductively coupled coil pairs adjacent to the glasses' hinges to deliver power and data wirelessly to the frames. Positioning the coils next to the hinges creates sufficient area for large transmission and reception coils while maintaining the utility of the glasses. Consequently, we were able to achieve over 85% power efficiency and a communication rate of 50 Mbps between coils that are small enough to be embedded inside the frame of conventional glasses available on the market.
Recent advances in Automated Dietary Monitoring (ADM) with wearables have shown promising results in eating detection in naturalistic environments. However, determining what an individual is consuming remains a significant challenge. In this paper, we present results of a food type classification study based on a sub-centimeter scale wireless intraoral sensor that continuously measures temperature and jawbone movement. We explored the feasibility of classifying nine different types of foods into five classes based on their water content and typical serving temperature in a controlled environment (n=4). We demonstrated that the system can classify foods into five classes with a weighted accuracy of 77.5% using temperature-derived features only, and with a weighted accuracy of 85.0% using both temperature- and acceleration-derived features. Despite the limitations of our study, these results are encouraging and suggest that intraoral computing might be a viable direction for ADM in the future.
Rapid prototyping and fast manufacturing processes are critical drivers for implementing wearable devices. This paper shows an exemplary method for building flexible, fully elastomeric, vibrotactile electromagnetic actuators based on the Lorentz force law. It also introduces the design parameters required for well-functioning actuators and studies the properties of such actuators. The crucial element of the actuator is a helical planar coil manufactured from "capillary" silver TPU (thermoplastic polyurethane), an ultra-stretchable conductor. This novel material allows soft vibration actuators to be manufactured in fewer and simpler steps than previous approaches. Best practices and procedures for building a wearable actuator are reported. We show that the dimensions of the actuators are easily configurable and that they can be printed in batch-size-one using 3D printing. The actuators can be attached directly to the skin, as all components of FLECTILE are made from biocompatible polymers. Tests of the driving properties confirmed that the actuator reaches a broad frequency range up to 200 Hz while requiring only a small voltage (5 V). A user study showed that the actuator's vibrations were well perceivable by six study participants under observing, hovering, and resting conditions.
The ubiquitous availability of wearable sensing devices has rendered large-scale collection of movement data a straightforward endeavor. Yet, annotation of these data remains a challenge, and as such, publicly available datasets for human activity recognition (HAR) are typically limited in size as well as in variability, which constrains HAR model training and effectiveness. We introduce masked reconstruction as a viable self-supervised pre-training objective for human activity recognition and explore its effectiveness in comparison to state-of-the-art unsupervised learning techniques. In scenarios with small labeled datasets, the pre-training results in improvements over end-to-end learning on two of the four benchmark datasets. This is promising because the pre-training objective can be integrated "as is" into state-of-the-art recognition pipelines, effectively facilitating improved model robustness and thus, ultimately, better recognition performance.
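The pre-training objective can be illustrated with a minimal sketch: hide random timesteps of a sensor window and score a model's reconstruction only on the hidden positions. The masking ratio and helper names below are our own assumptions, not the paper's code.

```python
import numpy as np

def mask_window(x, mask_ratio=0.15, rng=None):
    """Zero out a random subset of timesteps in a [T, channels] sensor
    window; return the corrupted window and the boolean mask."""
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape[0]) < mask_ratio  # timesteps to hide
    corrupted = x.copy()
    corrupted[mask] = 0.0
    return corrupted, mask

def masked_reconstruction_loss(pred, target, mask):
    """MSE scored only on the hidden timesteps -- the pre-training signal."""
    if not mask.any():
        return 0.0
    return float(np.mean((pred[mask] - target[mask]) ** 2))
```

During pre-training, a network receives `corrupted` and is optimized to drive this loss toward zero; the learned encoder is then reused for supervised HAR.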
Earable computing is gaining attention within research and becoming ubiquitous in society. However, there is an emerging need for prototyping devices as critical drivers of innovation. In our work, we reviewed the features of existing earable platforms. Based on 24 publications, we characterized the design space of earable prototyping. We used the open eSense platform (6-axis IMU, auditory I/O) to evaluate its usability for problem-based learning by non-experts. We collected data from 79 undergraduate students who developed 39 projects. Our questionnaire-based results suggest that the platform creates interest in the subject matter and supports self-directed learning. The projects align with the research space, indicating ease of use, but lack contributions to more challenging topics. Additionally, many projects included games, which are not present in current research. The average SUS score of the platform was 67.0. The majority of problems were technical issues (e.g., connecting, playing music).
Our ability to exploit low-cost wearable sensing modalities for critical human behaviour and activity monitoring applications in health and wellness is reliant on supervised learning regimes; here, deep learning paradigms have proven extremely successful in learning activity representations from annotated data. However, the costly work of gathering and annotating sensory activity datasets is labor-intensive, time-consuming, and not scalable to large volumes of data. While existing unsupervised remedies of deep clustering leverage network architectures and optimization objectives that are tailored for static image datasets, deep architectures to uncover cluster structures from raw sequence data captured by on-body sensors remain largely unexplored. In this paper, we develop an unsupervised end-to-end learning strategy for the fundamental problem of human activity recognition (HAR) from wearables. Through extensive experiments, including comparisons with existing methods, we show the effectiveness of our approach to jointly learn unsupervised representations for sensory data and generate cluster assignments with strong semantic correspondence to distinct human activities.
Transfer learning is becoming increasingly important to the human activity recognition community, as it enables algorithms to reuse what has already been learned by other models. It promises shortened training times and improved classification performance for new datasets and activity classes. However, the question of what exactly is transferred is not dealt with in detail in many recent publications, and it is furthermore often difficult to reproduce the presented results. With this paper, we would therefore like to contribute to the understanding of transfer learning for sensor-based human activity recognition. In our experiments, we use weight transfer to transfer models between two datasets, as well as between sensors from the same dataset. PAMAP2 and Skoda Mini Checkpoint are used as source and target datasets. The utilized network architecture is based on a DeepConvLSTM. Our investigation shows that transfer learning has to be considered in a very differentiated way, since the desired positive effects of applying the method depend very much on the data and also on the architecture used.
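Weight transfer of the kind studied here can be sketched as copying every parameter whose name and shape match between two model states, so that layers tied to the new dataset (e.g. a classifier head with a different number of activity classes) keep their fresh weights. The state-dict layout below is a simplified assumption, not the paper's DeepConvLSTM code.

```python
import numpy as np

def transfer_weights(source_state, target_state):
    """Copy every layer whose name and shape match from source to target;
    mismatched layers (e.g. a new output layer) keep their fresh weights."""
    transferred = []
    for name, weights in source_state.items():
        if name in target_state and target_state[name].shape == weights.shape:
            target_state[name] = weights.copy()
            transferred.append(name)
    return target_state, transferred
```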
On-skin displays have emerged as a seamless form factor for visualizing information. However, the non-traditional form factor of these on-skin displays and how they present notifications on the skin may raise concerns for public wear. These perceptions will impact whether a device is eventually adopted or rejected by society. Therefore, researchers must consider the societal facets of device design. In this paper, we study social perceptions towards interacting with a color-changing on-skin display. We examined third-person perspectives through a 254-person online survey. The study was conducted in the United States and Taiwan to distill cross-cultural attitudes. This structured study sheds light on designing on-skin displays reflective of cultural considerations.
Fatigue is one of the key factors in the loss of work efficiency and health-related quality of life, and most fatigue assessment methods are based on self-reporting, which may suffer from many issues such as recall bias. To address this, we developed an automated system using wearable sensing and machine learning techniques for objective fatigue assessment. ECG and actigraphy data were collected from subjects in free-living environments. Preprocessing and feature engineering methods were applied before an interpretable solution and a deep learning solution were introduced. Specifically, for the interpretable solution, we propose a feature selection approach that selects less correlated, highly informative features, enabling a better understanding of the system's decision-making process. For the deep learning solution, we used a state-of-the-art self-attention model, based on which we further propose a consistency self-attention (CSA) mechanism for fatigue assessment. Extensive experiments were conducted, and very promising results were achieved.
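One common way to realize "less correlated, highly informative" feature selection is a greedy correlation filter: rank features by their correlation with the label, then keep a feature only if it is not highly correlated with one already selected. This sketch is our own illustration of that idea, not the authors' exact approach.

```python
import numpy as np

def select_features(X, y, corr_thresh=0.9):
    """Greedy filter: rank features by |corr with y|, then keep each
    feature only if its |corr| with every already-kept feature is below
    `corr_thresh`. Returns selected column indices."""
    relevance = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    order = np.argsort(relevance)[::-1]  # most informative first
    selected = []
    for j in order:
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < corr_thresh
               for k in selected):
            selected.append(int(j))
    return selected
```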
Encounters with casual acquaintances are common in our daily lives. In such situations, people are sometimes unable to find an appropriate topic for conversation, and an awkward silence follows. However, we believe that this awkward encounter can become an opportunity to build a good relationship with the acquaintance through a brief conversation, if an appropriate topic is discovered. In this study, we examined a method to enrich casual conversations during unintended encounters by following three strategies: (1) an online questionnaire survey involving 10,750 participants to determine how they experience awkward encounters; (2) the design and implementation of a smartwatch-based topic suggestion system that relies on finding commonalities in the users' video-viewing histories; and (3) demos and semi-structured interviews involving 15 participants to evaluate this approach. This investigation demonstrates that this novel approach can help users overcome the awkwardness of conversations with casual acquaintances.
The COVID-19 pandemic made wearing face masks during public interactions the new norm across much of the globe. As masks naturally occlude part of the wearer's face, the part of communication that occurs through facial expressions is lost, which could reduce the acceptance of mask wear. To address this issue, we created two face mask prototypes incorporating simple expressive display elements and evaluated them in a user study. Aiming to explore the potential of low-cost solutions suitable for large-scale deployment, our concepts utilized bi-state electrochromic displays. One concept, the Mouthy Mask, aimed to reproduce the image of the wearer's mouth, whilst the Smiley Mask was symbolic in nature. The smart face masks were considered useful in public contexts to support short, socially expected rituals. Generally, a visualization directly representing the wearer's mouth was preferred to an emoji-style visualization. As a contribution, our work presents a stepping stone towards productizable smart face masks that could increase the acceptability of face mask wear in public.
This paper investigates the possibility of using soft smart textiles over hair-covered regions of the head to detect chewing activity during episodes of snacking in a simulated scenario with everyday activities. Planar pressure textile sensors, worn in the form of a cap, perform mechanomyography of the temporalis muscles. Ten participants contributed 30 recording sessions lasting between 30 and 60 minutes each. A frequency analysis method is developed to detect snacking events with continuous sliding windows at 1-second time granularity. Our approach yields a baseline accuracy of 80%, over 85% after outlier removal, and above 90% for some of the participants.
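A sliding-window frequency analysis of this kind might look as follows: for each window, compute the fraction of spectral power falling in a chewing-rate band and flag windows where that fraction dominates. The band limits, window length, and threshold below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def chewing_score(window, fs, band=(1.0, 2.5)):
    """Fraction of spectral power in an assumed chewing band (~1-2.5 Hz)."""
    spectrum = np.abs(np.fft.rfft(window - window.mean())) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spectrum.sum()
    return spectrum[in_band].sum() / total if total > 0 else 0.0

def detect_chewing(signal, fs, win_s=3.0, hop_s=1.0, thresh=0.5):
    """Slide a window at 1 s granularity; flag chewing-dominated segments."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    flags = []
    for start in range(0, len(signal) - win + 1, hop):
        flags.append(chewing_score(signal[start:start + win], fs) > thresh)
    return flags
```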
We present a wearable, oscillating magnetic field-based proximity sensing system to monitor social distancing as suggested to prevent the spread of COVID-19 (keeping between 1.5 and 2.0 m apart). We evaluate the system both in controlled lab experiments and in a real-life setting in a large hardware store. We demonstrate that, due to the physical properties of the magnetic field, the system is much more robust than current Bluetooth-based sensing, in particular being nearly 100% correct when it comes to distinguishing between distances above and below the 2.0 m threshold.
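The robustness argument rests on a physical property of near-field magnetics: the dipole field magnitude decays as 1/d³, so distance can be recovered from a field reading given one calibration point. The sketch below is our own simplified illustration of that inversion (the calibration values are arbitrary, not the system's).

```python
def distance_from_field(b_measured, b_ref, d_ref=1.0):
    """Invert the dipole 1/d^3 falloff: given a calibration field b_ref
    observed at distance d_ref, estimate the distance yielding b_measured."""
    return d_ref * (b_ref / b_measured) ** (1.0 / 3.0)

def violates_distancing(b_measured, b_ref, d_ref=1.0, limit=2.0):
    """Flag an encounter closer than the distancing limit (metres)."""
    return distance_from_field(b_measured, b_ref, d_ref) < limit
```

Because the field falls off so steeply, small measurement errors translate into small distance errors near the 2.0 m boundary, which is one intuition for why such a system can outperform received-signal-strength approaches.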
We present the GastroDigitalShirt, a smart T-shirt for capturing abdominal sounds produced during digestion. The garment prototype embeds an array of eight miniaturised microphones connected to a low-power wearable computer and is designed for long-term recording. We present the microphone integration and shirt wiring layout. With the GastroDigitalShirt, we monitored the different digestion phases over six hours in four healthy participants with no prior gastro-intestinal diseases. The collected data were annotated by two independent raters to mark Bowel Sound (BS) instances. The inter-rater agreement was substantial, with a Cohen's Kappa of 0.7, confirming a consistent labeling approach. Overall, 3046 BS instances were individually annotated. The extracted BS were structured by hierarchical agglomerative clustering. The analysis highlighted the presence of four BS types. The results show that our prototype can capture the main BS types reported in the literature.
More than one million people in the US suffer from hemianopia, which blinds one half of the field of vision in both eyes. Hemianopic patients are often not aware of what they cannot see and frequently bump into walls, trip over objects, or walk into people on the side where their peripheral vision is diminished. We present an augmented reality-based assistive technology that expands the peripheral vision of hemianopic patients at all distances. In a pilot trial, we evaluated the utility of this assistive technology for ten hemianopic patients, measuring and comparing outcomes related to target identification and visual search. Improvements in target identification were noted in all participants, ranging from 18% to 72%. Similarly, all participants benefited from the assistive technology in performing a visual search task, with an average increase of 24% in the number of successful searches compared to unaided trials. The proposed technology is the first instance of an electronic vision enhancement tool for hemianopic patients and is expected to maximize residual vision and quality of life in this growing, yet largely overlooked population.
Sound can provide important information about the environment, human activity, and situational cues but can be inaccessible to deaf or hard of hearing (DHH) people. In this paper, we explore a wearable tactile technology to provide sound feedback to DHH people. After implementing a wrist-worn tactile prototype, we performed a four-week field study with 12 DHH people. Participants reported that our device increased awareness of sounds by conveying actionable cues (e.g., appliance alerts) and ‘experiential’ sound information (e.g., bird chirp patterns).
Theatre provides a unique environment in which to obtain detailed data on social interactions in a controlled and repeatable manner. This work introduces a method for capturing and characterising the underlying emotional intent of performers in a scripted scene using in-ear accelerometers. Each scene is acted with different underlying emotional intentions using the theatrical technique of Actioning. The goal of the work is to uncover characteristics in the joint movement patterns that reveal information on the positive or negative valence of these intentions. Preliminary findings over 3x12 (Covid-19 restricted) non-actor trials suggest that people are more energetic and more in sync when using positive versus negative intentions.
Dental braces are a semi-permanent dental treatment in direct contact with our metabolism (saliva), the food and liquids we ingest, and our environment while we smile or talk. This paper introduces braceIO, biochemical ligatures on dental braces that change colors depending on saliva concentration levels (pH, nitric oxide, and uric acid) and can be read by an external device. This work presents our fabrication process for the ligatures and external device, along with a technical evaluation of the absorption time, colorimetric measurement tests, and the mapping of colors to biosensor levels in the app. This project aims to maintain the shape, wearability, and aesthetics of traditional ligatures while embedding biosensors. We propose a novel device that senses metabolic changes, with a different biosensor ligature worn on each tooth to access multiple biodata and create seamless interactive devices.
People who are deaf or hard of hearing often have difficulty realizing when someone is attempting to get their attention, especially when mobile. Speech recognition coupled with a head-worn display (HWD) may aid in awareness of when someone calls the user's name. As our intended users are often oversubscribed with experiments, we chose to test subjects who are not deaf or hard of hearing while refining our procedures. Preliminary findings from three hearing participants wearing sound-masking headphones and performing a mobile task suggest that an HWD may be faster than, and preferred to, a smartphone for displaying captions when attending to one's name being called.