Download the ISWC 2012 Adjunct Proceedings.

Download the program booklet.

Program

The preliminary ISWC 2012 program is detailed below and will be updated as new material is added to the schedule. Pervasive sessions will run concurrently, and delegates will be able to attend either conference's sessions; please register for the conference you primarily wish to attend.

Full paper talks should be 20 minutes long, plus 10 minutes for questions. Short paper talks should be 10 minutes long, plus 5 minutes for questions.

Registration will be open at 8am daily.

Monday and Tuesday

Monday 18th June  | Workshops; Doctoral Consortium
Tuesday 19th June | Workshops

Wednesday, Thursday and Friday

Time                | Wednesday 20th June                      | Thursday 21st June          | Friday 22nd June
9:00 AM - 10:30 AM  | Opening Session; Keynote: Sanjiv Nanda   | Activity Recognition        | Panel: Wearable Computing Then and Now
10:30 AM - 11:00 AM | Break                                    | Posters                     | Break
11:00 AM - 12:30 PM | Pervasive session open to ISWC attendees | Input + Video               | Pervasive session open to ISWC attendees
12:30 PM - 2:00 PM  | Lunch                                    | Lunch                       | Lunch
2:00 PM - 3:30 PM   | Pervasive session open to ISWC attendees | Working with People + Video | Keynote: Elias Siores; Closing Session
3:30 PM - 4:00 PM   | Break                                    | Break                       |
4:00 PM - 5:30 PM   | Learning About People + Video            | Modeling & Learning         |
Evening             | Design Exhibition, Demos and Reception   | Conference Dinner           |

Keynote Speakers

Sanjiv Nanda

Sanjiv Nanda, vice president of engineering at Qualcomm Research

Abstract:

Putting contextual intelligence in the hands of the average end user requires innovations in many fields beyond traditional sense-making or contextual awareness. At one level, this includes transforming data into semantically relevant information and actions with meaningful accuracy while remaining unobtrusive to the user, something that has already been the focus of traditional contextual intelligence research. We believe that advances in low-power implementation and autonomous peer-to-peer networking will also enable contextual intelligence. Challenges such as low-power always-on sensing, seamless networking, and pervasive information exchange are key steps on the path to transforming contextual intelligence from the lab to reality. At Qualcomm, we are working on creating an ecosystem to enable truly information-rich, low-power, pervasive contextual intelligence. This keynote will highlight how the confluence of continuous sensing, smart spaces, and collaborative data brings contextual intelligence to the real world.

Bio:

Sanjiv Nanda is a vice president of engineering in Qualcomm Research and oversees the Systems Engineering department. He currently leads Qualcomm's Aware Project, an initiative that will enable the company's vision of context-aware personal devices by bringing together research in machine learning and artificial intelligence with advances in resource-efficient implementation.


Elias Siores

Prof. Elias Siores, Provost and Director of Research and Innovation at The University of Bolton

Smart Materials in Energy Conversion for Technical Textile Systems and Devices

Abstract:

Smart materials are capable of sensing the environment within which they function and responding to stimuli from that environment. Numerous applications have started to emerge and are increasingly finding their way from the laboratory to the commercial world. The research and development work undertaken in the area of smart materials for applications in technical textile systems and devices, especially wearables, is explained and discussed. Systems and devices based on energy conversion, such as piezoelectric, photovoltaic and electrorheological materials as well as passive microwaves, are explored for potential industrial applications ranging from renewable energy (micro-power scavenged from human motion and the elements: wind, rain, waves and tides) to vehicle and personal protection, healthcare (controlled drug release) and biomedical applications (tremor suppression, early diagnosis of breast cancer and carotid evaluation). The incorporation of such smart materials into flexible fiber structures and their systems integration in wearables is also outlined, along with the potential to apply the emerging technologies in other areas.

Bio:

Professor Elias Siores is the Provost and Director of Research and Innovation at The University of Bolton. He was educated in the UK (BSc, MSc, MBA, PhD) and pursued his academic career in Australia (Sydney, Brisbane and Melbourne) and Asia (Hong Kong, Dongguan) before returning to Europe (Germany and the UK) as a Marie Curie Fellow. His R&D work concentrated on advancing the science and technology of automated Non-Destructive Testing and Evaluation, including Ultrasound, Acoustic Emission, and Microwave Thermography. His recent R&D work focuses on Smart / Functional Materials and Systems development. In this area, he has developed Electromagnetic, Electrorheological, Photovoltaic and Piezoelectric Smart Materials for applications in Micro-Power Regenerators (Energy Conversion Systems) and Medical, Health Care and Wearable Devices, both sensing and actuating. He has produced over 300 publications, presented 15 Keynote Addresses, co-invented 8 International Patents and won more than 15 International Awards and Prizes. He has been a member of the editorial boards of international journals, is a Fellow of IOM3, TWI, and IEAust, and has served on the Boards of Directors of a number of research centres worldwide, including in the UK, Australia, Singapore and Hong Kong.


Sessions

Learning About People

Extracting Mobile Behavioral Patterns with the Distant N-Gram Topic Model
Katayoun Farrahi, Daniel Gatica-Perez

Mining patterns of human behavior from large-scale mobile phone data has the potential to shed light on certain phenomena in society. The study of such human-centric massive datasets requires new mathematical models. In this paper, we propose a probabilistic topic model to address the problem of learning long-duration human location sequences. The distant n-gram topic model is based on Latent Dirichlet Allocation.

Socio-Technical Network Analysis from Wearable Interactions
Katayoun Farrahi, Remi Emonet, Alois Ferscha

This paper draws from interaction patterns collected via smartphones and reality mining techniques to explain the dynamics of personal interactions and relationships. Our findings impact a wide range of data-driven applications by providing an overview of community interaction patterns which can be used for applications such as epidemiology, or in understanding the diffusion of opinions and relationships.

Activity Recognition

Energy-Efficient Continuous Activity Recognition on Mobile Phones: An Activity-Adaptive Approach
Zhixian Yan, Vigneshwaran Subbaraju, Dipanjan Chakraborty, Archan Misra, Karl Aberer

We tackle the problem of energy-efficient continuous accelerometer-based activity sensing. After establishing the “energy overheads” vs. “classification accuracy” tradeoff for activity recognition, on a per-activity basis, we design an activity-sensitive strategy (dubbed “A3R”). Experiments on N95 & Android phones show that A3R saves energy by dynamically adapting the accelerometer sampling frequency and the classification features.

Recognizing Daily Life Context using Web-Collected Audio Data
Mirco Rossi, Oliver Amft, Gerhard Tröster

This work presents an approach to model daily life contexts from web-collected audio data. Crowd-sourced textual descriptions (tags) related to individual sound samples were used to model sound context categories. We analysed our approach with dedicated recordings and in a study of full-day recordings of 10 participants using smartphones.

Energy-Efficient Activity Recognition using Prediction
Dawud Gordon, Jürgen Czerny, Takashi Miyaki, Michael Beigl

We present a method for activity recognition leveraging the predictability of human behavior to conserve energy. The algorithm accomplishes this by quantifying activity-sensor dependencies, and using prediction methods to identify likely future activities. Unneeded sensors are then temporarily turned off at little or no recognition cost. The evaluation reveals that large savings in energy are possible at very low cost.
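
The prediction step in an approach like this can be sketched as a first-order Markov model over activities: predict which activities are likely next, and power down sensors that only the unlikely activities' classifiers need. Everything below (the activity set, transition probabilities, and sensor dependencies) is illustrative, not the authors' actual data or algorithm.

```python
import numpy as np

# Hypothetical activities and transition probabilities (illustrative).
activities = ["sit", "walk", "run"]
transition = np.array([[0.8, 0.2, 0.0],   # P(next | current = sit)
                       [0.1, 0.7, 0.2],   # P(next | current = walk)
                       [0.0, 0.3, 0.7]])  # P(next | current = run)

# Which sensors each activity's classifier depends on (illustrative).
needs = {"sit": {"accel"}, "walk": {"accel"}, "run": {"accel", "gps"}}

def sensors_to_keep(current, threshold=0.1):
    """Keep only the sensors needed by activities whose predicted
    next-step probability reaches the threshold; the rest can sleep."""
    probs = transition[activities.index(current)]
    likely = [a for a, p in zip(activities, probs) if p >= threshold]
    return set().union(*(needs[a] for a in likely))

keep = sensors_to_keep("sit")   # "run" is unlikely from "sit", so GPS can sleep
```

Here the recognition cost of gating is low because a sensor is only disabled while every activity that requires it is predicted to be improbable.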

SAMMPLE: Detecting Semantic Indoor Activities in Practical Settings using Locomotive Signatures
Zhixian Yan, Dipanjan Chakraborty, Archan Misra, Hoyoung Jeung, Karl Aberer

We study mobile phone-generated accelerometer data for detecting high-level (i.e., semantic-level) indoor lifestyle activities, such as cooking at home and working at the workplace. We design a 2-Tier activity extraction framework (called SAMMPLE) and evaluate the discriminatory power of intermediate-level locomotive micro-activities (e.g., sitting, standing). We test SAMMPLE using accelerometer data from 152 days of real-life behavioral traces.

Input

Huffman Base-4 Text Entry Glove (H4-TEG)
Bartosz Bajer, I. Scott MacKenzie, Melanie Baljko

We designed and evaluated a Huffman base-4 Text Entry Glove (H4-TEG). H4-TEG uses pinches between the thumb and fingers of the user's right hand. Characters and commands use base-4 Huffman codes for efficient input. In a longitudinal study, participants reached 14.0 wpm with error rates below 1%. In an additional session without visual feedback, entry speed dropped by only 0.4 wpm.
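
As a rough illustration of the coding idea, a base-4 (4-ary) Huffman code assigns each character a short sequence of digits 0-3, here standing in for the four thumb-to-finger pinches, with frequent characters getting the shortest sequences. The construction below is a generic 4-ary Huffman sketch, and the symbol frequencies are illustrative, not the study's actual letter statistics or code tables.

```python
import heapq
from itertools import count

def huffman_base4(freqs):
    """Build a 4-ary Huffman code: repeatedly merge the four least
    frequent subtrees, so every symbol maps to a string of digits 0-3."""
    tie = count()  # tie-breaker so heapq never compares the dicts
    heap = [(w, next(tie), {sym: ""}) for sym, w in freqs.items()]
    heapq.heapify(heap)
    # Pad with zero-weight dummies so (len - 1) % 3 == 0, which
    # guarantees the final merge fills all four slots of the root.
    while (len(heap) - 1) % 3 != 0:
        heapq.heappush(heap, (0.0, next(tie), {}))
    while len(heap) > 1:
        merged, total = {}, 0.0
        for digit in range(4):
            w, _, codes = heapq.heappop(heap)
            total += w
            for sym, code in codes.items():
                merged[sym] = str(digit) + code
        heapq.heappush(heap, (total, next(tie), merged))
    return heap[0][2]

# Illustrative relative frequencies (not the study's data).
freqs = {"e": 12.7, "t": 9.1, "a": 8.2, "o": 7.5, "i": 7.0,
         "n": 6.7, "s": 6.3, "SPACE": 18.0}
codes = huffman_base4(freqs)
```

With a realistic frequency table, the most common characters (such as space and "e") end up as single pinches, which is what makes the per-character input cost low on average.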

Toe Input using Mobile Projector and Kinect
Daiki Matsuda, Keiji Uemura, Nobuchika Sakata, Shogo Nishida

We present a toe input system that realizes haptic interaction, direct manipulation, and floor projection using a wearable projection system with a large projection surface. It is composed of a mobile projector, a Kinect depth camera, and a gyro sensor. It is attached to the user's chest and can detect when the user's foot touches the floor.

Airwriting: Hands-free mobile text input by spotting and continuous recognition of 3d-space handwriting with inertial sensors.
Christoph Amma, Marcus Georgi, Tanja Schultz

We present an input method which enables hands-free interaction through 3d handwriting recognition. Users can continuously write text in the air as if they were using an imaginary blackboard. Motion sensing is done wirelessly by accelerometers and gyroscopes. We propose a two-stage approach for spotting and recognition of handwriting gestures. Person-independent performance is evaluated on vocabularies with over 8000 words.

Textile Interfaces: Embroidered Jog-Wheel
Clint Zeagler, Scott Gilliland, Halley Profita, Thad Starner

In our efforts to create new e-textile interfaces and construction techniques for our Electronic Textile Interface Swatch Book (an e-textile toolkit), we have created a multi-use jog wheel using multilayer embroidery, sound sequins from PVDF film, a fabric twisted pair for long leads across the body, and a tilt sensor using a hanging bead, embroidery and capacitive sensing.

Working with People

Garment Positioning and Drift in Garment-Integrated Wearable Sensing
Guido Gioberto, Lucy Dunne

Wearable sensors are notoriously plagued by the error introduced by the movement of the sensor over the body surface. Here, we implement a novel method for analyzing error introduced by garment properties in wearable sensing during body movement, and assess in detail the errors introduced by donning and doffing of a garment and by garment drift during the gait cycle.

GazeCloud: A Thumbnail Extraction Method using Gaze Log Data for Video Life-Log
Yoshio Ishiguro, Jun Rekimoto

GazeCloud is a method for information extraction and presentation using recorded eye gaze data, i.e., personal life-log video data. It calculates the importance of information from gaze data that is consequently used for the generation of thumbnail images. This method performs the calculation using the eye gaze duration. Additionally, we construct a prototype daily-use wearable eye tracker system.

Urban Vibrations: Modeling and Evaluation of Sensitivities in the Lab and Field across a Broad Demographic
Ann Morrison, Lars Knudsen, Hans Jørgen Andersen

We tested vibration intensity sensitivity with a wearable vibration belt on a diverse group with evenly distributed ages and genders (7 to 79 years). We contribute the first field testing of vibration sensitivity. We progressively escalated the level of distraction and busyness. Our findings differ from previous lab studies in that we found a decreased detection rate in busy environments.

Modeling & Learning

Pattern-based Alignment of Audio Data for Ad-hoc Secure Device Pairing
Ngu Nguyen, Stephan Sigg, An Huynh, Yusheng Ji

We use fingerprints extracted from ambient audio to generate a common cryptographic key for secure mobile phone pairing. To deal with misalignment in recorded audio data due to the variety of recording hardware, we propose a pattern-based approximate matching process that achieves synchronisation independently on each device, without any inter-device communication other than an initial plain pairing request.
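
The general idea of ambient-audio fingerprinting can be sketched as follows; this is a deliberately simple feature (one bit per frame from the sign of the frame-to-frame energy change), not necessarily the authors' exact scheme. Two devices recording the same ambient audio obtain mostly agreeing bit strings, which fuzzy key-agreement protocols can then reconcile into a shared key.

```python
import numpy as np

def audio_fingerprint(samples, frame_len=256):
    """Illustrative binary fingerprint: split the recording into frames
    and emit 1 when a frame's energy exceeds the previous frame's."""
    n = len(samples) // frame_len
    frames = np.asarray(samples[: n * frame_len], dtype=float).reshape(n, frame_len)
    energy = (frames ** 2).sum(axis=1)
    return (np.diff(energy) > 0).astype(int)

rng = np.random.default_rng(0)
ambient = rng.normal(size=8192)                      # shared ambient sound
device_a = ambient + 0.05 * rng.normal(size=8192)    # two independent,
device_b = ambient + 0.05 * rng.normal(size=8192)    # noisy recordings
fp_a = audio_fingerprint(device_a)
fp_b = audio_fingerprint(device_b)
agreement = (fp_a == fp_b).mean()   # high agreement despite device noise
```

An eavesdropper in a different acoustic environment records different ambient audio and therefore derives a fingerprint with roughly chance-level agreement.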

Kinect=IMU? Learning MIMO models to automatically translate activity recognition models across sensor modalities
Oresti Baños, Alberto Calatroni, Miguel Damas, Héctor Pomares, Ignacio Rojas, Hesam Sagha, José del R. Millán, Gerhard Tröster, Ricardo Chavarriaga, Daniel Roggen

A method to translate a preexisting recognition system from a source sensor domain S to a target sensor domain T, possibly of different modality, is presented. MIMO system identification is used to map the signals of S to T and subsequently translate the recognition system. The approach is demonstrated in a gesture recognition problem translating between Kinect and IMUs.

Automatic Synchronization of Wearable Sensors and Video-Cameras for Ground Truth Annotation - A Practical Approach
Thomas Ploetz, Chen Chen, Nils Hammerla, Gregory Abowd

We present a practical approach to automatic cross-modal synchronization. Distinctive gestures, captured by a camera, are matched with recorded acceleration signal(s) using cross-correlation based time-delay estimation. PCA-based data pre-processing makes the procedure robust against orientation mismatches between the marking gesture and the camera plane. We evaluated five different marker gestures and report very promising results for actual use.
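
The core of cross-correlation based time-delay estimation can be sketched in a few lines: the lag at which the full cross-correlation of the two (mean-subtracted) signals peaks is the estimated offset between the streams. The pulse signal and the 40-sample shift below are illustrative, not the paper's data.

```python
import numpy as np

def estimate_delay(ref, sig):
    """Estimate the lag (in samples) of `sig` relative to `ref` from the
    peak of their full cross-correlation."""
    corr = np.correlate(sig - sig.mean(), ref - ref.mean(), mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

# A gesture-like pulse as seen by the camera, and the same pulse
# appearing 40 samples later in the acceleration stream (illustrative).
t = np.linspace(0.0, 1.0, 500)
pulse = np.exp(-((t - 0.3) ** 2) / 0.001)
camera_signal = pulse
accel_signal = np.roll(pulse, 40)
delay = estimate_delay(camera_signal, accel_signal)
```

Once the delay is known, one stream is simply shifted by that many samples to align the annotation timeline with the sensor data.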

Panel: Wearable Computing Then and Now

Have We Achieved the Ultimate Wearable Computer?
Bruce Thomas

Posters

A Textual Analysis of the International Symposium on Wearable Computers: 1997-2011 Proceedings
Adam Martin

Inertial Body-worn Sensor Data Segmentation by Boosting Threshold-based Detectors
Yue Shi, Yuanchun Shi, Xia Wang

Introducing a New Benchmarked Dataset for Activity Monitoring
Attila Reiss, Didier Stricker

iPod for Home Balance Rehabilitation Exercise Monitoring
Kevin Huang, Patrick Sparto, Sara Kiesler, Dan Siewiorek, Asim Smailagic

Studying Order Picking in an Operating Automobile Manufacturing Plant
Hannes Baumann, Thad Starner, Patrick Zschaler

At which station am I?: Identifying subway stations using only a pressure sensor
Takafumi Watanabe, Daisuke Kamisaka, Shigeki Muramatsu, Hiroyuki Yokoyama

Demos

Airwriting: Mobile text-entry by 3d-space handwriting.
Christoph Amma, Marcus Georgi, Tanja Schultz

Icebreaker T-shirt: a Wearable Device for Easing Face-to-Face Interaction.
Nanda Khaorapapong, Matthew Purver

Videos

H4-TEG: A Huffman Base-4 Text Entry Glove Demonstration.
Bartosz Bajer, I. Scott MacKenzie, Melanie Baljko

Icebreaker T-shirt: a Wearable Device for Easing Face-to-Face Interaction.
Nanda Khaorapapong, Matthew Purver

A System for Visualizing Pedestrian Behavior based on Car Metaphors.
Tsutomu Terada, Masahiko Tsukamoto, Hiroaki Sasaki

Design Exhibition

Don’t Break My Heart – wearable distance warning system for cyclists.
Rain Ashford

Temperature Sensing T-shirt (AKA: ‘Yr In Mah Face!’).
Rain Ashford

Twinkle Tartiflette – an Arduino driven interactive word and music artwork.
Rain Ashford

Reconfigurable Electronic Textiles Garment.
Kaila Bibeau, Lucie Mulligan, Ashton Frith

Context aware signal glove for bicycle and motorcycle riders.
Tony Carton

Solar Family.
Silvia Guttmann, Sara Lopez, Dziyana Zhyhar

Fairy Tale Kinetic Dress.
Helen Koo

The Photonic Bike Clothing III – For Enthusiastic Biker.
Sunhee Lee and Kyungha Nam

Wearable Multimodal Warning System.
Jessica Loomis, Grace Lorig, Mai Yang

Flutter.
Halley P. Profita, Nicholas Farrow, Nikolaus Correll