Advance Program - Friday September 4, 2009
The program starts at 2.30 pm with the Tutorials. In the evening - at 8 pm - the "Prix Ars Electronica Gala" takes place in the Brucknerhaus Linz.
12:00 - 18:00 | Registration
14:30 - 16:00 | Tutorials
    "Speckled Computing" (D.K. Arvind)
    "The Future of Mobile Map Interaction" (A. Krueger)
    "Activity recognition with the iPhone" (K. Kunze, D. Bannach)
16:00 - 16:15 | Coffee Break
16:15 - 17:30 | Tutorials
    "Speckled Computing" (D.K. Arvind)
    "The Future of Mobile Map Interaction" (A. Krueger)
    "Activity recognition with the iPhone" (K. Kunze, D. Bannach)
18:30 - 21:00 | Prix Ars Electronica Gala
Tutorials (14:30 - 16:00, 16:15 - 17:30)
Registration for the tutorials is mandatory; the registration form can be downloaded from the "Registration" category.
Speckled Computing
Abstract: A specknet is a collection of autonomous specks which provides distributed services: each speck is capable of sensing and processing the data under program control; the specks themselves are connected as a mobile wireless network which processes information in a distributed manner.
Specknets link the digital world of computers to the physical world of sensory data. A network of wearable Orient specks, for example, is capable of tracking the orientation of the body parts, or the position of the person in the environment, and this information can be stored, manipulated and accessed remotely over the internet.
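The kind of on-body computation described above can be illustrated with a toy example (this is not the Orient system's actual algorithm, just a common building block of orientation tracking): estimating tilt from a single 3-axis accelerometer reading, assuming the sensor is near-static so that gravity dominates.

```python
import math

def tilt_from_accel(ax, ay, az):
    """Estimate pitch and roll (in radians) from one 3-axis accelerometer
    reading, assuming the device is near-static so gravity dominates."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

# A sensor lying flat (gravity entirely along +z) should show zero tilt.
pitch, roll = tilt_from_accel(0.0, 0.0, 9.81)
```

In a real specknet, readings like these would be fused with gyroscope and magnetometer data and exchanged over the wireless network; the single-reading version here is only meant to show the flavour of the sensor-to-information step.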
The tutorial will cover three aspects of Speckled Computing:
Speck platforms, including the hardware and firmware
Application of the on-body Orient wireless motion capture system in the healthcare, animation and sports sectors
The science underpinning the interpretation of sensor data in these applications.
The attendees will have access to the Orient motion capture system and will gain hands-on experience in interactive animation of virtual characters and in the real-time control of a bipedal robot performing tasks such as walking, standing on one leg and waving its arms.
Target Audience: The tutorial will benefit the following audiences:
Decision makers in strategic technology developments in the healthcare, digital media and sports sectors
Researchers in academia and industry with interests in wearable computing
Application developers in wireless sensor networks, with particular interest in on-body sensor-based wireless motion capture systems
Doctoral students who wish to gain hands-on experience of programming wearable wireless motion capture systems and exposure to algorithms for analysing and interpreting the data
Agenda:
Scanning of the application spaces in the healthcare, digital media and sports sectors for wearable wireless motion capture systems
Detailed case studies of deployment in clinical gait analysis, 3-D animation, and the biomechanics of golf swings
Unsupervised learning algorithms for the classification of motion data
Hands-on experience in 3-D animation and in programming the behaviour of a bipedal robot
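As a rough sketch of the unsupervised-learning agenda item (the tutorial's actual algorithms are not specified here), a minimal two-centre k-means can separate low-energy from high-energy motion windows without any labelled training data. The "energy" features below are invented purely for illustration.

```python
def kmeans2(values, iters=20):
    """Tiny 1-D k-means with two clusters, e.g. to separate low- and
    high-energy motion windows without labelled training data."""
    centres = [min(values), max(values)]  # crude but deterministic init
    for _ in range(iters):
        clusters = ([], [])
        for v in values:
            idx = 0 if abs(v - centres[0]) <= abs(v - centres[1]) else 1
            clusters[idx].append(v)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

# Invented per-window "signal energy" features: three quiet, three active.
energies = [0.1, 0.2, 0.15, 2.0, 2.2, 1.9]
centres = kmeans2(energies)
```

Real motion-capture data would of course use richer, multi-dimensional features and more clusters, but the clustering idea is the same.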
Brief Bio of the Presenter:
DK Arvind is a Reader in the School of Informatics at the University of Edinburgh,
Scotland, United Kingdom, and CITRIS Visiting Professor at the University
of California at Berkeley (2007-11). He previously spent four years as a Research
Scientist in the School of Computer Science, Carnegie Mellon University, Pittsburgh,
USA. He is the founding Director and Principal Investigator of the Research
Consortium in Speckled Computing (www.specknet.org), a multidisciplinary
grouping of computer scientists, electronic engineers, electrochemists and
physicists drawn from five universities, to research the next generation of
miniature wireless sensor networks. The Consortium has attracted research
funding in excess of £5.2 million since 2004 from the Scottish Funding
Council and the UK Engineering and Physical Sciences Research Council (the equivalent
of the National Science Foundation in the US). In the past his research has
been funded by EPSRC, the US Office of Naval Research, Scottish Enterprise/Cadence
Design Systems, Sharp, Hitachi, Panasonic/Matsushita, Agilent, ARM and Sun
Microsystems. His research interests include the design, analysis and integration
of miniature networked embedded systems which combine sensing, processing
and wireless networking capabilities.
The Future of Mobile Map Interaction - GIS meets wearable computing
Abstract: For some 6,000 years humans have used maps to navigate through space, and over that time maps have supported a variety of other spatial tasks as well. Until a couple of decades ago most maps were drawn or printed on a piece of paper (or on material such as stone or papyrus) of a certain size. This has changed dramatically: nowadays most maps are digital and presented on electronic devices. In this tutorial we will extrapolate from this situation and explore what the next generation of map devices and interaction styles will look like. The tutorial will explore new possibilities for navigating maps using physical (whole-body) gestures (e.g. multi-touch gestures), and discuss the map-design challenges that arise from these interaction styles as well as the implications for the design of GIS software for mobile devices, large-scale multi-touch screens and wearable projectors.
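One of the simplest multi-touch map interactions, the two-finger pinch, reduces to straightforward geometry. The sketch below (a generic illustration, not any particular GIS toolkit's API) derives the map zoom factor from finger positions before and after the gesture:

```python
import math

def pinch_zoom_factor(p0, p1, q0, q1):
    """Zoom factor implied by a two-finger pinch gesture: the ratio of
    the finger distance after the gesture (q0, q1) to before (p0, p1).
    Points are (x, y) tuples in screen pixels."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(q0, q1) / dist(p0, p1)

# Fingers moving apart from 100 px to 200 px doubles the map scale.
factor = pinch_zoom_factor((0, 0), (100, 0), (-50, 0), (150, 0))
```

A production map engine would additionally keep the midpoint between the fingers anchored to the same geographic coordinate while scaling, which is what makes the gesture feel like grabbing the map itself.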
Objective(s), significance and addressed audience: This tutorial is aimed at academic and industrial researchers who want to learn about future map interaction concepts applicable to mobile and wearable devices as well as to large-scale multi-touch surfaces. The tutorial does not require deep technical or formal knowledge and does not assume familiarity with Geoinformation Systems; all methods are explained through examples. It is therefore suitable for students who want more in-depth information on map interaction, senior researchers interested in an overview, and industrial researchers who want to learn about possible future applications.
Agenda:
What is a GIS - a brief introduction
Interaction styles for future mobile maps
Mobile Map interaction technologies
Evaluation methods for mobile map interaction
Brief Bio of the Presenter:
Antonio Krüger received a diploma in computer science and economics at
Saarland University in 1995. Afterwards he joined the Cognitive Science Graduate
Programme of the same University and finished it with a doctoral degree in
1999. His doctoral thesis was on the "Automated Abstraction of 3D-Graphics". He
was involved early on in several Artificial Intelligence projects at the German
Research Centre for AI (DFKI GmbH), and from 1999-2003 worked at the Intelligent
Systems Lab of Saarland University as a Senior Researcher. In 2000 he co-founded
the university spin-off Eyeled GmbH, a company focusing on mobile computing
solutions, within which he is responsible for the technology transfer
of university research. From 2004 to 2009 he was an associate professor for
Geoinformatics and Computer Science at Münster University, Germany, and from
2005 to 2009 the elected managing director of the Institute for Geoinformatics
at the same university. Since 2009 Antonio Krüger has held an appointment
as a full professor for Computer Science at Saarland University. At the same
time he is the Scientific Director of the "Innovative Retail Laboratory
(IRL)" at the German Research Center for Artificial Intelligence (DFKI)
in Saarbrücken, Germany.
Antonio's main research areas are intelligent user interfaces and mobile and ubiquitous context-aware systems. He has worked on the automatic generation of graphics for technical documentation, intelligent navigation systems and personalized media generation. In this context he examined generation processes that take into account both the limited technical resources of output devices and the limited cognitive resources of users. More recent examples of his research come from the domain of mobile and ubiquitous computing, where Antonio is involved in projects on interactive display networks, mobile augmented reality and interactive surface computing.
Antonio is co-organising the annual Smart-Graphics Symposium and served on
various program committees in the field of intelligent mobile systems, e.g.
Intelligent User Interfaces, User Modeling, Ubicomp, Mobisys, and Pervasive
Computing.
Building activity recognition applications for the iPhone platform
Abstract: In this tutorial we will discuss how to leverage the iPhone
platform for experimental setups and applications in the field of context
and activity recognition.
Starting with a short introduction to the capabilities of the iPhone in terms
of on-board sensors, computing power and accessibility, we discuss how to
approach context-aware application development for mobile devices. Participants
can then build a recognition system themselves: recording sensor data, training
classifiers, and testing them offline and online.
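A minimal sketch of such an offline recognition pipeline (illustrative only; it does not reproduce the tutorial's actual scripts, and the centroid values and labels are invented): windowed feature extraction over an accelerometer-magnitude stream, followed by a nearest-centroid classifier.

```python
def window_features(samples, size=4):
    """Split a stream of accelerometer magnitudes into fixed-size windows
    and compute one feature per window (mean absolute deviation)."""
    feats = []
    for i in range(0, len(samples) - size + 1, size):
        w = samples[i:i + size]
        mu = sum(w) / size
        feats.append(sum(abs(x - mu) for x in w) / size)
    return feats

def nearest_centroid(feature, centroids):
    """Assign a window feature the label of its closest class centroid."""
    return min(centroids, key=lambda lbl: abs(feature - centroids[lbl]))

# Hypothetical training summary: low deviation = "sitting", high = "walking".
centroids = {"sitting": 0.05, "walking": 1.0}
stream = [9.8, 9.81, 9.79, 9.8, 8.0, 11.5, 8.2, 11.0]
labels = [nearest_centroid(f, centroids) for f in window_features(stream)]
```

An online version would run the same feature extraction over a sliding buffer of live sensor readings on the device; the offline/online split in the agenda refers to exactly this distinction.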
Objectives: The participants will learn how to prototype context-aware
applications, with a focus on the features and limitations of the iPhone platform.
To participate, an Intel-based MacBook and an iPhone are useful but not required.
The offline plotting, recognition and training scripts are written in Python
(Enthought suite), so some knowledge of Python is helpful, though likewise
not required.
The Enthought suite can easily be installed on Windows, Mac and Linux.
Both presenters have had several applications available on the App Store for
over a year.
Agenda:
Introduction
Overview of the iPhone for activity recognition
Primer into prototyping tools
Recording and analyzing data from iPhone sensors
Building activity recognition classifiers
Testing offline/online
Summary
Brief Bio of the Presenters:
David Bannach is a doctoral candidate and a member of the research staff at
the Embedded Systems Laboratory of the University of Passau.
His research interests focus on software systems for context-aware computing.
He received his diploma in computer science from ETH Zurich.
Kai Kunze is a doctoral candidate and a member of the research staff at the
Embedded Systems Laboratory of the University of Passau. His research focus
includes activity recognition in realistic scenarios and machine learning.
He received his MSc in computer science from IU Bruchsal.
He has interned at Sun Labs Europe, France, and at PARC, USA.
Prix Ars Electronica Gala (18:30 - 21:00)
During the Ars Electronica Festival in September, the Golden Nicas, Awards of Distinction and Honorary Mentions are awarded with pomp at the Ars Electronica Gala. The winners of the Golden Nicas and Awards of Distinction are also invited to present their projects at the Prix Ars Electronica Forums which are held over a number of days. Webcasts of these forums are available at www.aec.at.
The conference registration includes an Ars Festival Pass.
Please visit the Prix Ars Electronica Homepage for more information.