Rebecca Fiebrink et al.: Human Model Evaluation in Interactive Supervised Learning.
Moreover, we invite you to join us for a post-meeting pint at The Peasant.
From the Abstract […]
We present work studying the evaluation practices of end users interactively building supervised learning systems for real-world gesture analysis problems. We examine users’ model evaluation criteria, which span conventionally relevant criteria such as accuracy and cost, as well as novel criteria such as unexpectedness.
The authors describe three studies of people applying supervised learning to their work in computer music:
1. A user-centered design process with seven composers, which focused on the refinement of the Wekinator. (http://wekinator.cs.princeton.edu/)
2. Students using the Wekinator in an assignment focused on supervised learning in interactive music performance systems.
3. A case study in which they worked with a professional cellist/composer to build a gesture recognition system for a sensor-equipped cello bow (the K-Bow).