Exploring Design Principles for Speech-to-Visualization Data Entry Interfaces

Research output: Other contribution

Abstract

We present an interface for automatic transcription and simultaneous visualization of spoken narratives on a timeline. We discuss the results of initial laboratory testing of the interface and of interviews with prospective users of the approach in two different domains. Speech transcription and entity recognition errors inherent in machine-learning-based approaches to natural language processing place special requirements on system-user interaction. Based on the results of this testing and the interviews with potential users, we outline design principles for this kind of interface. The presented approach differs from other state-of-the-art text-to-visualization systems in that it constructs a visualization from a speech narrative while automatically identifying the data of interest, as opposed to building a plot visualization of user-specified structured data based on the user's spoken instructions.

Original language: English
Volume: July (3rd Quarter/Summer)
State: Published - 2023

Publication series

Name: Springer, Cham
