Piwek, Paul; Hernault, Hugo; Prendinger, Helmut and Ishizuka, Mitsuru
This is the latest version of this eprint.
|DOI (Digital Object Identifier) Link:||http://doi.org/10.1007/978-3-540-74997-4_16|
The Text2Dialogue (T2D) system that we are developing allows digital content creators to generate attractive multi-modal dialogues presented by two virtual agents, simply by providing textual information as input. We use Rhetorical Structure Theory (RST) to decompose text into segments and to identify rhetorical discourse relations between them. These are then 'acted out' by two 3D agents using synthetic speech and appropriate conversational gestures. In this paper, we present version 1.0 of the T2D system and focus on the novel technique that it uses for mapping rhetorical relations to question-answer pairs, thus transforming (monological) text into a form that supports dialogues between virtual agents.
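The core idea of the abstract, mapping an RST relation between two text segments to a question-answer exchange between two agents, can be sketched as follows. This is an illustrative reconstruction, not the actual T2D implementation: the relation names follow RST, but the question templates, function names, and fallback behaviour are assumptions made for the example.

```python
# Hypothetical sketch: turn one RST relation (holding between a nucleus
# and a satellite segment) into a two-turn question-answer dialogue.
# Templates below are illustrative, not the T2D system's actual rules.

QUESTION_TEMPLATES = {
    # relation name -> question one agent asks, prompting the other
    "condition": "What happens if {satellite}?",
    "elaboration": "Can you tell me more about that?",
    "cause": "Why is that?",
}

def relation_to_dialogue(relation, nucleus, satellite):
    """Map an RST relation to a (speaker, utterance) turn list."""
    template = QUESTION_TEMPLATES.get(relation.lower())
    if template is None:
        # No mapping known: fall back to reading both segments out
        # as plain statements, one per agent.
        return [("Agent A", nucleus), ("Agent B", satellite)]
    question = template.format(satellite=satellite)
    # The questioner's turn is derived from the satellite; the answer
    # delivers the nucleus, the more central segment in RST terms.
    return [("Agent A", question), ("Agent B", nucleus)]

turns = relation_to_dialogue(
    "cause",
    "The system decomposes the input text into segments.",
    "plain monologue is less engaging than dialogue",
)
for speaker, utterance in turns:
    print(f"{speaker}: {utterance}")
```

A full system would of course need a discourse parser to produce the relations and segment boundaries in the first place; this sketch only illustrates the final mapping step.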
|Item Type:||Conference Item|
|Keywords:||Embodied Conversational Agents, Virtual Agents, Intelligent Agents, Multimedia, Dialogue, Natural Language Generation|
|Academic Unit/Department:||Mathematics, Computing and Technology > Computing & Communications|
|Interdisciplinary Research Centre:||Centre for Research in Computing (CRC)|
|Depositing User:||Paul Piwek|
|Date Deposited:||31 Oct 2008 00:32|
|Last Modified:||24 Feb 2016 05:53|
Available Versions of this Item
T2D: Generating Dialogues Between Virtual Agents Automatically from Text. (deposited 18 Sep 2007)
- T2D: Generating Dialogues Between Virtual Agents Automatically from Text. (deposited 31 Oct 2008 00:32) [Currently Displayed]