Morales, Rafael; Van Labeke, Nicolas and Brna, Paul
(2006).
DOI: https://doi.org/10.1007/11925231_20
Abstract
We analyse how a learner modelling engine called xLM, which uses belief functions for evidence and belief representation, reacts to different input information about the learner, in terms of changes in the state of its beliefs and the decisions it derives from them. The paper covers xLM's induction of evidence of different strengths from the qualitative and quantitative properties of the input, the amount of indirect evidence derived from direct evidence, and the differences in beliefs and decisions that result from interpreting different sequences of events simulating learners evolving in different directions. The results presented here support our claim that xLM is a proof of existence for a generic and potentially comprehensive learner modelling subsystem that explicitly represents uncertainty, conflict and ignorance in its beliefs. These are key properties of learner modelling engines in the bizarre world of open Web-based learning environments that rely on the content+metadata paradigm.
About
- Item ORO ID
- 34581
- Item Type
- Conference or Workshop Item
- ISSN
- 0302-9743
- Extra Information
-
MICAI 2006: Advances in Artificial Intelligence
5th Mexican International Conference on Artificial Intelligence, Apizaco, Mexico, November 13-17, 2006. Proceedings
Editors: Alexander Gelbukh, Carlos Alberto Reyes-Garcia
Lecture Notes in Computer Science, Vol. 4293, 2006
ISBN: 978-3-540-49026-5 (Print)
ISBN: 978-3-540-49058-6 (Online)
pp 208-217
- Academic Unit or School
- Institute of Educational Technology (IET)
- Copyright Holders
- © 2006 Springer-Verlag
- Related URLs
-
- http://nvl.calques3d.org/ (Author Website)
- Depositing User
- Nicolas Van Labeke