Law, Effie Lai-Chong and Wild, Fridolin (2014).
DOI: https://doi.org/10.1007/978-3-319-02399-1_3
Abstract
Evaluating highly dynamic and heterogeneous Personal Learning Environments (PLEs) is extremely challenging. Components of PLEs are selected and configured by individual users based on their personal preferences, needs, and goals, and the resulting systems usually evolve over time in response to contextual opportunities and constraints. Because such dynamic systems have no predefined configurations or user interfaces, traditional evaluation methods often fall short or are even inappropriate. A host of factors influences the extent to which a PLE successfully supports a learner in achieving specific learning outcomes. We categorize these factors along four major dimensions: technological, organizational, psycho-pedagogical, and social. Each dimension is informed by relevant theoretical models (e.g., the Information System Success Model, Community of Practice, self-regulated learning) and subsumes a set of metrics that can be assessed with a range of approaches. Among others, usability and user experience play an indispensable role in the acceptance and diffusion of innovative technologies such as PLEs. Traditional quantitative and qualitative methods such as questionnaires and interviews should be deployed alongside emergent ones such as learning analytics (e.g., context-aware metadata) and narrative-based methods. Crucial for maximizing the validity of the evaluation is the triangulation of empirical findings from multi-perspective (end-users, developers, and researchers) and mixed-method (qualitative and quantitative) data sources. The framework employs a cyclic process that integrates findings across cases through cross-case analysis in order to gain deeper insight into the questions of how and why PLEs work.