Nguyen, Tu; Power, Richard; Piwek, Paul and Williams, Sandra (2012).
URL: http://www.ida.liu.se/~patla/conferences/WoDOOM12/...
Abstract
Debugging OWL ontologies can be aided by automated reasoners that generate entailments, including undesirable ones. This information is only useful, however, if developers understand why the entailments hold. To support domain experts (with limited knowledge of OWL), we are developing a system that explains, in English, why an entailment follows from an ontology. In planning such explanations, our system starts from a justification of the entailment and constructs a proof tree that includes intermediate statements linking the justification to the entailment. Proof trees are constructed from a set of intuitively plausible deduction rules. Here we report on a study in which we collected empirical frequency data on the understandability of the deduction rules, resulting in a facility index for each rule. This measure forms the basis for making a principled choice among alternative explanations, and for identifying steps in an explanation that are likely to require extra elucidation.
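To make the selection idea concrete, the Python sketch below (not the authors' implementation) shows one way per-rule facility indices could rank alternative proof trees and flag steps that may need extra elucidation. All rule names, index values, the elucidation threshold, and the scoring policy (ranking a tree by its least-understandable step) are illustrative assumptions, not data or methods from the paper.

```python
# A minimal sketch, assuming hypothetical facility indices per deduction rule.
# None of the rule names or numbers below come from the paper.

from dataclasses import dataclass, field

# Assumed facility indices: the fraction of study participants who
# understood each deduction rule (higher = easier to follow).
FACILITY_INDEX = {
    "SubClassTransitivity": 0.95,  # assumed value
    "DomainRestriction": 0.70,     # assumed value
    "DisjointClasses": 0.40,       # assumed value
}

ELUCIDATION_THRESHOLD = 0.75  # assumed cutoff for "needs extra explanation"

@dataclass
class ProofStep:
    rule: str                  # deduction rule applied at this node
    conclusion: str            # intermediate or final statement, in English
    premises: list["ProofStep"] = field(default_factory=list)

def steps(tree: ProofStep):
    """Yield every step in the proof tree, root first."""
    yield tree
    for premise in tree.premises:
        yield from steps(premise)

def tree_facility(tree: ProofStep) -> float:
    """Score a proof tree by its hardest step, so the chosen explanation
    avoids deduction rules that readers tend to misunderstand."""
    return min(FACILITY_INDEX[s.rule] for s in steps(tree))

def hard_steps(tree: ProofStep) -> list[ProofStep]:
    """Steps likely to require extra elucidation in the generated text."""
    return [s for s in steps(tree) if FACILITY_INDEX[s.rule] < ELUCIDATION_THRESHOLD]

if __name__ == "__main__":
    # Two alternative (toy) proof trees for the same entailment.
    proof_a = ProofStep("DisjointClasses", "Nothing is both a cat and a dog",
                        [ProofStep("SubClassTransitivity", "Every kitten is an animal")])
    proof_b = ProofStep("DomainRestriction", "Everything that purrs is a cat",
                        [ProofStep("SubClassTransitivity", "Every kitten is an animal")])

    best = max([proof_a, proof_b], key=tree_facility)
    print("Chosen explanation:", best.conclusion)
    for step in hard_steps(best):
        print("Needs extra elucidation:", step.conclusion)
```

Scoring a tree by its minimum-facility step is only one plausible aggregation policy; the per-rule measure described in the abstract could equally be combined in other ways, for example by averaging across steps.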