The Open University
AutoFocus: Interpreting Attention-based Neural Networks by Code Perturbation

Bui, Nghi D. Q.; Yu, Yijun and Jiang, Lingxiao. AutoFocus: Interpreting Attention-based Neural Networks by Code Perturbation. In: The 34th IEEE/ACM International Conference on Automated Software Engineering (ASE 2019) (Lawall, Julia and Marinov, Darko eds.), 11-15 Nov 2019, San Diego, California, USA.

Full text available as: PDF (Accepted Manuscript), 609kB

Abstract

Despite being adopted in software engineering tasks, deep neural networks are treated mostly as black boxes because of the difficulty of interpreting how the networks infer outputs from inputs. To address this problem, we propose AutoFocus, an automated approach for rating and visualizing the importance of input elements based on their effects on the outputs of the networks. The approach is built on our hypotheses that (1) attention mechanisms incorporated into neural networks can generate discriminative scores for various input elements and (2) the discriminative scores reflect the effects of input elements on the outputs of the networks. This paper verifies the hypotheses by applying AutoFocus to the task of algorithm classification (i.e., given program source code as input, determine the algorithm implemented by the program). AutoFocus systematically identifies and perturbs code elements in a program, and quantifies the effects of the perturbed elements on the network's classification results. Based on an evaluation of more than 1,000 programs implementing 10 different sorting algorithms, we observe that the attention scores are highly correlated with the effects of the perturbed code elements. Such a correlation provides a strong basis for the use of attention scores to interpret the relations between code elements and the algorithm classification results of a neural network, and we believe that visualizing the code elements of an input program ranked according to their attention scores can facilitate faster program comprehension with reduced code.
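The perturbation idea summarized in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: `classify` below is a hypothetical stand-in for a trained attention-based algorithm classifier, and a simple keyword heuristic plays its role so the example is self-contained. The core loop, deleting each code element in turn and measuring the change in the model's confidence, is the part that mirrors the approach described above.

```python
# Sketch of perturbation-based importance scoring (illustrative only).
# `classify` is a hypothetical placeholder for a real neural classifier.

from typing import List


def classify(tokens: List[str]) -> float:
    """Toy stand-in for a trained model: returns a confidence score
    that the token sequence implements a particular sorting algorithm."""
    keywords = {"for", "if", "swap"}  # assumed discriminative elements
    hits = sum(1 for t in tokens if t in keywords)
    return hits / max(len(tokens), 1)


def perturbation_effects(tokens: List[str]) -> List[float]:
    """For each token, delete it and record how much the classifier's
    confidence drops; larger drops indicate more important elements."""
    base = classify(tokens)
    effects = []
    for i in range(len(tokens)):
        perturbed = tokens[:i] + tokens[i + 1:]
        effects.append(base - classify(perturbed))
    return effects


tokens = ["for", "i", "if", "a[i]>a[i+1]", "swap", "a"]
effects = perturbation_effects(tokens)

# Rank elements by the effect of their removal; in AutoFocus these
# rankings are then compared against the network's attention scores.
ranked = sorted(zip(tokens, effects), key=lambda p: p[1], reverse=True)
```

In the paper's setting, the resulting per-element effects would be correlated with the attention scores produced by the network; here the ranking alone shows how deletion effects single out the discriminative elements.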

Item Type: Conference or Workshop Item
Copyright Holders: 2019 ACM, 2019 IEEE
Project Funding Details:
- Academic Research Fund (AcRF) Tier 1 grant from SIS at SMU; Project ID: Not Set; Funding Body: Singapore Ministry of Education (MOE)
- SAUSE: Secure, Adaptive, Usable Software Engineering; Project ID: EP/R013144/1 (previous: EP/R005095/1); Funding Body: EPSRC (Engineering and Physical Sciences Research Council)
- Drone Identity; Project ID: No 783287; Funding Body: EU H2020 SESAR EngageKTN
Keywords: attention mechanisms; neural networks; algorithm classification; interpretability; code perturbation; program comprehension
Academic Unit/School: Faculty of Science, Technology, Engineering and Mathematics (STEM) > Computing and Communications
Faculty of Science, Technology, Engineering and Mathematics (STEM)
Research Group: Centre for Research in Computing (CRC)
Item ID: 66812
Depositing User: Yijun Yu
Date Deposited: 20 Sep 2019 09:44
Last Modified: 21 Sep 2019 04:26
URI: http://oro.open.ac.uk/id/eprint/66812