Reporting Statistical Validity and Model Complexity in Machine Learning based Computational Studies

Olorisade, Babatunde Kazeem; Brereton, Pearl; Andras, PE

Authors

Babatunde Kazeem Olorisade

Pearl Brereton

Prof Peter Andras P.Andras@napier.ac.uk
Dean of School of Computing, Engineering and the Built Environment



Abstract

Background: Statistical validity and model complexity are both important concepts for enhancing the understanding and assessing the correctness of computational models. However, information about these is often missing from publications applying machine learning.

Aim: The aim of this study is to show the importance of providing details that can indicate the statistical validity and complexity of models in publications. This is explored in the context of citation screening automation using machine learning techniques.

Method: We built 15 Support Vector Machine (SVM) models, each developed using word2vec (average word) features and the data for one of 15 review topics from the Drug Evaluation Review Program (DERP) of the Agency for Healthcare Research and Quality (AHRQ).
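As an illustration of the feature construction step, the sketch below (in Python, with illustrative names not taken from the paper) averages the word2vec vectors of a document's tokens into a single feature vector; `embeddings` is assumed to be any pre-trained token-to-vector lookup.

import numpy as np

def average_word2vec(tokens, embeddings, dim=300):
    """Average the word2vec vectors of the tokens that have an embedding.

    `embeddings` is assumed to map token -> numpy vector (for example,
    loaded from a pre-trained word2vec model). Out-of-vocabulary tokens
    are skipped, and a zero vector is returned if no token is covered.
    """
    vectors = [embeddings[token] for token in tokens if token in embeddings]
    if not vectors:
        return np.zeros(dim)
    return np.mean(vectors, axis=0)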

Results: The word2vec features were found to be sufficiently linearly separable by the SVM, and consequently we used the linear kernel. In 11 of the 15 models, over 80% of the negative (majority) class training data and approximately 45% of the positive class training data were retained as support vectors (SVs).
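As a minimal sketch of how such support-vector proportions can be inspected, the snippet below trains a linear-kernel SVC with scikit-learn and reports the fraction of each class retained as support vectors; the random data is only a placeholder, not the DERP data used in the paper.

import numpy as np
from sklearn.svm import SVC

# Placeholder data standing in for averaged word2vec features and
# imbalanced include/exclude labels (shapes are illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 300))          # 500 documents, 300-d averaged vectors
y = (rng.random(500) < 0.1).astype(int)  # roughly 10% positives

model = SVC(kernel='linear')
model.fit(X, y)

# n_support_ gives the number of support vectors per class, aligned with classes_.
for cls, n_sv in zip(model.classes_, model.n_support_):
    n_cls = int(np.sum(y == cls))
    print(f"class {cls}: {n_sv}/{n_cls} training vectors are support vectors "
          f"({100 * n_sv / n_cls:.1f}%)")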

Conclusions: In this context, examining the SVs revealed that the models are overly complex relative to the ideal expectation that no more than 2%-5% (and preferably far fewer) of the training vectors become support vectors.

Presentation Conference Type Conference Paper (Published)
Conference Name EASE'17: 21st International Conference on Evaluation and Assessment in Software Engineering
Start Date Jun 15, 2017
End Date Jun 16, 2017
Online Publication Date Jun 15, 2017
Publication Date 2017-06
Deposit Date Nov 4, 2021
Publisher Association for Computing Machinery (ACM)
Pages 128-133
Book Title EASE'17: Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering
ISBN 978-1-4503-4804-1
DOI https://doi.org/10.1145/3084226.3084283
Public URL http://researchrepository.napier.ac.uk/Output/2808890