Using Persistence Barcode to Show the Impact of Data Complexity on the Neural Network Architecture
Keywords: ANN Architecture, Alpha Complexity, Topological Data Analysis, Betti Number, Persistence Barcode
It is well established that the initialization of architectural parameters strongly affects the entire learning process, so knowledge of the mathematical properties of a dataset can give a neural network architecture better expressivity and capacity. In this paper, five random samples of the Volve field dataset were taken. A training set was then specified, and the persistent homology of the dataset was computed to show the impact of data complexity on the selection of a multilayer perceptron regressor (MLPR) architecture. The proposed method provides a well-rounded strategy for computing data complexity: it is a compound algorithm composed of the t-SNE method, an alpha-complexity algorithm, and a persistence-barcode reading method that extracts the Betti numbers of a dataset. An MLPR was then trained on that dataset, first with a single hidden layer and an increasing number of hidden neurons, and then with both the number of hidden layers and the number of hidden neurons increased. Our empirical analysis shows that the training efficiency of the MLPR depends strongly on its architecture's ability to express the homology of the dataset.
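As a rough illustration of the architecture sweep described above (a single hidden layer with an increasing number of neurons, then additional hidden layers), the following sketch uses scikit-learn's `MLPRegressor`. The synthetic regression data and the specific layer sizes here are illustrative assumptions standing in for the Volve field samples, not the paper's actual configuration; the persistent-homology step (t-SNE embedding plus alpha-complex barcode) would additionally require a TDA library such as GUDHI and is not reproduced here.

```python
# Sketch: sweeping MLPR architectures of increasing capacity and
# comparing held-out fit quality. Synthetic data stands in for the
# Volve field dataset (assumption for illustration only).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                 # 500 samples, 5 features
y = X[:, 0] * X[:, 1] + np.sin(X[:, 2])       # nonlinear target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# First widen a single hidden layer, then add a second layer,
# mirroring the sweep described in the abstract.
architectures = [(8,), (32,), (128,), (32, 32), (128, 128)]
scores = {}
for hidden in architectures:
    mlp = MLPRegressor(hidden_layer_sizes=hidden,
                       max_iter=2000, random_state=0)
    mlp.fit(X_tr, y_tr)
    scores[hidden] = mlp.score(X_te, y_te)    # R^2 on held-out data

for hidden, r2 in scores.items():
    print(hidden, round(r2, 3))
```

One would then relate the point at which extra capacity stops improving the held-out score to the Betti numbers extracted from the persistence barcode of the training set.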