A Proposal about Normalization of Experimental Designs in Computational Intelligence

Published: 04 December 2015 by MDPI
in MOL2NET'15, Conference on Molecular, Biomed., Comput. & Network Science and Engineering, 1st ed.
congress USEDAT-01: USA-Europe Data Analysis Training Congress, Cambridge, UK-Bilbao, Spain-Miami, USA, 2015

Abstract:
Experimental analysis starts from very similar premises: given a specific problem, we must either collect or generate a dataset and choose the best model according to its performance. Several families of techniques can be evaluated (e.g. statistical or metaheuristic approaches), and results from previous works should also be taken into account. It is therefore necessary to analyse the behaviour of each method with respect to the others under equal conditions, and to formalize an experimental design that solves the problem as effectively as possible with the different approaches while estimating their error rates, so that results from different methods can be compared. In this work we propose four phases for any experimental design: data extraction, data pre-processing, learning, and selection of the best model. These generic phases encapsulate the main operations and steps that should be performed during an experimental analysis (some of them mandatory, others optional), independently of the kind of data or method used, and can be adapted to new specific domains. The proposed experimental design has proven to be a valuable contribution for comparing different techniques under the same conditions in different scopes.
Keywords: Experimental Design; Statistical Analysis; Computational Intelligence
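The four proposed phases (data extraction, data pre-processing, learning, and selection of the best model) could be sketched roughly as follows. This is a minimal stdlib-only illustration, not the authors' implementation: the synthetic dataset, the two candidate techniques (a nearest-centroid classifier and a majority-class baseline), and the k-fold error-rate estimate are all illustrative assumptions.

```python
import random
from statistics import mean

random.seed(0)

# Phase 1: extraction of data (here, a synthetic two-class dataset —
# an assumption for illustration; any collected dataset would do)
def extract_data(n=100):
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = random.gauss(label * 2.0, 1.0)  # class-dependent feature
        data.append((x, label))
    return data

# Phase 2: pre-processing of data (min-max normalisation of the feature)
def preprocess(data):
    xs = [x for x, _ in data]
    lo, hi = min(xs), max(xs)
    return [((x - lo) / (hi - lo), y) for x, y in data]

# Phase 3: learning — two simple candidate techniques (illustrative stand-ins
# for the statistical or metaheuristic approaches mentioned in the abstract)
def train_centroid(train):
    # nearest-centroid: predict the class whose mean feature is closest
    cents = {}
    for c in (0, 1):
        vals = [x for x, y in train if y == c]
        cents[c] = mean(vals) if vals else 0.0
    return lambda x: min(cents, key=lambda c: abs(x - cents[c]))

def train_majority(train):
    # baseline: always predict the most frequent class in the training set
    maj = max((0, 1), key=lambda c: sum(1 for _, y in train if y == c))
    return lambda x: maj

# Phase 4: selection of the best model via k-fold error-rate estimation,
# so that both methods are compared under the same conditions
def kfold_error(data, trainer, k=5):
    folds = [data[i::k] for i in range(k)]
    errs = []
    for i in range(k):
        test = folds[i]
        train = [p for j, f in enumerate(folds) if j != i for p in f]
        model = trainer(train)
        errs.append(mean(1.0 if model(x) != y else 0.0 for x, y in test))
    return mean(errs)

data = preprocess(extract_data())
scores = {name: kfold_error(data, t)
          for name, t in [("centroid", train_centroid),
                          ("majority", train_majority)]}
best = min(scores, key=scores.get)
print(best)
```

Because every candidate is trained and evaluated on the same normalised data with the same fold partition, the resulting error rates are directly comparable, which is the point of fixing the experimental design before comparing techniques.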