Rafael Pino-Mejías
Top co-authors
Antonio Blanco: 132 shared publications
Esther-Lydia Silva-Ramírez: 3 shared publications (Universidad de Cádiz)
Manuel López-Coello: 3 shared publications (Universidad de Cádiz)
María-Dolores Cubiles-De-La-Vega: 3 shared publications (Universidad de Sevilla)

4 Publications, 0 Reads, 0 Downloads, 31 Citations
Publication Record
Distribution of articles published per year: 2004 - 2013. Total number of journals published in: 3.

Publications
Article (6 Reads, 18 Citations)
Credit scoring models for the microfinance industry using neural networks: Evidence from Peru
Antonio Blanco, Salvador Rayo, Antonio Blanco-Oliver, Rafael...
Published: 01 January 2013
Expert Systems with Applications, doi: 10.1016/j.eswa.2012.07.051
Credit scoring systems are currently in common use by numerous financial institutions worldwide. However, credit scoring within the microfinance industry is a relatively recent application, and no model which employs a non-parametric statistical technique has yet, to the best of our knowledge, been published. This lack is surprising since the implementation of credit scoring should contribute towards the efficiency of microfinance institutions, thereby improving their competitiveness in an increasingly constrained environment. This paper builds several non-parametric credit scoring models based on the multilayer perceptron approach (MLP) and benchmarks their performance against other models which employ the traditional linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and logistic regression (LR) techniques. Based on a sample of almost 5500 borrowers from a Peruvian microfinance institution, the results reveal that neural network models outperform the other three classic techniques both in terms of the area under the receiver-operating characteristic curve (AUC) and in terms of misclassification costs.
Highlights:
► Multilayer perceptron credit scoring models work well in our data set from the microfinance industry.
► Multilayer perceptron-based models outperform logistic regression and linear and quadratic discriminant analysis.
► Microfinance institutions that apply multilayer perceptron credit scoring models will achieve a competitive advantage.
► The multilayer perceptron credit scoring model with the highest performance uses regularization procedures.
► Freely available software, the R system, can be used to fit these models.
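As a rough illustration of the comparison described in this abstract, the Python sketch below fits an MLP credit scoring model and a logistic regression baseline and compares them by AUC. The synthetic data, network size and regularization strength are illustrative assumptions standing in for the Peruvian borrower sample, and the abstract itself points to the R system rather than Python.

```python
# Hedged sketch: an MLP credit scoring model compared with a logistic regression
# baseline by AUC. Synthetic data stands in for the ~5500 Peruvian microfinance
# borrowers; the features, network size and regularization are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Synthetic "borrower" records with an imbalanced default rate.
X, y = make_classification(n_samples=5500, n_features=12, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

models = {
    "logistic regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    # alpha is an L2 penalty, echoing the highlight that the best-performing
    # MLP model used regularization procedures.
    "multilayer perceptron": make_pipeline(StandardScaler(),
                                           MLPClassifier(hidden_layer_sizes=(10,),
                                                         alpha=1e-2, max_iter=2000,
                                                         random_state=0)),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    default_prob = model.predict_proba(X_te)[:, 1]   # predicted probability of default
    print(f"{name}: AUC = {roc_auc_score(y_te, default_prob):.3f}")
```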
Article (2 Reads, 12 Citations)
Missing value imputation on missing completely at random data using multilayer perceptrons
Esther-Lydia Silva-Ramírez, Rafael Pino-Mejías, Manuel López...
Published: 01 January 2011
Neural Netw, doi: 10.1016/j.neunet.2010.09.008
Data mining is based on data files which usually contain errors in the form of missing values. This paper focuses on a methodological framework for the development of an automated data imputation model based on artificial neural networks. Fifteen real and simulated data sets, ranging in size from 47 to 1389 records, are exposed to a perturbation experiment based on the random generation of missing values, with the probability of a missing value set to 0.05 for each data set. Several architectures and learning algorithms for the multilayer perceptron are tested and compared with three classic imputation procedures: mean/mode imputation, regression and hot-deck. The obtained results, considering different performance measures, suggest not only that this approach improves the quality of a database with missing values, but also that the best results are clearly obtained using the multilayer perceptron model on data sets with categorical variables. Three learning rules (Levenberg–Marquardt, BFGS Quasi-Newton and Conjugate Gradient Fletcher–Reeves Update) and a small number of hidden nodes are recommended.
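As a rough illustration of the perturbation-and-impute procedure outlined above (values removed completely at random with probability 0.05, then reconstructed by a multilayer perceptron trained on the complete records), the Python sketch below imputes one numeric column of a small stand-in data set. The data set, the imputed column and the network configuration are assumptions for illustration, not the paper's experimental setup; scikit-learn's lbfgs solver is only loosely related to the quasi-Newton learning rule the paper recommends.

```python
# Hedged sketch: MCAR perturbation (p = 0.05) on one numeric column, followed
# by MLP imputation trained on the rows that remain complete. Data set, column
# choice and network configuration are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = load_iris().data.copy()             # small stand-in data set (150 records)
target_col = 0                          # the column to perturb and then impute

# MCAR perturbation: each value in the target column goes missing with p = 0.05.
missing = rng.random(len(X)) < 0.05
true_values = X[missing, target_col].copy()
X[missing, target_col] = np.nan

# Train an MLP to predict the target column from the other columns, using only
# the rows that are still complete. The lbfgs solver is a quasi-Newton method,
# loosely in the spirit of the BFGS learning rule mentioned in the abstract.
other_cols = [c for c in range(X.shape[1]) if c != target_col]
complete = ~missing
mlp = MLPRegressor(hidden_layer_sizes=(5,), solver="lbfgs",
                   max_iter=5000, random_state=0)
mlp.fit(X[complete][:, other_cols], X[complete][:, target_col])

# Impute the missing entries and compare with the values removed earlier.
imputed = mlp.predict(X[missing][:, other_cols])
print("MAE of MLP imputation:", round(mean_absolute_error(true_values, imputed), 3))
```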
Book Chapter (1 Read, 0 Citations)
Evaluating the Performance of the Multilayer Perceptron as a Data Editing Tool
Ma-Dolores Cubiles-De-La-Vega, Esther-Lydia Silva-Ramírez, R...
Published: 01 January 2009
Lecture Notes in Computer Science, doi: 10.1007/978-3-642-02478-8_163
Usually, the knowledge discovery process is developed using data sets which contain errors in the form of inconsistent values. The activity aimed at detecting and correcting logical inconsistencies in data sets is known as data editing. Traditional tools for this task, such as the Fellegi-Holt methodology, require heavy intervention by subject matter experts. This paper discusses a methodological framework for the development of an automated data editing process which can be accomplished by a general nonlinear approximation model, such as an artificial neural network. We have performed an empirical evaluation of the performance of this approach over eight data sets, considering several hidden layer sizes and seven learning algorithms for the multilayer perceptron. The obtained results suggest that this approach performs encouragingly, providing a promising data cleaning tool.
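The chapter treats the multilayer perceptron as a general nonlinear approximator for automated data editing. One possible reading, sketched below in Python under stated assumptions, is to train an MLP to map artificially corrupted records back to their clean versions; the data set, corruption model and network size are illustrative choices, not the chapter's experimental design.

```python
# Hedged sketch: an MLP trained to map perturbed records back to their clean
# versions, as one reading of using a nonlinear approximator for automated data
# editing. Data set, corruption model and network size are illustrative choices.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
clean = load_iris().data                     # stand-in for a clean data set

# Simulate editing errors: about 10% of cells are overwritten with out-of-range
# values, playing the role of logical inconsistencies to be corrected.
corrupted = clean.copy()
mask = rng.random(clean.shape) < 0.10
corrupted[mask] = rng.uniform(-10, 10, size=mask.sum())

# Train an MLP to reproduce the clean record from its corrupted version.
editor = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                      max_iter=5000, random_state=0)
editor.fit(corrupted, clean)
edited = editor.predict(corrupted)

# The edited records should sit closer to the clean data than the corrupted input.
print("mean abs error before editing:", round(float(np.abs(corrupted - clean).mean()), 3))
print("mean abs error after editing: ", round(float(np.abs(edited - clean).mean()), 3))
```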
Book Chapter (1 Read, 1 Citation)
Bagging Classification Models with Reduced Bootstrap
Rafael Pino-Mejías, María-Dolores Cubiles-De-La-Vega, Manuel...
Published: 01 January 2004
Lecture Notes in Computer Science, doi: 10.1007/978-3-540-27868-9_106
Bagging is an ensemble method proposed to improve the predictive performance of learning algorithms, being especially effective when applied to unstable predictors. It is based on the aggregation of a certain number of prediction models, each one generated from a bootstrap sample of the available training set. We introduce an alternative method for bagging classification models, motivated by the reduced bootstrap methodology, where the generated bootstrap samples are forced to have a number of distinct original observations between two values k1 and k2. Five choices for k1 and k2 are considered, and the five resulting models are empirically studied and compared with bagging on three real data sets, employing classification trees and neural networks as the base learners. This comparison reveals that the reduced bagging technique tends to diminish both the mean and the variance of the error rate.
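The defining ingredient of the reduced bootstrap is a resample whose number of distinct original observations is constrained to lie between k1 and k2. The Python sketch below realizes that constraint by rejection sampling and plugs it into a small bagging ensemble of classification trees; the rejection-sampling construction, the band [k1, k2] and the ensemble size are assumptions made for illustration rather than the paper's exact procedure.

```python
# Hedged sketch: bagging classification trees on "reduced bootstrap" samples,
# i.e. bootstrap samples whose number of DISTINCT original observations is
# forced into the range [k1, k2]. Rejection sampling and the values below are
# illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def reduced_bootstrap_indices(n, k1, k2, rng):
    """Draw bootstrap indices of size n until the distinct count lies in [k1, k2]."""
    while True:
        idx = rng.integers(0, n, size=n)
        if k1 <= len(np.unique(idx)) <= k2:
            return idx

X, y = load_breast_cancer(return_X_y=True)            # stand-in binary data set
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n = len(X_tr)
# An ordinary bootstrap sample holds about 0.632 * n distinct observations on
# average; this band mildly restricts that count from below.
k1, k2 = int(0.58 * n), int(0.63 * n)

trees = []
for _ in range(50):                                    # 50 base classification trees
    idx = reduced_bootstrap_indices(n, k1, k2, rng)
    trees.append(DecisionTreeClassifier(random_state=0).fit(X_tr[idx], y_tr[idx]))

# Aggregate the ensemble by majority vote (labels are 0/1 here).
votes = np.stack([t.predict(X_te) for t in trees])
ensemble_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("reduced-bagging test accuracy:", round(accuracy_score(y_te, ensemble_pred), 3))
```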