Robustness of the Random Forest-based gene selection methods

Abstract

Gene selection is an important part of microarray data analysis, as it can reveal information leading to a better understanding of the mechanisms behind the investigated phenomenon. At the same time, it is a very hard task due to the noisy nature of such data. To this end, gene selection is often approached through machine learning, in particular with the Random Forest method, which has several features crucial for this purpose. In this work, four state-of-the-art Random Forest-based feature selection methods were compared in a gene selection context; the analysis focused on the stability of selection, which is a prerequisite for the significance of the results, yet is often ignored in similar studies. A comparison of the post-selection accuracy of a validation Random Forest classifier revealed that all investigated methods were equivalent in this respect. However, the methods differed substantially with respect to the number of selected genes and the stability of selection. Of the analysed methods, the Boruta algorithm proved to find the most possibly important genes. Though frequently used, the post-selection classifier error rate proved to be a potentially deceiving measure of gene selection quality. With respect to the number of consistently selected genes, the Boruta algorithm was also the clear best. Although it was the most computationally intensive method, its demand could be reduced to a level comparable with the other algorithms by replacing the Random Forest importance with that produced by Random Ferns, a similar but simplified classifier. Despite their design assumptions, the minimal-optimal selection methods RRF and RFE selected a high fraction of false positives.
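To illustrate the kind of all-relevant selection the abstract discusses, the following is a minimal sketch of a Boruta-style procedure built on scikit-learn's Random Forest. It is a simplified, single-pass variant (the actual Boruta algorithm iterates the shadow-feature test with statistical correction); the function name, data set, and threshold rule are illustrative assumptions, not the authors' implementation.

```python
# Boruta-style all-relevant feature selection (simplified sketch).
# Assumptions: single pass instead of Boruta's iterative statistical test;
# synthetic data stands in for microarray expression profiles.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def boruta_style_select(X, y, n_trees=200, seed=0):
    rng = np.random.default_rng(seed)
    # "Shadow" features: each column permuted independently,
    # so they keep the marginal distributions but carry no signal.
    X_shadow = rng.permuted(X, axis=0)
    X_all = np.hstack([X, X_shadow])
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
    rf.fit(X_all, y)
    imp = rf.feature_importances_
    n = X.shape[1]
    # Keep real features more important than the best shadow feature.
    threshold = imp[n:].max()
    return np.where(imp[:n] > threshold)[0]

# Toy data: a few informative "genes" among many noise features.
X, y = make_classification(n_samples=200, n_features=50,
                           n_informative=5, n_redundant=0,
                           random_state=0)
selected = boruta_style_select(X, y)
print(selected)
```

Because the threshold is the maximum importance over all shadow features, noise variables are rejected by construction; repeating the run with different seeds and comparing the selected sets is a simple way to probe the selection stability the paper is concerned with.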
