Kilho Shin, Tetsuji Kuboyama, Takako Hashimoto, Dave Shepard
Information (Switzerland), 8(4) 159, Dec 6, 2017 Peer-reviewed
Feature selection is a useful tool for identifying which features, or attributes, of a dataset cause or explain the phenomena that the dataset describes, and for improving the efficiency and accuracy of learning algorithms that discover such phenomena. Consequently, feature selection has been studied intensively in machine learning research. However, while feature selection algorithms with excellent accuracy have been developed, they are seldom applied to high-dimensional data, because such data usually include too many instances and features, which makes traditional feature selection algorithms inefficient. To overcome this limitation, we improved the run-time performance of two of the most accurate feature selection algorithms known in the literature. The result is two accurate and fast algorithms, namely SCWC and SLCC. Multiple experiments with real social media datasets have demonstrated that our algorithms remarkably outperform the original algorithms. For example, on two datasets, one with 15,568 instances and 15,741 features and another with 200,569 instances and 99,672 features, SCWC completed feature selection in 1.4 seconds and in 405 seconds, respectively, and SLCC has turned out to be as fast as SCWC on average. This is a remarkable improvement, because we estimate that the original algorithms would need several hours to dozens of days to process the same datasets. In addition, we introduce a fast implementation of our algorithms. SCWC requires no tuning parameter, while SLCC takes a single threshold parameter, which can be used to control the number of features the algorithm selects.