Serious data miners are often faced with thousands of candidate features for their prediction or classification application, most of which have little or no value. Worse still, many of these features may be useful only in combination with certain other features while being practically worthless alone or in combination with most others. Some features may have enormous predictive power, but only within a small, specialized area of the feature space. The problems that plague modern data miners are endless.
This book presents a variety of algorithms for selecting small sets of important features from among unwieldy masses of candidates, and for extracting informative features from measured variables. The algorithms presented here include the following:
Forward Selection Component Analysis combines principal component analysis with forward and backward stepwise selection to identify a compact subset of a large collection of variables that captures the maximum possible variation within the entire set.
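To give a rough feel for the forward half of this idea, the sketch below greedily adds the variable whose inclusion lets the selected subset explain the most variance in all of the variables. It is a minimal Python approximation, not the book's implementation: the backward refinement step and the ordered-component output are omitted, and the function and data names are invented for the example.

```python
import numpy as np

def forward_variance_selection(X, n_keep):
    """Greedy sketch: pick columns of X whose span explains the most
    variance in ALL columns of X.  X is (n_samples, n_vars) with
    centered columns.  Returns the indices of the selected columns."""
    n_vars = X.shape[1]
    total_var = np.sum(X ** 2)          # total (unnormalized) variance
    selected = []
    for _ in range(n_keep):
        best_idx, best_explained = None, -1.0
        for j in range(n_vars):
            if j in selected:
                continue
            cols = X[:, selected + [j]]
            # Project every variable onto the span of the candidate subset
            # and measure how much variance the projection captures.
            beta, *_ = np.linalg.lstsq(cols, X, rcond=None)
            explained = np.sum((cols @ beta) ** 2)
            if explained > best_explained:
                best_idx, best_explained = j, explained
        selected.append(best_idx)
        print(f"kept column {best_idx}, "
              f"fraction of variance explained = {best_explained / total_var:.3f}")
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=(200, 3))                   # 3 hidden drivers
    X = base @ rng.normal(size=(3, 10)) + 0.1 * rng.normal(size=(200, 10))
    X -= X.mean(axis=0)                                # center the columns
    forward_variance_selection(X, n_keep=3)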
Local Feature Selection identifies features that may have predictive power over only a small subset of the feature domain. Such features can be profitably used by modern predictive models but may be missed by other feature selection methods.
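A toy example makes the point. In the sketch below, which illustrates the problem rather than the selection algorithm itself, a feature drives the target only where a second variable exceeds a threshold; its global correlation with the target is small enough to be discarded, while its correlation inside that region is strong.

```python
import numpy as np

# Toy illustration (not the book's algorithm): a feature that predicts the
# target only in part of the feature space can look nearly worthless to a
# global criterion such as overall correlation.
rng = np.random.default_rng(1)
n = 2000
x1 = rng.normal(size=n)              # candidate feature
x2 = rng.normal(size=n)              # defines the "local" region
noise = rng.normal(size=n)

# x1 drives the target only where x2 > 1; elsewhere the target is pure noise.
region = x2 > 1.0
y = np.where(region, x1, noise)

global_corr = np.corrcoef(x1, y)[0, 1]
local_corr = np.corrcoef(x1[region], y[region])[0, 1]
print(f"global correlation : {global_corr:.2f}")   # small, easy to discard
print(f"local  correlation : {local_corr:.2f}")    # strong inside the region
```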
Linking Features and a Target with a Hidden Markov Model is a novel approach to identifying features with predictive power. Instead of looking for a direct relationship between features and a target, we find an underlying hidden Markov model that controls the distributions of feature variables and the target simultaneously. The memory inherent in this method is especially valuable in high-noise applications such as prediction of financial markets.
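The sketch below illustrates only the underlying idea, not the book's algorithm: a single hidden Markov model is fitted jointly to a feature and a target that share an unobserved regime, and the decoded states then separate the target's distribution. The third-party hmmlearn package is an assumption made for this example, not something taken from the book.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumed third-party package

rng = np.random.default_rng(2)
n = 600

# Synthetic regime-switching data: an unobserved state persists for long
# stretches and shifts the means of both the feature and the target.
state = (np.arange(n) // 100) % 2
feature = rng.normal(loc=state * 1.5, scale=1.0)
target = rng.normal(loc=state * 0.8, scale=1.0)

# Fit a single HMM to the feature/target pair jointly.
obs = np.column_stack([feature, target])
model = GaussianHMM(n_components=2, covariance_type="full",
                    n_iter=200, random_state=0)
model.fit(obs)

# If a common hidden process drives both series, the decoded states
# should separate the target's distribution.
decoded = model.predict(obs)
for s in range(2):
    print(f"state {s}: mean target = {target[decoded == s].mean():.2f}")
```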
Traditional Stepwise Selection is improved in three ways: 1) At each step we examine a collection of 'best-so-far' feature sets instead of just incrementing a single feature set one step at a time. 2) Candidate features for inclusion are tested with cross-validation to automatically and effectively limit model complexity. This tremendously improves out-of-sample performance. 3) At each step we estimate the probability that our results so far could be just the product of random good luck, as well as the probability that the improvement obtained by adding a new variable could have been nothing more than luck.
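The sketch below illustrates only the second of these improvements, cross-validated testing of candidate features, in simplified form: it tracks a single best set rather than a collection, and it does not compute the luck probabilities. The use of scikit-learn and the function name are assumptions made for the example, not the book's code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def cv_forward_selection(X, y, max_features, cv=5):
    """Forward stepwise selection scored by cross-validation.
    Stops as soon as adding another feature fails to improve the CV score."""
    selected, best_score = [], -np.inf
    for _ in range(max_features):
        step_best, step_score = None, best_score
        for j in range(X.shape[1]):
            if j in selected:
                continue
            trial = selected + [j]
            score = cross_val_score(LinearRegression(), X[:, trial], y,
                                    cv=cv, scoring="r2").mean()
            if score > step_score:
                step_best, step_score = j, score
        if step_best is None:          # no candidate improved the CV score
            break
        selected.append(step_best)
        best_score = step_score
        print(f"added feature {step_best}, CV R^2 = {best_score:.3f}")
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = rng.normal(size=(300, 20))
    y = 2.0 * X[:, 4] - 1.5 * X[:, 9] + rng.normal(size=300)  # only 2 useful
    cv_forward_selection(X, y, max_features=5)
```

Because every candidate is judged by its cross-validated score, adding a feature that merely fits noise tends to lower the score and the search stops, which is what limits model complexity in this simplified version.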
Nominal-to-Ordinal Conversion lets us take a potentially valuable nominal variable (a category or class membership) that is unsuitable for input to a prediction model, and assign to each category a sensible numeric value that can be used as a model input.
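One simple way to pursue this goal is target-mean encoding, sketched below as an illustration of the objective rather than the book's procedure: each category is replaced by the average target value observed for that category.

```python
import numpy as np

# Illustration only (not necessarily the book's procedure): replace each
# category with the mean of the target within that category so the nominal
# variable becomes a usable numeric model input.
rng = np.random.default_rng(4)
categories = np.array(["red", "green", "blue"])
x = rng.choice(categories, size=1000)
true_effect = {"red": -1.0, "green": 0.0, "blue": 2.0}
y = np.array([true_effect[c] for c in x]) + rng.normal(size=1000)

# Learn the category -> number mapping from the data.
encoding = {c: y[x == c].mean() for c in categories}
x_numeric = np.array([encoding[c] for c in x])

for c in categories:
    print(f"{c:5s} -> {encoding[c]:6.2f}")
```

In practice the mapping would be computed out-of-fold or on a separate training set so that the target is not leaked into the input.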
All algorithms are intuitively justified and supported by all relevant equations and explanatory material. Then complete, highly commented source code is presented and explained. All source code in this book, along with an executable program demonstrating the algorithms, can be downloaded for free from TimothyMasters.info.