Workflows in Python: Curating Features and Thinking Scientifically about Algorithms

December 23, 2015 Katie Malone

This is the second post in a series about end-to-end data analysis in Python using scikit-learn Pipeline and GridSearchCV. In the first post, I got my data formatted for machine learning by encoding string features as integers, and then used the data to build several different models. I got things running really fast, which is great, but at the cost of being a little quick-and-dirty about some details. First, I encoded the features as integers, but they really should be dummy variables. Second, it's worth going through the models a little more thoughtfully, to try to understand their performance and to see whether there's any more juice I can get out of them.

I’ll start with revisiting the way that I transformed my features from strings to integers. Recall that the strings were generally identifying categorical data, like the water source of a well or the village where the well is built. A problem with representing categorical variables as integers is that integers are ordered, while categories are not. The standard way to deal with this is to use dummy variables; one-hot encoding is a very common way of dummying. Each possible category becomes a new boolean feature. For example, if my dataframe looked like this:


index    country
1        "United States"
2        "Mexico"
3        "Mexico"
4        "Canada"
5        "United States"
6        "Canada"
then after dummying it will look something like this:
index    country_UnitedStates    country_Mexico    country_Canada
1        1                       0                 0
2        0                       1                 0
3        0                       1                 0
4        0                       0                 1
5        1                       0                 0
6        0                       0                 1

This type of dummying is called one-hot encoding, because the categories are expanded over several boolean columns, only one of which is true (hot). I'll write a one-hot-encoder function that takes the data frame and the title of a column, and returns the same data frame but with one-hot encoding performed on the indicated feature. I'm using the scikit-learn OneHotEncoder object, but pandas also has a function called get_dummies() that does effectively the same thing. In fact, I find get_dummies() easier to use in many cases, but I still find it worthwhile to see a more "manual" version of the transformation at least once.


import numpy as np
import sklearn.preprocessing

def hot_encoder(df, column_name):
    column = df[column_name].tolist()
    column = np.reshape( column, (len(column), 1) )  ### needs to be an N x 1 numpy array
    enc = sklearn.preprocessing.OneHotEncoder()
    enc.fit( column )
    new_column = enc.transform( column ).toarray()
    ### making titles for the new columns, and appending them to the dataframe
    for ii in range( len(new_column[0]) ):
        this_column_name = column_name + "_" + str(ii)
        df[this_column_name] = new_column[:, ii]
    return df
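
The scikit-learn encoder above is the "manual" version; for comparison, the pandas get_dummies() route I mentioned does roughly the same thing in one call. This is just a sketch for illustration (it assumes the features_df dataframe and the names_of_columns_to_transform list that show up below), not part of the pipeline I'm running here:

import pandas as pd

### one call replaces the manual encoder; the new columns are named after the
### category values, like "country_Mexico" in the example above
dummied_df = pd.get_dummies( features_df, columns=names_of_columns_to_transform )

One practical difference is that get_dummies() names the new columns after the category values rather than integer indices, which makes the dummied dataframe easier to read.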

Now I’ll iterate through a list of the columns that I want to one-hot encode, transforming each one as I go, with the final output of that process being a dataframe where all the categorical features are encoded as booleans.

One note before I code that up: one-hot encoding comes with the baggage that it makes my dataset bigger, sometimes a lot bigger. In the countries example above, one column that encoded the country has now been expanded out to three columns. You can imagine that this can sometimes get really, really big (imagine a column encoding all the counties in the United States, for example).

There are some columns in this example that will really blow up the dataset, so I’ll remove them before proceeding with the one-hot encoding.
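
A quick way to spot the worst offenders, sketched here under the assumption that names_of_columns_to_transform is the list of categorical column names I'm encoding, is to count the distinct values in each column; the high-cardinality columns are the ones that will blow up the most after dummying:

### columns with many distinct values will explode into many dummy columns
for column_name in names_of_columns_to_transform:
    print( column_name, features_df[column_name].nunique() )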


print(features_df.columns.values)

features_df.drop( "funder", axis=1, inplace=True )
features_df.drop( "installer", axis=1, inplace=True )
features_df.drop( "wpt_name", axis=1, inplace=True )
features_df.drop( "subvillage", axis=1, inplace=True )
features_df.drop( "ward", axis=1, inplace=True )

names_of_columns_to_transform.remove("funder")
names_of_columns_to_transform.remove("installer")
names_of_columns_to_transform.remove("wpt_name")
names_of_columns_to_transform.remove("subvillage")
names_of_columns_to_transform.remove("ward")

for feature in names_of_columns_to_transform:
    features_df = hot_encoder( features_df, feature )

print( features_df.head() )


['amount_tsh' 'funder' 'gps_height' 'installer' 'longitude' 'latitude'
 'wpt_name' 'num_private' 'basin' 'subvillage' 'region' 'region_code'
 'district_code' 'lga' 'ward' 'population' 'public_meeting' 'recorded_by'
 'scheme_management' 'scheme_name' 'permit' 'construction_year'
 'extraction_type' 'extraction_type_group' 'extraction_type_class'
 'management' 'management_group' 'payment' 'payment_type' 'water_quality'
 'quality_group' 'quantity' 'quantity_group' 'source' 'source_type'
 'source_class' 'waterpoint_type' 'waterpoint_type_group']


       amount_tsh  gps_height  longitude   latitude  num_private  basin  \
id
69572        6000        1390  34.938093  -9.856322            0      5
8776            0        1399  34.698766  -2.147466            0      8
34310          25         686  37.460664  -3.821329            0      0
67743           0         263  38.486161 -11.155298            0      7
19728           0           0  31.130847  -1.825359            0      8

       region  region_code  district_code  lga  ...  \
id
69572      13           11              5   75  ...
8776        1           20              2   76  ...
34310      17           21              4   41  ...
67743      19           90             63   82  ...
19728       2           18              1   48  ...

       waterpoint_type_3  waterpoint_type_4  waterpoint_type_5  \
id
69572                  1                  0                  0
8776                   1                  0                  0
34310                  0                  0                  0
67743                  0                  0                  0
19728                  1                  0                  0

       waterpoint_type_6  waterpoint_type_group_0  waterpoint_type_group_1  \
id
69572                  0                        0                        0
8776                   0                        0                        0
34310                  0                        0                        0
67743                  0                        0                        0
19728                  0                        0                        0

       waterpoint_type_group_2  waterpoint_type_group_3  \
id
69572                        1                        0
8776                         1                        0
34310                        1                        0
67743                        1                        0
19728                        1                        0

       waterpoint_type_group_4  waterpoint_type_group_5
id
69572                        0                        0
8776                         0                        0
34310                        0                        0
67743                        0                        0
19728                        0                        0

[5 rows x 3031 columns]

In practice, I found that dummying my features didn’t make a huge difference in performance when I got to the modeling stage, although this is the kind of thing you generally don’t know before trying it.

I found that dummying had a huge effect on my dataset size in this case: I went from 39 features to over 3,000! And that's after aggressively trimming the features that blew up the most. Having so many features invites problems with overfitting and slow, memory-intensive training, and I almost certainly don't need all 3,000 features to capture the patterns in my dataset. This is a perfect use case for feature selection, which is supported in scikit-learn by e.g. SelectKBest(), which does univariate feature selection to keep the k best features (where k is a number I have to specify). Making a guess, I can ask for the top 100 features, which doesn't make my performance much worse and speeds things up a lot:


import sklearn.feature_selection

select = sklearn.feature_selection.SelectKBest(k=100)
selected_X = select.fit_transform(X, y)

print( selected_X.shape )
(59400, 100)
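
If I want to know which 100 features made the cut, SelectKBest has a get_support() method that reports which columns were kept. A short sketch, assuming X was built from the columns of the dummied features_df in the same order:

### indices of the columns that SelectKBest kept
selected_indices = select.get_support(indices=True)
print( features_df.columns.values[selected_indices][:10] )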

Now I'll turn my attention back to the machine learning algorithms themselves; there can be theoretical reasons to suspect that a particular algorithm will do better or worse on a given problem. I found in the last post that a random forest classifier did the best of all the models I tried, beating a logistic regression (by a lot) and a decision tree classifier (by a slimmer margin). This doesn't come as a surprise to me, and here are a few reasons why:

  1. A logistic regression is an example of a linear model, which (unless you make special adaptations, which I'll detail in a moment) assumes that the relationship between each of my features and the output class is a linear one. For example, if one of the features is the depth of a well, a linear model will assume that (all other things being equal) the difference in functionality between a 20-foot-deep well and a 40-foot-deep one is the same as the difference between 40 feet and 60 feet. This isn't always a valid assumption. One way to address it is to add extra features like depth squared and the logarithm of depth, which helps a linear model capture nonlinearities, but still might not capture all the nuances of nonlinear relationships.
  2. A logistic regression also doesn't capture interactions between features; for example, deep wells might be largely functional and wells drilled in rock might be largely functional, but deep wells in rocky places might be largely non-functional. Again, I can explicitly add interaction terms to the logistic regression (one way to do this is sketched after this list), but this gets unwieldy fast when I have many features.
  3. A decision tree can capture interactions and nonlinearities much more naturally than logistic regression, because of the binary tree structure of the decision tree algorithm itself. The downside of decision trees is that they can be harder to interpret, and it can be harder to assign uncertainties to their predictions.
  4. A random forest is a collection of decision trees, each of which is trained on a subset of the rows/columns of the training data. The randomness in the training sets means that the individual trees in a random forest are high-variance but low-bias. The final prediction is made by having each tree classify a given event and then treating their predictions as “votes,” with the majority opinion assigned as the label. The nonlinearities and interactions are still captured by the individual trees, but ensembling many trees into a random forest tends to cancel out the biases/shortcomings of any one tree, so I get a stronger predictor overall.
  5. In empirical studies of many algorithms being applied to many supervised learning problems, random forests often come out on top overall. So when in doubt, or if I only have the time/resources to try one model, a random forest is likely to get at or near the peak performance of all the algorithms on the market.
  6. If it was tricky to interpret or compute errors for a decision tree, a random forest is only going to be worse, because there are now 50-100 decision trees to worry about.
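
To make points 1 and 2 concrete, here's a rough sketch of what explicitly adding squared and interaction terms for a linear model looks like with scikit-learn's PolynomialFeatures; the two input features here are made up (think well depth and a rockiness score), purely for illustration:

import numpy as np
import sklearn.preprocessing

### two made-up numeric features per row: well depth and a "rockiness" score
X_small = np.array([ [20., 1.], [40., 3.], [60., 2.] ])

### degree=2 adds depth^2, rockiness^2, and the depth*rockiness interaction term
poly = sklearn.preprocessing.PolynomialFeatures(degree=2, include_bias=False)
X_expanded = poly.fit_transform(X_small)

print( X_expanded.shape )  ### (3, 5): the 2 original columns plus 3 new ones

The number of added terms grows quickly with the number of original features, which is exactly the "gets unwieldy fast" problem from point 2; tree-based models pick up those interactions without the manual bookkeeping.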

With these points in mind, it makes sense that my random forest did so well on this task, although one of the catches with random forests is that they have lots of parameters to optimize. How many trees should there be? How does each tree get trained? How many features get used in training each tree? There usually aren’t formulaic answers to these questions, and part of the craft of machine learning is tuning these parameters to get the best performance that I can out of my model. But with so many parameters, which sometimes interact with each other in complex ways, parameter tuning can be a huge hassle. In the next post, I’ll talk about an extremely powerful pair of tools in scikit-learn, the Pipeline and GridSearchCV, that allow crazy powerful parameter tuning in just a few lines of code.
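
Just to put names to those questions, here's a hedged sketch of where the main knobs live on scikit-learn's RandomForestClassifier; the specific values are placeholders, not tuned recommendations:

import sklearn.ensemble

### n_estimators: how many trees get built
### max_depth: how each tree gets trained (how deep it's allowed to grow)
### max_features: how many features are considered at each split
clf = sklearn.ensemble.RandomForestClassifier(n_estimators=100,
                                              max_depth=10,
                                              max_features="sqrt")
clf.fit(selected_X, y)

These are exactly the kinds of parameters that GridSearchCV can search over automatically, which is where the next post picks up.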
