
Kaggle-Titanic competition solution

This document is a thorough overview of my process for building a predictive model for Kaggle’s Titanic competition. I will walk through all the essential steps of building the model, as well as the reasoning behind each decision I made. The model achieves a score of 82.78%, which is in the top 3% of all submissions at the time of this writing. This is a great introductory modeling exercise due to the simple nature of the data, yet there is still a lot to be gleaned from following a process that ultimately yields a high score.

You can get my original code on my GitHub: https://github.com/zlatankr/Projects/tree/master/Titanic
You can also read my write-up on my blog: https://zlatankr.github.io/posts/2017/01/30/kaggle-titanic

The Problem

We are given information about a subset of the Titanic population and asked to build a predictive model that tells us whether or not a given passenger survived the shipwreck. We are given 10 basic explanatory variables, including passenger gender, age, and fare, among others. More details about the competition can be found on the Kaggle site. This is a classic binary classification problem, and we will be implementing a random forest classifier.

Exploratory Data Analysis

The goal of this section is to gain an understanding of our data in order to inform what we do in the feature engineering section.

We begin our exploratory data analysis by loading our standard modules.

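Something along these lines (the exact module list is my assumption):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns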

We then load the data, which we have downloaded from the Kaggle website (here is a link to the data if you need it).

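A minimal sketch, assuming both CSV files sit in the working directory:

train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')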

First, let’s take a look at the summary of all the data. Immediately, we note that Age, Cabin, and Embarked have nulls that we’ll have to deal with.

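One way to get that summary (a sketch, not necessarily the original call):

# Age, Cabin, and Embarked report fewer than 891 non-null values
train.info()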

It appears that we can drop the PassengerId column, since it is merely an index. Note, however, that some people have reportedly improved their score with the PassengerId column. However, my cursory attempt to do so did not yield positive results, and moreover I would like to mimic a real-life scenario, where an index of a dataset generally has no correlation with the target variable.

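For orientation, the first few rows of the training data (output below):

train.head()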
PassengerId Survived Pclass Name Sex Age SibSp Parch Ticket Fare Cabin Embarked
0 1 0 3 Braund, Mr. Owen Harris male 22.0 1 0 A/5 21171 7.2500 NaN S
1 2 1 1 Cumings, Mrs. John Bradley (Florence Briggs Th… female 38.0 1 0 PC 17599 71.2833 C85 C
2 3 1 3 Heikkinen, Miss. Laina female 26.0 0 0 STON/O2. 3101282 7.9250 NaN S
3 4 1 1 Futrelle, Mrs. Jacques Heath (Lily May Peel) female 35.0 1 0 113803 53.1000 C123 S
4 5 0 3 Allen, Mr. William Henry male 35.0 0 0 373450 8.0500 NaN S

Survived

So we can see that 62% of the people in the training set died. This is slightly less than the estimated 67% that died in the actual shipwreck (1500/2224).

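A quick way to verify this (a sketch):

# 0 = died, 1 = survived; roughly 0.62 vs. 0.38
train['Survived'].value_counts(normalize=True)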

Pclass

Class played a critical role in survival, as the survival rate decreased drastically for the lowest class. This variable is both useful and clean, and I will be treating it as a categorical variable.

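A sketch of the survival-rate-by-class check:

train.groupby('Pclass')['Survived'].agg(['mean', 'count'])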

Name

The Name column as provided cannot be used in the model. However, we might be able to extract some meaningful information from it.


First, we can obtain useful information about the passenger’s title. Looking at the distribution of the titles, it might be useful to group the rarer ones into an ‘other’ category, although I ultimately choose not to do this.

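One plausible way to pull the title out of the Name field (the parsing rule is my assumption):

# the title is the first whitespace-delimited token after the comma, e.g. 'Mr.'
train['Name_Title'] = train['Name'].apply(lambda x: x.split(',')[1].split()[0])
train['Name_Title'].value_counts()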

I have relatively high hopes for this new variable we created, since the survival rate appears to be either significantly above or below the average survival rate, which should help our model.

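Checking the survival rate by title, sketched as:

train.groupby('Name_Title')['Survived'].agg(['mean', 'count'])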

Additionally, the length of a passenger’s name appears to have a clear relationship with survival rate. What might this mean? Are people with longer names more important, and thus more likely to be prioritized in a shipwreck?

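A sketch of the name-length check (bucketing into quintiles is my choice):

train['Name_Len'] = train['Name'].apply(len)
train.groupby(pd.qcut(train['Name_Len'], 5, duplicates='drop'))['Survived'].mean()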

Sex

“Women and children first,” goes the famous saying. Thus, we should expect females to have a higher survival rate than males, and indeed that is the case. We expect this variable to be very useful in our model.

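The check itself is a one-liner:

train.groupby('Sex')['Survived'].agg(['mean', 'count'])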

Age

There are 177 nulls for Age, and they have a 10% lower survival rate than the non-nulls. Before imputing values for the nulls, we will include an Age_null flag just to make sure we can account for this characteristic of the data.

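A sketch of the null-versus-non-null comparison:

train.groupby(train['Age'].isnull())['Survived'].agg(['mean', 'count'])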

Upon first glance, the relationship between age and survival appears to be a murky one at best. However, this doesn’t mean that the variable will be a bad predictor; at deeper levels of a given decision tree, a more discriminant relationship might open up.

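One way to eyeball this relationship (the plotting choices are mine):

sns.kdeplot(train.loc[train['Survived'] == 1, 'Age'].dropna(), label='survived')
sns.kdeplot(train.loc[train['Survived'] == 0, 'Age'].dropna(), label='died')
plt.xlabel('Age')
plt.legend()
plt.show()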

SibSp

Upon first glance, I’m not too convinced of the importance of this variable. The distribution and survival rates across the different categories do not give me much hope.

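Sketched as:

train.groupby('SibSp')['Survived'].agg(['mean', 'count'])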

Parch

Same conclusions as SibSp: passengers with zero parents or children had a lower likelihood of survival than otherwise, but that survival rate was only slightly less than the overall population survival rate.


When we have two seemingly weak predictors, one thing we can do is combine them to get a stronger predictor. In the case of SibSp and Parch, we can combine the two variables to get a ‘family size’ metric, which might (and in fact does) prove to be a better predictor than the two original variables.
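A sketch of that combination:

fam_size = train['SibSp'] + train['Parch']
train.groupby(fam_size)['Survived'].agg(['mean', 'count'])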

Ticket

The Ticket column seems to contain unique alphanumeric values, and is thus not very useful on its own. However, we might be able to extract some predictive power from it.


One piece of potentially useful information is the number of characters in the Ticket column. This could be a reflection of the ‘type’ of ticket a given passenger had, which could somehow indicate their chances of survival. One theory (which may in fact be verifiable) is that some characteristic of the ticket could indicate the location of the passenger’s room, which might be a crucial factor in their escape route, and consequently their survival.

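A sketch of the ticket-length check:

train.groupby(train['Ticket'].apply(len))['Survived'].agg(['mean', 'count'])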

Another piece of information is the first letter of each ticket, which, again, might be indicative of a certain attribute of the ticketholders or their rooms.

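And the first-letter check:

train.groupby(train['Ticket'].str[0])['Survived'].agg(['mean', 'count'])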

Fare

There is a clear relationship between Fare and Survived, and I’m guessing that this relationship is similar to that of Class and Survived.

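Binning fares into quintiles makes the relationship visible (a sketch):

train.groupby(pd.qcut(train['Fare'], 5))['Survived'].mean()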

Looking at the relationship between Class and Fare, we do indeed see a clear relationship.

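The crosstab below can be reproduced with something like:

pd.crosstab(pd.qcut(train['Fare'], 5), train['Pclass'])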
Pclass 1 2 3
Fare
[0, 7.854] 6 6 167
(7.854, 10.5] 0 24 160
(10.5, 21.679] 0 80 92
(21.679, 39.688] 64 64 52
(39.688, 512.329] 146 10 20

Cabin

This column has the most nulls (almost 700), but we can still extract information from it, like the first letter of each cabin, or the cabin number. The usefulness of this column might be similar to that of the Ticket variable.

Cabin Letter

We can see that most of the cabin letters are associated with a high survival rate, so this might very well be a useful variable. Because there aren’t that many unique values, we won’t do any grouping here, even if some of the values have a small count.

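A sketch of the letter check:

# str() turns NaN into 'nan', so passengers without a cabin get the letter 'n'
train.groupby(train['Cabin'].apply(lambda x: str(x)[0]))['Survived'].agg(['mean', 'count'])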

Cabin Number

Upon first glance, this appears to be useless. Not only do we have ~700 nulls, which will be difficult to impute, but the correlation between cabin number and Survived is almost zero. However, passengers who have a cabin number at all do seem to have a higher survival rate than the population average, so we might want to keep this around for now, just in case.

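A sketch of both checks (the digit-extraction regex is my choice):

cabin_num = train['Cabin'].str.extract(r'(\d+)', expand=False).astype(float)
print(cabin_num.corr(train['Survived']))                      # close to zero
print(train.groupby(cabin_num.notnull())['Survived'].mean())  # has-a-number vs. not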

Embarked

Looks like the Cherbourg people had a 20% higher survival rate than the other embarking locations. This is very likely due to the high presence of upper-class passengers from that location.

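Sketched as:

print(train.groupby('Embarked')['Survived'].agg(['mean', 'count']))
# class composition by port, to sanity-check the upper-class explanation
print(pd.crosstab(train['Embarked'], train['Pclass']))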

Feature Engineering

Having done our cursory exploration of the variables, we now have a pretty good idea of how we want to transform our variables in preparation for our final dataset. We will perform our feature engineering through a series of helper functions that each serve a specific purpose.

This first function creates two separate columns: a numeric column indicating the length of a passenger’s Name field, and a categorical column that extracts the passenger’s title.

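A sketch of this function, dropping the raw Name once we’ve extracted what we need (the parsing rule is my assumption):

def names(train, test):
    for df in [train, test]:
        df['Name_Len'] = df['Name'].apply(len)
        # e.g. 'Braund, Mr. Owen Harris' -> 'Mr.'
        df['Name_Title'] = df['Name'].apply(lambda x: x.split(',')[1].split()[0])
        del df['Name']
    return train, test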

Next, we impute the null values of the Age column by filling in the mean value of the passenger’s corresponding title and class. This more granular approach to imputation should be more accurate than merely taking the mean age of the population.

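Sketched below; the fallback to the overall training mean for title/class combinations unseen in training is my own addition:

def age_impute(train, test):
    for df in [train, test]:
        df['Age_Null_Flag'] = df['Age'].isnull().astype(int)
    # group means come from the training set only, then get applied to both sets
    means = train.groupby(['Name_Title', 'Pclass'])['Age'].mean()
    for df in [train, test]:
        nulls = df['Age'].isnull()
        df.loc[nulls, 'Age'] = df.loc[nulls].apply(
            lambda row: means.get((row['Name_Title'], row['Pclass']),
                                  train['Age'].mean()),
            axis=1)
    return train, test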

We combine the SibSp and Parch columns into a new variable that indicates family size, and group the family size variable into three categories.

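A sketch; the exact cutoffs (0 = Solo, 1-3 = Nuclear, 4+ = Big) are my assumption:

def fam_size(train, test):
    for df in [train, test]:
        size = df['SibSp'] + df['Parch']
        df['Fam_Size'] = np.where(size == 0, 'Solo',
                         np.where(size <= 3, 'Nuclear', 'Big'))
        df.drop(['SibSp', 'Parch'], axis=1, inplace=True)
    return train, test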

The Ticket column is used to create two new columns: Ticket_Lett, which indicates the first letter of each ticket (with the smaller-n values being grouped based on survival rate); and Ticket_Len, which indicates the length of the Ticket field.

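A sketch; the specific letter groupings here are assumptions on my part:

def ticket_grouped(train, test):
    for df in [train, test]:
        df['Ticket_Len'] = df['Ticket'].apply(len)
        lett = df['Ticket'].str[0]
        # common letters kept as-is, rarer ones lumped by rough survival rate
        df['Ticket_Lett'] = np.where(lett.isin(['1', '2', '3', 'S', 'P', 'C', 'A']), lett,
                            np.where(lett.isin(['W', '4', '7', '6', 'L', '5', '8']),
                                     'Low_ticket', 'Other_ticket'))
        df.drop('Ticket', axis=1, inplace=True)
    return train, test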

The following two functions extract the first letter of the Cabin column and its number, respectively.

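Sketches of both; mapping missing cabins to the letter 'n' and binning cabin numbers into training-set terciles are my choices:

def cabin(train, test):
    for df in [train, test]:
        # str() maps NaN to 'nan', so missing cabins become the letter 'n'
        df['Cabin_Letter'] = df['Cabin'].apply(lambda x: str(x)[0])
    return train, test

def cabin_num(train, test):
    # bin edges come from the training set, then get applied to both sets
    num_train = train['Cabin'].str.extract(r'(\d+)', expand=False).astype(float)
    _, edges = pd.qcut(num_train, 3, retbins=True, labels=False)
    for df in [train, test]:
        num = df['Cabin'].str.extract(r'(\d+)', expand=False).astype(float)
        df['Cabin_num'] = pd.cut(num, bins=edges, labels=['low', 'mid', 'high'],
                                 include_lowest=True)
        df.drop('Cabin', axis=1, inplace=True)
    return train, test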

We fill the null values in the Embarked column with the most commonly occurring value, which is ‘S’.

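Sketched as:

def embarked_impute(train, test):
    for df in [train, test]:
        df['Embarked'] = df['Embarked'].fillna('S')
    return train, test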

We also fill in the one missing value of Fare in our test set with the mean value of Fare from the training set (transformations of test set data must always be fit using training data).

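This one is a single line:

test['Fare'] = test['Fare'].fillna(train['Fare'].mean())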

Next, because we are using scikit-learn, we must convert our categorical columns into dummy variables. The following function does this, and then it drops the original categorical columns. It also makes sure that each category is present in both the training and test datasets.

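A sketch; the list of categorical columns reflects the features built above:

def dummies(train, test, columns=['Pclass', 'Sex', 'Embarked', 'Ticket_Lett',
                                  'Cabin_Letter', 'Cabin_num', 'Name_Title', 'Fam_Size']):
    for column in columns:
        train[column] = train[column].apply(str)
        test[column] = test[column].apply(str)
        # keep only the categories present in both datasets
        good_cols = [column + '_' + i for i in train[column].unique()
                     if i in test[column].unique()]
        train = pd.concat((train, pd.get_dummies(train[column], prefix=column)[good_cols]), axis=1)
        test = pd.concat((test, pd.get_dummies(test[column], prefix=column)[good_cols]), axis=1)
        del train[column]
        del test[column]
    return train, test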

Our last helper function drops any columns that haven’t already been dropped. In our case, we only need to drop the PassengerId column, which we have decided is not useful for our problem (by the way, I’ve confirmed this with a separate test). Note that dropping the PassengerId column here means that we’ll have to load it later when creating our submission file.

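Sketched as:

def drop(train, test, bye=['PassengerId']):
    for df in [train, test]:
        df.drop(bye, axis=1, inplace=True)
    return train, test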

Having built our helper functions, we can now execute them in order to build our dataset that will be used in the model:

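The full pipeline, in an order that respects the column dependencies (e.g. cabin must run before cabin_num drops the Cabin column):

train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
train, test = names(train, test)
train, test = age_impute(train, test)
train, test = cabin(train, test)
train, test = cabin_num(train, test)
train, test = embarked_impute(train, test)
train, test = fam_size(train, test)
test['Fare'] = test['Fare'].fillna(train['Fare'].mean())
train, test = ticket_grouped(train, test)
train, test = dummies(train, test)
train, test = drop(train, test)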

We can see that our final dataset has 45 columns, composed of our target column and 44 predictor variables. Although high-dimensional datasets can result in high variance, I think we should be fine here.


Hyperparameter Tuning

We will use grid search to identify the optimal parameters of our random forest model. Because our training dataset is quite small, we can get away with testing a wider range of hyperparameter values. When I ran this on my 8 GB Windows machine, the process took less than ten minutes. I will not run it here for the sake of saving myself time, but I will discuss the results of this grid search.

from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(max_features='auto', oob_score=True, random_state=1, n_jobs=-1)

param_grid = {"criterion": ["gini", "entropy"],
              "min_samples_leaf": [1, 5, 10],
              "min_samples_split": [2, 4, 10, 12, 16],
              "n_estimators": [50, 100, 400, 700, 1000]}

gs = GridSearchCV(estimator=rf, param_grid=param_grid, scoring='accuracy', cv=3, n_jobs=-1)

gs = gs.fit(train.iloc[:, 1:], train.iloc[:, 0])

print(gs.best_score_)
print(gs.best_params_)
print(gs.cv_results_)

Looking at the results of the grid search:

0.838383838384
{'min_samples_split': 10, 'n_estimators': 700, 'criterion': 'gini', 'min_samples_leaf': 1}

…we can see that our optimal parameter settings are not at the endpoints of our provided values, meaning that we do not have to test more values. What else can we say about our optimal values? The min_samples_split parameter is at 10, which should help mitigate overfitting to a certain degree. This is especially good because we have a relatively large number of estimators (700), which could potentially increase our generalization error.

Model Estimation and Evaluation

We are now ready to fit our model using the optimal hyperparameters. The out-of-bag score can give us an unbiased estimate of the model accuracy, and we can see that the score is 82.94%, which is only a little higher than our final leaderboard score.

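Using the grid-search winners from above (max_features='auto' matches the scikit-learn versions of the time):

from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(criterion='gini',
                            n_estimators=700,
                            min_samples_split=10,
                            min_samples_leaf=1,
                            max_features='auto',
                            oob_score=True,
                            random_state=1,
                            n_jobs=-1)
rf.fit(train.iloc[:, 1:], train.iloc[:, 0])
print('OOB accuracy: %.4f' % rf.oob_score_)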

Let’s take a brief look at variable importance according to our random forest model. We can see that some of the original columns we predicted would be important in fact were, including gender, fare, and age. But we also see that title, name length, and ticket length feature prominently, so we can pat ourselves on the back for creating such useful variables.

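The table below comes from pairing the column names with the fitted model’s importances, roughly:

importances = pd.DataFrame({'variable': train.iloc[:, 1:].columns,
                            'importance': rf.feature_importances_})
importances.sort_values('importance', ascending=False).head(20)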
variable importance
12 Sex_female 0.111215
11 Sex_male 0.109769
33 Name_Title_Mr. 0.109746
1 Fare 0.088209
2 Name_Len 0.087904
0 Age 0.078651
8 Pclass_3 0.043268
35 Name_Title_Miss. 0.031292
7 Ticket_Len 0.031079
34 Name_Title_Mrs. 0.028852
25 Cabin_Letter_n 0.027893
43 Fam_Size_Big 0.025199
41 Fam_Size_Nuclear 0.022704
9 Pclass_1 0.021810
19 Ticket_Lett_1 0.017999
20 Ticket_Lett_3 0.012902
10 Pclass_2 0.012345
36 Name_Title_Master. 0.012098
23 Ticket_Lett_Low_ticket 0.011723
13 Embarked_S 0.011546

Our last step is to predict the target variable for our test data and generate an output file that will be submitted to Kaggle.

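A sketch of the submission step; since PassengerId was dropped earlier, we re-read it from the raw test file (the output filename is arbitrary):

predictions = rf.predict(test)
submission = pd.DataFrame({'PassengerId': pd.read_csv('test.csv')['PassengerId'],
                           'Survived': predictions})
submission.to_csv('submission.csv', index=False)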

Conclusion

This exercise is a good example of how far basic feature engineering can take you. It is worth mentioning that I did try various other models before arriving at this one. Some of the other variations I tried were different groupings for the categorical variables (plenty more combinations remain), linear discriminant analysis on a couple of numeric columns, and eliminating more variables, among other things. This is a competition with a generous allotment of submission attempts, and as a result, it’s quite possible that even the leaderboard score is an overestimation of the true quality of the model, since the leaderboard can act more as a validation score than as a true test score.

I welcome any comments and suggestions.
