This document is a thorough overview of my process for building a predictive model for Kaggle’s Titanic competition. I will walk through the essential steps of the model as well as the reasoning behind each decision I made. This model achieves a score of 82.78%, which is in the top 3% of all submissions at the time of this writing. This is a great introductory modeling exercise due to the simple nature of the data, yet there is still a lot to be gleaned from following a process that ultimately yields a high score.
You can get my original code on my GitHub: https://github.com/zlatankr/Projects/tree/master/Titanic
You can also read my write-up on my blog: https://zlatankr.github.io/posts/2017/01/30/kaggle-titanic
The Problem
We are given information about a subset of the Titanic population and asked to build a predictive model that tells us whether or not a given passenger survived the shipwreck. We are given 10 basic explanatory variables, including passenger gender, age, and price of fare, among others. More details about the competition can be found on the Kaggle site. This is a classic binary classification problem, and we will be implementing a random forest classifier.
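Since the target is binary and the model is a random forest, the skeleton of the approach can be sketched with scikit-learn. This is only a toy illustration, not the final model: the feature matrix here is random numbers standing in for the engineered features we build later.

```python
# Minimal sketch of the modeling approach: a RandomForestClassifier on a
# toy binary-classification problem. The data here is made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
X = rng.rand(100, 3)             # 100 "passengers", 3 numeric features
y = (X[:, 0] > 0.5).astype(int)  # toy binary target standing in for Survived

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))           # training accuracy on the toy data
```

The real work, as the rest of this write-up shows, is in turning the raw columns into a feature matrix worth feeding to this classifier.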
Exploratory Data Analysis
The goal of this section is to gain an understanding of our data in order to inform what we do in the feature engineering section.
We begin our exploratory data analysis by loading our standard modules.
```python
import os
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
```
We then load the data, which we have downloaded from the Kaggle website (here is a link to the data if you need it).
```python
train = pd.read_csv(os.path.join('../input', 'train.csv'))
test = pd.read_csv(os.path.join('../input', 'test.csv'))
```
First, let’s take a look at the summary of all the data. Immediately, we note that Age, Cabin, and Embarked have nulls that we’ll have to deal with.
```python
train.info()
```
It appears that we can drop the PassengerId column, since it is merely an index. Note that some people have reportedly improved their score by keeping it; however, my cursory attempt to do so did not yield positive results, and moreover I would like to mimic a real-life scenario, where an index of a dataset generally has no correlation with the target variable.
```python
train.head()
```
Survived
So we can see that 62% of the people in the training set died. This is slightly less than the estimated 67% that died in the actual shipwreck (1500/2224).
```python
train['Survived'].value_counts(normalize=True)
```
```python
sns.countplot(train['Survived'])
```
Pclass
Class played a critical role in survival, as the survival rate decreased drastically for the lowest class. This variable is both useful and clean, and I will be treating it as a categorical variable.
```python
train['Survived'].groupby(train['Pclass']).mean()
```
```python
sns.countplot(train['Pclass'], hue=train['Survived'])
```
Name
The Name column as provided cannot be used in the model. However, we might be able to extract some meaningful information from it.
```python
train['Name'].head()
```
First, we can obtain useful information about the passenger’s title. Looking at the distribution of the titles, it might be useful to group the smaller sized values into an ‘other’ group, although I ultimately choose not to do this.
```python
train['Name_Title'] = train['Name'].apply(lambda x: x.split(',')[1]).apply(lambda x: x.split()[0])
train['Name_Title'].value_counts()
```
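To make the chained split concrete, here it is applied to a single name in the dataset’s `Surname, Title. Given-names` format (the sample string is just an illustration):

```python
# Names look like "Braund, Mr. Owen Harris": surname, comma, then the title.
name = "Braund, Mr. Owen Harris"

# Take everything after the comma, then the first whitespace-separated token.
title = name.split(',')[1].split()[0]
print(title)  # -> Mr.
```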
I have relatively high hopes for this new variable we created, since the survival rate appears to be either significantly above or below the average survival rate, which should help our model.
```python
train['Survived'].groupby(train['Name_Title']).mean()
```
Additionally, the relationship between the length of a name and survival rate appears to be a clear one. What might this mean? Are people with longer names more important, and thus more likely to be prioritized in a shipwreck?
```python
train['Name_Len'] = train['Name'].apply(lambda x: len(x))
train['Survived'].groupby(pd.qcut(train['Name_Len'], 5)).mean()
```
```python
pd.qcut(train['Name_Len'], 5).value_counts()
```
Sex
“Women and children first,” goes the famous saying. Thus, we should expect females to have a higher survival rate than males, and indeed that is the case. We expect this variable to be very useful in our model.
```python
train['Sex'].value_counts(normalize=True)
```
```python
train['Survived'].groupby(train['Sex']).mean()
```
Age
There are 177 nulls for Age, and they have a 10% lower survival rate than the non-nulls. Before imputing values for the nulls, we will include an Age_Null_Flag variable just to make sure we can account for this characteristic of the data.
```python
train['Survived'].groupby(train['Age'].isnull()).mean()
```
Upon first glance, the relationship between age and survival appears to be a murky one at best. However, this doesn’t mean that the variable will be a bad predictor; at deeper levels of a given decision tree, a more discriminant relationship might open up.
```python
train['Survived'].groupby(pd.qcut(train['Age'], 5)).mean()
```
```python
pd.qcut(train['Age'], 5).value_counts()
```
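To illustrate the point about deeper splits, grouping survival by Sex and an age bucket jointly can expose structure that Age alone hides. The DataFrame below is a tiny made-up sample, used only because the idea needs nothing more than those three columns:

```python
import pandas as pd

# Tiny made-up sample in the same shape as the Titanic data.
df = pd.DataFrame({
    'Sex':      ['male', 'male', 'female', 'female', 'male', 'female'],
    'Age':      [22, 5, 38, 4, 60, 27],
    'Survived': [0, 1, 1, 1, 0, 1],
})

# Age alone looks murky here, but Sex x age-bucket exposes a pattern:
buckets = pd.cut(df['Age'], bins=[0, 12, 100], labels=['child', 'adult'])
result = df['Survived'].groupby([df['Sex'], buckets], observed=True).mean()
print(result)
```

This is exactly the kind of interaction a decision tree can discover on its own once it has split on Sex at a shallower level.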
SibSp
Upon first glance, I’m not too convinced of the importance of this variable. The distribution and survival rates across the different categories do not give me much hope.
```python
train['Survived'].groupby(train['SibSp']).mean()
```
```python
train['SibSp'].value_counts()
```
Parch
Same conclusions as SibSp: passengers with zero parents or children had a lower likelihood of survival than otherwise, but that survival rate was only slightly less than the overall population survival rate.
```python
train['Survived'].groupby(train['Parch']).mean()
```
```python
train['Parch'].value_counts()
```
When we have two seemingly weak predictors, one thing we can do is combine them to get a stronger predictor. In the case of SibSp and Parch, we can combine the two variables to get a ‘family size’ metric, which might (and in fact does) prove to be a better predictor than the two original variables.
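A quick sketch of that combination (the Solo/Nuclear/Big cutoffs shown here are the ones used later in the feature-engineering section; the sample rows are made up):

```python
import numpy as np
import pandas as pd

# Made-up rows with the dataset's SibSp/Parch columns.
df = pd.DataFrame({'SibSp': [0, 1, 1, 4], 'Parch': [0, 0, 2, 3]})

# Family size = siblings/spouses + parents/children, bucketed three ways.
fam = df['SibSp'] + df['Parch']
df['Fam_Size'] = np.where(fam == 0, 'Solo',
                 np.where(fam <= 3, 'Nuclear', 'Big'))
print(df['Fam_Size'].tolist())  # -> ['Solo', 'Nuclear', 'Nuclear', 'Big']
```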
Ticket
The Ticket column seems to contain unique alphanumeric values, and is thus not very useful on its own. However, we might be able to extract some predictive power from it.
```python
train['Ticket'].head(n=10)
```
One piece of potentially useful information is the number of characters in the Ticket column. This could be a reflection of the ‘type’ of ticket a given passenger had, which could somehow indicate their chances of survival. One theory (which may in fact be verifiable) is that some characteristic of the ticket could indicate the location of the passenger’s room, which might be a crucial factor in their escape route, and consequently their survival.
```python
train['Ticket_Len'] = train['Ticket'].apply(lambda x: len(x))
```
```python
train['Ticket_Len'].value_counts()
```
Another piece of information is the first letter of each ticket, which, again, might be indicative of a certain attribute of the ticketholders or their rooms.
```python
train['Ticket_Lett'] = train['Ticket'].apply(lambda x: str(x)[0])
```
```python
train['Ticket_Lett'].value_counts()
```
```python
train.groupby(['Ticket_Lett'])['Survived'].mean()
```
Fare
There is a clear relationship between Fare and Survived, and I’m guessing that this relationship is similar to that of Pclass and Survived.
```python
pd.qcut(train['Fare'], 3).value_counts()
```
```python
train['Survived'].groupby(pd.qcut(train['Fare'], 3)).mean()
```
Looking at the relationship between Pclass and Fare, we do indeed see a clear relationship.
```python
pd.crosstab(pd.qcut(train['Fare'], 5), columns=train['Pclass'])
```
Cabin
This column has the most nulls (almost 700), but we can still extract information from it, like the first letter of each cabin, or the cabin number. The usefulness of this column might be similar to that of the Ticket variable.
Cabin Letter
We can see that most of the cabin letters are associated with a high survival rate, so this might very well be a useful variable. Because there aren’t that many unique values, we won’t do any grouping here, even if some of the values have a small count.
```python
train['Cabin_Letter'] = train['Cabin'].apply(lambda x: str(x)[0])
```
```python
train['Cabin_Letter'].value_counts()
```
```python
train['Survived'].groupby(train['Cabin_Letter']).mean()
```
Cabin Number
Upon first glance, this appears to be useless. Not only do we have ~700 nulls, which will be difficult to impute, but the correlation with Survived is almost zero. However, the non-null cabin numbers as a whole do seem to have a high survival rate compared to the population average, so we might want to keep this for now, just in case.
```python
# Strip the letter off the last cabin token; 'nan' values become 'an' here,
# which we convert back to a proper null before casting to int.
train['Cabin_num'] = train['Cabin'].apply(lambda x: str(x).split(' ')[-1][1:])
train['Cabin_num'].replace('an', np.nan, inplace=True)
train['Cabin_num'] = train['Cabin_num'].apply(lambda x: int(x) if not pd.isnull(x) and x != '' else np.nan)
```
```python
pd.qcut(train['Cabin_num'], 3).value_counts()
```
```python
train['Survived'].groupby(pd.qcut(train['Cabin_num'], 3)).mean()
```
```python
train['Survived'].corr(train['Cabin_num'])
```
Embarked
It looks like passengers who embarked at Cherbourg had a 20% higher survival rate than those from the other embarkation points. This is very likely due to the high proportion of upper-class passengers who boarded there.
```python
train['Embarked'].value_counts()
```
```python
train['Embarked'].value_counts(normalize=True)
```
```python
train['Survived'].groupby(train['Embarked']).mean()
```
```python
sns.countplot(train['Embarked'], hue=train['Pclass'])
```
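One way to check that class-composition explanation numerically is a row-normalized crosstab of Embarked against Pclass. The sample below is made up, since the idea only needs those two columns:

```python
import pandas as pd

# Made-up rows with the dataset's Embarked/Pclass columns.
df = pd.DataFrame({
    'Embarked': ['C', 'C', 'C', 'S', 'S', 'S', 'Q', 'Q'],
    'Pclass':   [1,   1,   3,   2,   3,   3,   3,   3],
})

# Share of each passenger class within each embarkation port.
shares = pd.crosstab(df['Embarked'], df['Pclass'], normalize='index')
print(shares)
```

On the real data, running the same crosstab shows Cherbourg carrying a much larger share of first-class passengers than Southampton or Queenstown.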
Feature Engineering
Having done our cursory exploration of the variables, we now have a pretty good idea of how we want to transform our variables in preparation for our final dataset. We will perform our feature engineering through a series of helper functions that each serve a specific purpose.
This first function creates two separate columns: a numeric column indicating the length of a passenger’s Name field, and a categorical column that extracts the passenger’s title.
```python
def names(train, test):
    for i in [train, test]:
        i['Name_Len'] = i['Name'].apply(lambda x: len(x))
        i['Name_Title'] = i['Name'].apply(lambda x: x.split(',')[1]).apply(lambda x: x.split()[0])
        del i['Name']
    return train, test
```
Next, we impute the null values of the Age column by filling in the mean value of the passenger’s corresponding title and class. This more granular approach to imputation should be more accurate than merely taking the mean age of the population.
```python
def age_impute(train, test):
    for i in [train, test]:
        i['Age_Null_Flag'] = i['Age'].apply(lambda x: 1 if pd.isnull(x) else 0)
        data = train.groupby(['Name_Title', 'Pclass'])['Age']
        i['Age'] = data.transform(lambda x: x.fillna(x.mean()))
    return train, test
```
We combine the SibSp and Parch columns into a new variable that indicates family size, and group the family size variable into three categories.
```python
def fam_size(train, test):
    for i in [train, test]:
        i['Fam_Size'] = np.where((i['SibSp'] + i['Parch']) == 0, 'Solo',
                        np.where((i['SibSp'] + i['Parch']) <= 3, 'Nuclear', 'Big'))
        del i['SibSp']
        del i['Parch']
    return train, test
```
The Ticket column is used to create two new columns: Ticket_Lett, which indicates the first letter of each ticket (with the low-count letters grouped together based on survival rate); and Ticket_Len, which indicates the length of the Ticket field.
def ticket_grouped(train, test):
    for i in [train, test]:
        i['Ticket_Lett'] = i['Ticket'].apply(lambda x: str(x)[0])
        i['Ticket_Lett'] = i['Ticket_Lett'].apply(lambda x: str(x))
        i['Ticket_Lett'] = np.where((i['Ticket_Lett']).isin(['1', '2', '3', 'S', 'P', 'C', 'A']), i['Ticket_Lett'],
                           np.where((i['Ticket_Lett']).isin(['W', '4', '7', '6', 'L', '5', '8']),
                                    'Low_ticket', 'Other_ticket'))
        i['Ticket_Len'] = i['Ticket'].apply(lambda x: len(x))
        del i['Ticket']
    return train, test
The following two functions extract the first letter of the Cabin column and its number, respectively.
def cabin(train, test):
    for i in [train, test]:
        i['Cabin_Letter'] = i['Cabin'].apply(lambda x: str(x)[0])
        del i['Cabin']
    return train, test
def cabin_num(train, test):
    for i in [train, test]:
        i['Cabin_num1'] = i['Cabin'].apply(lambda x: str(x).split(' ')[-1][1:])
        i['Cabin_num1'].replace('an', np.NaN, inplace=True)
        i['Cabin_num1'] = i['Cabin_num1'].apply(lambda x: int(x) if not pd.isnull(x) and x != '' else np.NaN)
        i['Cabin_num'] = pd.qcut(train['Cabin_num1'], 3)
    train = pd.concat((train, pd.get_dummies(train['Cabin_num'], prefix='Cabin_num')), axis=1)
    test = pd.concat((test, pd.get_dummies(test['Cabin_num'], prefix='Cabin_num')), axis=1)
    del train['Cabin_num']
    del test['Cabin_num']
    del train['Cabin_num1']
    del test['Cabin_num1']
    return train, test
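A note on the pd.qcut call above: it bins values into equal-frequency buckets, and because the bins are computed from the training column, both datasets are cut against the same boundaries. A minimal sketch with hypothetical cabin numbers:

```python
import pandas as pd

# Hypothetical cabin numbers standing in for Cabin_num1
nums = pd.Series([2, 5, 7, 33, 46, 58, 101, 110, 125])

# qcut(., 3) produces three equal-frequency buckets (terciles)
bins = pd.qcut(nums, 3)
print(bins.value_counts().tolist())  # [3, 3, 3]
```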
We fill the null values in the Embarked column with the most commonly occurring value, which is 'S'.
def embarked_impute(train, test):
    for i in [train, test]:
        i['Embarked'] = i['Embarked'].fillna('S')
    return train, test
We also fill in the one missing value of Fare in our test set with the mean value of Fare from the training set (transformations of test set data must always be fit using training data).
test['Fare'].fillna(train['Fare'].mean(), inplace=True)
Next, because we are using scikit-learn, we must convert our categorical columns into dummy variables. The following function does this, and then it drops the original categorical columns. It also makes sure that each category is present in both the training and test datasets.
def dummies(train, test, columns=['Pclass', 'Sex', 'Embarked', 'Ticket_Lett', 'Cabin_Letter', 'Name_Title', 'Fam_Size']):
    for column in columns:
        train[column] = train[column].apply(lambda x: str(x))
        test[column] = test[column].apply(lambda x: str(x))
        good_cols = [column + '_' + i for i in train[column].unique() if i in test[column].unique()]
        train = pd.concat((train, pd.get_dummies(train[column], prefix=column)[good_cols]), axis=1)
        test = pd.concat((test, pd.get_dummies(test[column], prefix=column)[good_cols]), axis=1)
        del train[column]
        del test[column]
    return train, test
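The good_cols filter is the key detail here: it keeps only the dummy columns for categories that appear in both sets, so train and test end up with identical columns. A toy illustration with a hypothetical categorical column, where 'C' appears only in train and 'D' only in test:

```python
import pandas as pd

train = pd.DataFrame({'Cabin_Letter': ['A', 'B', 'C']})
test = pd.DataFrame({'Cabin_Letter': ['A', 'B', 'D']})

# Keep only categories present in both train and test
good_cols = ['Cabin_Letter_' + i for i in train['Cabin_Letter'].unique()
             if i in test['Cabin_Letter'].unique()]

train_d = pd.get_dummies(train['Cabin_Letter'], prefix='Cabin_Letter')[good_cols]
test_d = pd.get_dummies(test['Cabin_Letter'], prefix='Cabin_Letter')[good_cols]

print(list(train_d.columns))  # ['Cabin_Letter_A', 'Cabin_Letter_B']
print(list(train_d.columns) == list(test_d.columns))  # True
```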
Our last helper function drops any columns that haven't already been dropped. In our case, we only need to drop the PassengerId column, which we have decided is not useful for our problem (by the way, I've confirmed this with a separate test). Note that dropping the PassengerId column here means that we'll have to load it later when creating our submission file.
def drop(train, test, bye=['PassengerId']):
    for i in [train, test]:
        for z in bye:
            del i[z]
    return train, test
Having built our helper functions, we can now execute them in order to build the dataset that will be used in the model:
train = pd.read_csv(os.path.join('../input', 'train.csv'))
test = pd.read_csv(os.path.join('../input', 'test.csv'))
train, test = names(train, test)
train, test = age_impute(train, test)
train, test = cabin_num(train, test)
train, test = cabin(train, test)
train, test = embarked_impute(train, test)
train, test = fam_size(train, test)
test['Fare'].fillna(train['Fare'].mean(), inplace=True)
train, test = ticket_grouped(train, test)
train, test = dummies(train, test, columns=['Pclass', 'Sex', 'Embarked', 'Ticket_Lett',
                                            'Cabin_Letter', 'Name_Title', 'Fam_Size'])
train, test = drop(train, test)
We can see that our final dataset has 45 columns, composed of our target column and 44 predictor variables. Although high-dimensional datasets can result in high variance, I think we should be fine here.
print(len(train.columns))
Hyperparameter Tuning
We will use grid search to identify the optimal parameters of our random forest model. Because our training dataset is quite small, we can get away with testing a wider range of hyperparameter values. When I ran this on my 8 GB Windows machine, the process took less than ten minutes. I will not run it here for the sake of saving myself time, but I will discuss the results of this grid search.
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(max_features='auto', oob_score=True, random_state=1, n_jobs=-1)
param_grid = {"criterion": ["gini", "entropy"],
              "min_samples_leaf": [1, 5, 10],
              "min_samples_split": [2, 4, 10, 12, 16],
              "n_estimators": [50, 100, 400, 700, 1000]}
gs = GridSearchCV(estimator=rf, param_grid=param_grid, scoring='accuracy', cv=3, n_jobs=-1)
gs = gs.fit(train.iloc[:, 1:], train.iloc[:, 0])
print(gs.best_score_)
print(gs.best_params_)
print(gs.cv_results_)
Looking at the results of the grid search:
0.838383838384
{'min_samples_split': 10, 'n_estimators': 700, 'criterion': 'gini', 'min_samples_leaf': 1}
…we can see that our optimal parameter settings are not at the endpoints of our provided ranges, meaning that we do not have to test more values. What else can we say about our optimal values? The min_samples_split parameter is at 10, which should help mitigate overfitting to a certain degree. This is especially good because we have a relatively large number of estimators (700), which could potentially increase our generalization error.
We are now ready to fit our model using the optimal hyperparameters. The out-of-bag score can give us an unbiased estimate of the model accuracy, and we can see that the score is 82.94%, which is only a little higher than our final leaderboard score.
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(criterion='gini',
                            n_estimators=700,
                            min_samples_split=10,
                            min_samples_leaf=1,
                            max_features='auto',
                            oob_score=True,
                            random_state=1,
                            n_jobs=-1)
rf.fit(train.iloc[:, 1:], train.iloc[:, 0])
print("%.4f" % rf.oob_score_)
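For intuition on why the out-of-bag score is a reasonable accuracy estimate: each tree is fit on a bootstrap sample, so roughly a third of the rows are "out of bag" for any given tree, and scoring each row using only the trees that never saw it acts as a built-in validation set. A minimal sketch on synthetic data (not the Titanic set; the dataset parameters here are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic binary-classification data standing in for our feature matrix
X, y = make_classification(n_samples=500, n_features=10, random_state=1)

rf = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=1, n_jobs=-1)
rf.fit(X, y)

# oob_score_ approximates held-out accuracy without a separate validation split
print("%.4f" % rf.oob_score_)
```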
Let’s take a brief look at our variable importance according to our random forest model. We can see that some of the original columns we predicted would be important in fact were, including gender, fare, and age. But we also see title, name length, and ticket length feature prominently, so we can pat ourselves on the back for creating such useful variables.
pd.concat((pd.DataFrame(train.iloc[:, 1:].columns, columns=['variable']),
           pd.DataFrame(rf.feature_importances_, columns=['importance'])),
          axis=1).sort_values(by='importance', ascending=False)[:20]
Our last step is to predict the target variable for our test data and generate an output file that will be submitted to Kaggle.
predictions = rf.predict(test)
predictions = pd.DataFrame(predictions, columns=['Survived'])
test = pd.read_csv(os.path.join('../input', 'test.csv'))
predictions = pd.concat((test.iloc[:, 0], predictions), axis=1)
predictions.to_csv('y_test15.csv', sep=",", index=False)
Conclusion
This exercise is a good example of how far basic feature engineering can take you. It is worth mentioning that I did try various other models before arriving at this one. Some of the other variations I tried were different groupings for the categorical variables (plenty more combinations remain), linear discriminant analysis on a couple of numeric columns, and eliminating more variables, among other things. This is a competition with a generous allotment of submission attempts, so it's quite possible that even the leaderboard score overestimates the true quality of the model: with many submissions, the public leaderboard acts more like a validation score than a true test score.
I welcome any comments and suggestions.