This post covers how to handle categorical variables; let's jump straight into the code.
# Get list of categorical variables
s = (X_train.dtypes == 'object') # boolean Series: True for columns whose dtype is object, i.e. categorical
object_cols = list(s[s].index) # s[s] keeps only the True entries; their index labels are the categorical column names
print("Categorical variables:")
print(object_cols)
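As a minimal illustration of the boolean-indexing trick above, here is a sketch on a made-up two-column frame (the `X_toy` data and its column names are hypothetical, not from the course dataset):

```python
import pandas as pd

# Hypothetical toy frame: one object (categorical) column and one numeric column
X_toy = pd.DataFrame({'Color': ['Red', 'Blue', 'Red'], 'Size': [1, 2, 3]})

s_toy = (X_toy.dtypes == 'object')        # Color -> True, Size -> False
toy_object_cols = list(s_toy[s_toy].index)  # index labels of the True entries
print(toy_object_cols)  # → ['Color']
```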
Define a scoring function that uses MAE to compare the different approaches.
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
# Function for comparing different approaches
def score_dataset(X_train, X_valid, y_train, y_valid):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    preds = model.predict(X_valid)
    return mean_absolute_error(y_valid, preds)
Score from Approach 1 (Drop Categorical Variables)
We drop the object columns with the select_dtypes() method.
drop_X_train = X_train.select_dtypes(exclude=['object'])
drop_X_valid = X_valid.select_dtypes(exclude=['object'])
print("MAE from Approach 1 (Drop categorical variables):")
print(score_dataset(drop_X_train, drop_X_valid, y_train, y_valid))
Score from Approach 2 (Label Encoding)
Scikit-learn has a LabelEncoder class that can be used to get label encodings. We loop over the categorical variables and apply the label encoder separately to each column.
from sklearn.preprocessing import LabelEncoder
# Make copies to avoid changing the original data
label_X_train = X_train.copy()
label_X_valid = X_valid.copy()
# Apply label encoder to each column with categorical data
label_encoder = LabelEncoder() # instantiate the encoder
for col in object_cols:
    label_X_train[col] = label_encoder.fit_transform(X_train[col])
    # transform() assumes every category in the validation data also appears in the training data
    label_X_valid[col] = label_encoder.transform(X_valid[col])
print("MAE from Approach 2 (Label Encoding):")
print(score_dataset(label_X_train, label_X_valid, y_train, y_valid))
In the code block above, each unique category in a column is assigned an arbitrary integer. This is a common approach that is simpler than supplying custom labels; however, if we provide better-informed labels for ordinal variables, we can expect a further boost in performance.
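For instance, an ordered variable can be mapped by hand instead of letting LabelEncoder pick integers alphabetically. A minimal sketch (the 'Quality' column and its levels are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({'Quality': ['Low', 'High', 'Medium', 'Low']})

# Explicit ordering: higher quality gets a larger integer
quality_order = {'Low': 0, 'Medium': 1, 'High': 2}
df['Quality'] = df['Quality'].map(quality_order)
print(df['Quality'].tolist())  # → [0, 2, 1, 0]
```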
Score from Approach 3 (One-Hot Encoding)
We use the OneHotEncoder class from scikit-learn to get one-hot encodings. There are a number of parameters that can be used to customize its behavior.
- We set handle_unknown='ignore' to avoid errors when the validation data contains classes that aren't represented in the training data, and
- setting sparse=False ensures that the encoded columns are returned as a numpy array (instead of a sparse matrix).
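To see what handle_unknown='ignore' does, here is a small sketch on toy data (the color values are made up; the default sparse output is converted with .toarray() so the snippet works across scikit-learn versions, since newer releases rename sparse to sparse_output):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

encoder = OneHotEncoder(handle_unknown='ignore')
encoder.fit(np.array([['Red'], ['Blue']]))  # learned categories: Blue, Red

# 'Green' was never seen during fit: with handle_unknown='ignore'
# it is encoded as an all-zero row instead of raising an error
row = encoder.transform(np.array([['Green']])).toarray()
print(row)  # → [[0. 0.]]
```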
To use the encoder, we supply only the categorical columns that we want to be one-hot encoded. For instance, to encode the training data, we supply X_train[object_cols]. (object_cols in the code cell below is a list of the column names with categorical data, and so X_train[object_cols] contains all of the categorical data in the training set.)
import pandas as pd  # needed below to wrap the encoder output in DataFrames
from sklearn.preprocessing import OneHotEncoder
# Apply one-hot encoder to each column with categorical data
OH_encoder = OneHotEncoder(handle_unknown='ignore', sparse=False)
# handle_unknown='ignore' avoids errors on classes present in the validation set but absent from the training set
# sparse=False returns the encoded columns as a numpy array rather than a sparse matrix
OH_cols_train = pd.DataFrame(OH_encoder.fit_transform(X_train[object_cols]))
OH_cols_valid = pd.DataFrame(OH_encoder.transform(X_valid[object_cols]))
# One-hot encoding removed the index; put it back
OH_cols_train.index = X_train.index
OH_cols_valid.index = X_valid.index
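To see why the index needs restoring: fit_transform returns a plain array with no row labels, so wrapping it in a DataFrame produces a fresh RangeIndex. A sketch with a made-up frame and a non-default index:

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

X_toy = pd.DataFrame({'Color': ['Red', 'Blue']}, index=[10, 20])  # non-default index

enc = OneHotEncoder(handle_unknown='ignore')
encoded = pd.DataFrame(enc.fit_transform(X_toy).toarray())
print(list(encoded.index))  # default RangeIndex: [0, 1]

encoded.index = X_toy.index  # restore the original row labels so concat aligns
print(list(encoded.index))   # → [10, 20]
```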
# Remove categorical columns (will replace with one-hot encoding); what remains are the numerical columns
num_X_train = X_train.drop(object_cols, axis=1)
num_X_valid = X_valid.drop(object_cols, axis=1)
# Add one-hot encoded columns to numerical features; OH_X now holds the numerical columns plus the encoded categorical ones
OH_X_train = pd.concat([num_X_train, OH_cols_train], axis=1)
OH_X_valid = pd.concat([num_X_valid, OH_cols_valid], axis=1)
# Ensure all columns have string names (recent scikit-learn versions reject mixed-type column names)
OH_X_train.columns = OH_X_train.columns.astype(str)
OH_X_valid.columns = OH_X_valid.columns.astype(str)
print("MAE from Approach 3 (One-Hot Encoding):")
print(score_dataset(OH_X_train, OH_X_valid, y_train, y_valid))
Which approach is best?
In this case, dropping the categorical columns (Approach 1) performed worst, since it had the highest MAE score. As for the other two approaches, since the returned MAE scores are so close in value, there doesn't appear to be any meaningful benefit to one over the other.
In general, one-hot encoding (Approach 3) will typically perform best, and dropping the categorical columns (Approach 1) typically performs worst, but it varies on a case-by-case basis.
Conclusion
The world is filled with categorical data. You will be a much more effective data scientist if you know how to use this common data type!