
Data Mining Task 4: Modeling and Parameter Tuning


Modeling and Parameter Tuning

Contents

1. Linear regression model:
Feature requirements for linear regression;
Handling long-tailed distributions;
Understanding the linear regression model;
2. Model performance validation:
Evaluation functions and objective functions;
Cross-validation;
Leave-one-out validation;
Validation for time-series problems;
Plotting learning curves;
Plotting validation curves;
3. Embedded feature selection:
Lasso regression;
Ridge regression;
Decision trees;
4. Model comparison:
Common linear models;
Common non-linear models;
5. Model tuning:
Greedy tuning;
Grid search tuning;
Bayesian tuning;

Load packages

import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')

Reduce the memory footprint of the data

def reduce_mem_usage(df):
    """ iterate through all the columns of a dataframe and modify the data type
        to reduce memory usage.        
    """
    start_mem = df.memory_usage().sum() / 1024**2  # convert bytes to MB
    print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
    
    for col in df.columns:
        col_type = df[col].dtype
        
        if col_type != object:
            c_min = df[col].min()
            c_max = df[col].max()
            if str(col_type)[:3] == 'int':
                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
                    df[col] = df[col].astype(np.int8)
                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
                    df[col] = df[col].astype(np.int16)
                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
                    df[col] = df[col].astype(np.int32)
                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
                    df[col] = df[col].astype(np.int64)  
            else:
                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
                    df[col] = df[col].astype(np.float16)
                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
                    df[col] = df[col].astype(np.float32)
                else:
                    df[col] = df[col].astype(np.float64)
        else:
            df[col] = df[col].astype('category')

    end_mem = df.memory_usage().sum() / 1024**2  # convert bytes to MB
    print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
    print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
    return df

Load the data

sample_feature = reduce_mem_usage(pd.read_csv('data_for_tree_seperate.csv'))


Select a subset of features for training, to be used in the next steps

continuous_feature_names = [x for x in sample_feature.columns if x not in ['price', 'brand', 'model', 'v2_bin']]

Linear regression & 5-fold cross-validation & simulating real business scenarios

sample_feature = sample_feature.dropna().replace('-', 0).reset_index(drop=True)
sample_feature['notRepairedDamage'] = sample_feature['notRepairedDamage'].astype(np.float32)
train = sample_feature[continuous_feature_names + ['price']]
train_X = train[continuous_feature_names]
train_y = train['price']

Simple modeling

from sklearn.linear_model import LinearRegression
model = LinearRegression(normalize=True)
model = model.fit(train_X, train_y)

Inspect the intercept and coefficients (coef) of the trained linear regression model

print('intercept:' + str(model.intercept_))
sorted(dict(zip(continuous_feature_names, model.coef_)).items(), key=lambda x:x[1], reverse=True)


from matplotlib import pyplot as plt
subsample_index = np.random.randint(low=0, high=len(train_y), size=50)

Plot a scatter of the feature v_9 against the label. The plot shows that the distribution of the model's predictions (blue points) differs considerably from the true labels (black points), and some predicted values are even below 0, which indicates that our model has some problems.

plt.scatter(train_X['v_9'][subsample_index], train_y[subsample_index], color='black')
plt.scatter(train_X['v_9'][subsample_index], model.predict(train_X.loc[subsample_index]), color='blue')
plt.xlabel('v_9')
plt.ylabel('price')
plt.legend(['True Price','Predicted Price'],loc='upper right')
print('The predicted price is obvious different from true price')
plt.show()

From the plots we can see that the label (price) follows a long-tailed distribution, which is unfavorable for modeling and prediction. The reason is that many models assume the error term follows a normal distribution, and long-tailed data violates this assumption. Reference blog post:

https://blog.csdn.net/Noob_daniel/article/details/76087829

import seaborn as sns
print('It is clear to see the price shows a typical exponential distribution')
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
sns.distplot(train_y)
plt.subplot(1,2,2)
sns.distplot(train_y[train_y < np.quantile(train_y, 0.9)])

It is clear to see the price shows a typical exponential distribution
Here we apply a log(x + 1) transformation to the label to bring it closer to a normal distribution

train_y_ln = np.log(train_y + 1)
import seaborn as sns
print('The transformed price seems like normal distribution')
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
sns.distplot(train_y_ln)
plt.subplot(1,2,2)
sns.distplot(train_y_ln[train_y_ln < np.quantile(train_y_ln, 0.9)])

The transformed price seems like normal distribution

model = model.fit(train_X, train_y_ln)
print('intercept:'+ str(model.intercept_))
sorted(dict(zip(continuous_feature_names, model.coef_)).items(), key=lambda x:x[1], reverse=True)

Visualizing again, we find that the predictions are now close to the true values, and no anomalies appear

plt.scatter(train_X['v_9'][subsample_index], train_y[subsample_index], color='black')
plt.scatter(train_X['v_9'][subsample_index], np.exp(model.predict(train_X.loc[subsample_index])), color='blue')
plt.xlabel('v_9')
plt.ylabel('price')
plt.legend(['True Price','Predicted Price'],loc='upper right')
print('The predicted price seems normal after np.log transforming')
plt.show()


5-Fold Cross-Validation

When training parameters on a training set, you will often find that people split the full dataset into three parts (for example, the MNIST handwritten-digit dataset): a training set (train_set), a validation set (valid_set), and a test set (test_set). This is done deliberately to ensure the quality of training. The test set is easy to understand: it is data that never participates in training at all and is used only to measure final performance. The training and validation sets involve the ideas discussed below.

In practice, a trained model usually fits the training data quite well, but its fit on data outside the training set is often much less satisfactory. Therefore we usually do not use all of the data for training; instead we hold out a portion (which does not participate in training) to test the parameters learned from the training set, giving a relatively objective measure of how well they generalize to unseen data. This idea is called cross-validation (Cross Validation).
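As a minimal sketch of the hold-out idea described above (assuming sklearn's train_test_split and the train_X / train_y_ln variables defined earlier; the split variable names here are illustrative), one could set aside a validation set like this:

from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.linear_model import LinearRegression

# Hold out 30% of the data as a validation set; the model never sees it during fitting
split_X, val_X, split_y, val_y = train_test_split(train_X, train_y_ln, test_size=0.3, random_state=0)
model_holdout = LinearRegression().fit(split_X, split_y)
print('MAE on held-out data:', mean_absolute_error(val_y, model_holdout.predict(val_X)))

Cross-validation repeats this hold-out procedure several times with different folds, so every sample is used for validation exactly once.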

from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_absolute_error, make_scorer

def log_transfer(func):
    # Wrap a metric so it is computed on the log of the targets and predictions
    def wrapper(y, yhat):
        result = func(np.log(y), np.nan_to_num(np.log(yhat)))
        return result
    return wrapper

scores = cross_val_score(model, X=train_X, y=train_y, verbose=1, cv=5,
                         scoring=make_scorer(log_transfer(mean_absolute_error)))
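For comparison, a short sketch (assuming the same model and the train_y_ln variable defined earlier) of running the same 5-fold cross-validation directly on the log-transformed target with plain MAE, then averaging the fold scores:

# 5-fold CV on the log-transformed target; scores_ln holds one MAE per fold
scores_ln = cross_val_score(model, X=train_X, y=train_y_ln, verbose=1, cv=5,
                            scoring=make_scorer(mean_absolute_error))
print('AVG MAE (log target):', np.mean(scores_ln))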