Hi everyone, I'm 带我去滑雪!
In machine-learning models, radiomics features fall into three categories. The first is relevant features, which contribute positively to learning and can improve an algorithm's performance. The second is irrelevant features, which provide no help to the algorithm and do not improve the model. The third is redundant features, whose information can be derived from other features, so they add no new relevant information for the algorithm.
In feature screening or dimensionality reduction, the goal is to reduce or remove irrelevant and noisy features so that they stop influencing the model, while keeping the relevant features the model actually needs. The core purpose of dimensionality reduction is to improve the learning efficiency of the model and to lower the risk of overfitting. Common feature-selection approaches are: filter methods (set a threshold on some statistical criterion and filter by it; the variance threshold, chi-square test, correlation coefficient, and mutual information methods all belong here), wrapper methods (iterate against an objective function, keeping a subset of features each round; the most common wrapper method is recursive feature elimination, RFE), and embedded methods (obtain a weight for each feature from the learning algorithm itself, then rank and filter; LASSO, ridge regression, and gradient boosting decision trees (GBDT) all belong here). Now for the hands-on code.
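The walkthroughs below all read the author's local Meter_D.csv, which readers won't have. As a stand-in, a synthetic table with the same layout (numeric feature columns plus a binary label column named "V44") can be generated with scikit-learn; the sizes and column names here are invented purely for illustration:

```python
# Build a synthetic stand-in for Meter_D.csv: 30 numeric features,
# a binary label column named "V44" in the last position.
import pandas as pd
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=30, n_informative=8,
                           n_redundant=6, random_state=0)
data = pd.DataFrame(X, columns=[f"F{i}" for i in range(30)])
data["V44"] = y  # label column named to match the blog's CSV
print(data.shape)  # (200, 31)
```

With this in place, every `pd.read_csv(...)` line below can be replaced by the snippet above and the rest of the code runs unchanged.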
Contents
I. Filter Methods
(1) Variance Threshold
(2) t-Test
(3) Mann-Whitney U Test
(4) Mutual Information
II. Wrapper Methods
III. Embedded Methods
I. Filter Methods
(1) Variance Threshold
from sklearn.feature_selection import VarianceThreshold
import pandas as pd

# Load the radiomics feature table; the last column "V44" is the label
data = pd.read_csv(r'E:\工作\硕士\博客\博客95-多种方法实现影像组学特征筛选\Meter_D.csv')
x = data.iloc[:, 0:-1]
y = data["V44"]
print(x.shape)

# Drop features whose variance falls below 0.1
selector_var = VarianceThreshold(0.1)
x_var = selector_var.fit_transform(x)
print("Shape after filtering:", x_var.shape)
support_mask = selector_var.get_support()
print("Mask of retained features:", support_mask)
selected_features = x.columns[support_mask]
print("Names of retained features:", selected_features)
x_var = pd.DataFrame(x_var, columns=selected_features)
print(x_var)
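To see concretely what the 0.1 cutoff does, here is a tiny sketch on made-up data: a near-constant column falls below the variance threshold and is dropped, while a well-spread column survives. The column names are invented for illustration.

```python
# VarianceThreshold drops the near-constant column and keeps the spread one
import pandas as pd
from sklearn.feature_selection import VarianceThreshold

df = pd.DataFrame({
    "almost_constant": [1.0, 1.0, 1.0, 1.01, 1.0],  # variance ~1.6e-5
    "spread_out":      [0.2, 1.5, 3.1, 0.9, 2.4],   # variance ~1.07
})
sel = VarianceThreshold(threshold=0.1)
kept = sel.fit_transform(df)
print(df.columns[sel.get_support()].tolist())  # ['spread_out']
```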
(2) t-Test
import pandas as pd
from scipy.stats import ttest_ind, levene

data = pd.read_csv(r'E:\工作\硕士\博客\博客95-多种方法实现影像组学特征筛选\Meter_D.csv')
# Split the samples into two groups by the label "V44"
data_a = data[data["V44"] == 0]
data_b = data[data["V44"] == 1]
x_a = data_a.iloc[:, 0:-1]
x_b = data_b.iloc[:, 0:-1]
print(x_a.shape, x_b.shape)

# Levene's test decides which t-test variant to apply to each feature
colNamesSel_t = []
for colName in x_a.columns:
    if levene(x_a[colName], x_b[colName])[1] > 0.05:
        # Variances look equal: standard Student's t-test
        if ttest_ind(x_a[colName], x_b[colName])[1] < 0.05:
            colNamesSel_t.append(colName)
    else:
        # Variances differ: Welch's t-test
        if ttest_ind(x_a[colName], x_b[colName], equal_var=False)[1] < 0.05:
            colNamesSel_t.append(colName)
print(len(colNamesSel_t))
print(colNamesSel_t)
(3) Mann-Whitney U Test
import pandas as pd
from scipy.stats import mannwhitneyu

data = pd.read_csv(r'E:\工作\硕士\博客\博客95-多种方法实现影像组学特征筛选\Meter_D.csv')
# Split the samples into two groups by the label "V44"
data_a = data[data["V44"] == 0]
data_b = data[data["V44"] == 1]
x_a = data_a.iloc[:, 0:-1]
x_b = data_b.iloc[:, 0:-1]
print(x_a.shape, x_b.shape)

# Keep features whose distributions differ significantly between the groups
colNamesSel_mvU = []
for colName in x_a.columns:
    if mannwhitneyu(x_a[colName], x_b[colName])[1] < 0.05:
        colNamesSel_mvU.append(colName)
print(len(colNamesSel_mvU))
print(colNamesSel_mvU)
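Unlike the t-test, the Mann-Whitney U test makes no normality assumption, which suits skewed radiomics features. A minimal sketch on invented exponential (heavily skewed) data:

```python
# Mann-Whitney U on skewed data: a scale difference shifts the ranks
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
group0 = rng.exponential(scale=1.0, size=100)  # skewed, smaller typical values
group1 = rng.exponential(scale=3.0, size=100)  # skewed, larger typical values

p = mannwhitneyu(group0, group1)[1]
print(p < 0.05)
```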
(4) Mutual Information
from sklearn.feature_selection import mutual_info_classif as MI
from sklearn.feature_selection import SelectKBest
import pandas as pd

data = pd.read_csv(r'E:\工作\硕士\博客\博客95-多种方法实现影像组学特征筛选\Meter_D.csv')
x = data.iloc[:, 0:-1]
y = data["V44"]
print(x.shape)

# Mutual information between each feature and the label
MI_result = MI(x, y)
print(MI_result)
# Keep features whose mutual information exceeds 0.1
x_MI = x[x.columns[MI_result > 0.1]]
print(x_MI)

# Alternatively, keep the 15 highest-scoring features with SelectKBest
SKB = SelectKBest(MI, k=15)
SKB.fit(x, y)
print(SKB.scores_)
print(SKB.get_support())
x_MI_k15 = SKB.transform(x)
x_MI_k15 = pd.DataFrame(x_MI_k15, columns=x.columns[SKB.get_support()])
print(x_MI_k15)
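A self-contained sketch of the SelectKBest route on synthetic data: score every column by mutual information with the label and keep the k highest scorers. The dataset and k value here are invented for illustration.

```python
# SelectKBest keeps exactly the k top-scoring features
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                           n_redundant=0, random_state=0)
x = pd.DataFrame(X, columns=[f"F{i}" for i in range(10)])

skb = SelectKBest(mutual_info_classif, k=4).fit(x, y)
kept = x.columns[skb.get_support()].tolist()
print(len(kept))  # 4 feature names survive
```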
II. Wrapper Methods
from sklearn.feature_selection import RFE
from sklearn.ensemble import RandomForestClassifier
import pandas as pd

data = pd.read_csv(r'E:\工作\硕士\博客\博客95-多种方法实现影像组学特征筛选\Meter_D.csv')
x = data.iloc[:, 0:-1]
y = data["V44"]
print(x.shape)

# Recursive feature elimination around a random forest, keeping 15 features
RFC = RandomForestClassifier(n_estimators=25, random_state=11)
selector_RFE = RFE(RFC, n_features_to_select=15, step=1).fit(x, y)
print(selector_RFE.n_features_)
print(selector_RFE.ranking_)
print(selector_RFE.support_)
x_RFE = x[x.columns[selector_RFE.support_]]
print(x_RFE)
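A compact, runnable version of the same RFE idea on synthetic data: with `step=1`, the forest is refit repeatedly and the weakest feature is dropped each round until the requested number remain. Feature counts and names below are invented for illustration.

```python
# RFE with step=1: features eliminated one per round get ranks 2, 3, ...,
# while every kept feature has rank 1
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

X, y = make_classification(n_samples=200, n_features=12, n_informative=4,
                           random_state=0)
x = pd.DataFrame(X, columns=[f"F{i}" for i in range(12)])

rfe = RFE(RandomForestClassifier(n_estimators=25, random_state=11),
          n_features_to_select=5, step=1).fit(x, y)
print(int(rfe.support_.sum()))     # 5 features kept (rank 1)
print(int(rfe.ranking_.max()))     # 8: the 7 dropped features rank 2..8
```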
III. Embedded Methods
from sklearn.linear_model import LassoCV
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Load the data
data = pd.read_csv(r'E:\工作\硕士\博客\博客95-多种方法实现影像组学特征筛选\Meter_D.csv')
x = data.iloc[:, :-1]
y = data["V44"]
print(x.shape)

# Candidate alpha values for LassoCV
alphas = np.logspace(-10, -1, 100, base=10)
# Fit LassoCV
selector_lasso = LassoCV(alphas=alphas, cv=5, max_iter=int(1e6))
selector_lasso.fit(x, y)

# Optimal alpha and the fitted coefficients
print("Optimal alpha:", selector_lasso.alpha_)
print("Feature coefficients:", selector_lasso.coef_)

# Keep the features with nonzero coefficients
df_selected_features = x.loc[:, selector_lasso.coef_ != 0]
print("Selected features:", df_selected_features.columns.tolist())

# Intercept and the shape of the cross-validation MSE path
print("Intercept:", selector_lasso.intercept_)
print("MSE path shape:", selector_lasso.mse_path_.shape)

# Mean and standard deviation of the MSE across folds
MSEs_mean = selector_lasso.mse_path_.mean(axis=1)
MSEs_std = selector_lasso.mse_path_.std(axis=1)

# Plot 1: mean MSE with error bars, optimal alpha marked
plt.figure()
plt.errorbar(selector_lasso.alphas_, MSEs_mean, yerr=MSEs_std, fmt="o",
             ms=3, mfc="r", mec="r", ecolor="lightblue",
             elinewidth=2, capsize=4, capthick=1)
plt.semilogx()
plt.axvline(selector_lasso.alpha_, color="black", ls="--")
plt.xlabel("Lambda")
plt.ylabel("MSE")
plt.show()

# One standard error above the minimum MSE (1-SE rule)
alpha_index = np.where(selector_lasso.alphas_ == selector_lasso.alpha_)[0][0]
MSE_mean_SE = MSEs_mean[alpha_index] + MSEs_std[alpha_index]
alpha_se_index = np.min(np.where(MSEs_mean <= MSE_mean_SE))
alpha_se = selector_lasso.alphas_[alpha_se_index]
print("Alpha at 1-SE rule:", alpha_se)

# Plot 2: same curve with the 1-SE alpha marked as well
plt.figure()
plt.errorbar(selector_lasso.alphas_, MSEs_mean, yerr=MSEs_std, fmt="o",
             ms=3, mfc="r", mec="r", ecolor="lightblue",
             elinewidth=2, capsize=4, capthick=1)
plt.semilogx()
plt.axvline(selector_lasso.alpha_, color="black", ls="--")
plt.axvline(alpha_se, color="black", ls="--")
plt.xlabel("Lambda")
plt.ylabel("MSE")
plt.show()

# Refit at alpha_se and keep the features with nonzero coefficients
selector_lasso_se = LassoCV(alphas=[alpha_se], cv=5, max_iter=int(1e6))
selector_lasso_se.fit(x, y)
selected_features_se = x.columns[selector_lasso_se.coef_ != 0]
print("Selected features at alpha_se:", selected_features_se.tolist())
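The reason LASSO works as an embedded selector is that the L1 penalty drives the coefficients of uninformative features to exactly zero. A minimal self-contained sketch on synthetic regression data (sizes and noise level invented for illustration):

```python
# LassoCV zeroes out coefficients of uninformative features;
# selection is then just "coef_ != 0"
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV

X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=5.0, random_state=0)
lasso = LassoCV(cv=5, max_iter=int(1e6)).fit(X, y)

n_selected = int(np.sum(lasso.coef_ != 0))
print(n_selected)                 # how many features kept a nonzero weight
print(round(lasso.score(X, y), 3))  # in-sample R^2 of the fitted model
```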
More quality content is on the way; check my homepage for updates.
Questions? Reach me by email: 1736732074@qq.com
My WeChat: TCB1736732074
Like and follow so you don't lose your way next time!