Week T6: Hollywood Star Recognition


>- **🍨 This post is a learning log for the [🔗365天深度学习训练营] (365-Day Deep Learning Training Camp)**
>- **🍖 Original author: [K同学啊]**

● Difficulty: consolidate the basics
● Language: Python 3, TensorFlow 2

🍺 Requirements:
1. Complete this task using categorical_crossentropy (the log loss for multi-class classification)
2. Explore the use cases and code implementations of different loss functions

🍻 Stretch goals (optional):
1. Build the VGG-16 network architecture yourself
2. Call the official VGG-16 implementation
3. Train the model with VGG-16

🔎 Exploration (fairly difficult)
1. Reach 60% accuracy

I. Preliminary Work

🚀 My environment:

  • Language: Python 3.11.7
  • Editor: Jupyter Notebook

1. Configure the GPU

If you are running on CPU, you can skip this step.

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]                                        # if there are multiple GPUs, use only GPU 0
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")
gpus

2. Import the Data

import pathlib

data_dir = r"D:\THE MNIST DATABASE\P6-data"   # raw string avoids backslash-escape issues in Windows paths
data_dir = pathlib.Path(data_dir)

3. Inspect the Data

image_count = len(list(data_dir.glob('*/*.jpg')))
print("Total number of images:", image_count)

Output:

Total number of images: 1800

View one of the images:

import PIL

roses = list(data_dir.glob('Jennifer Lawrence/*.jpg'))
PIL.Image.open(str(roses[0]))

Output:

II. Data Preprocessing

1. Load the Data

Use the image_dataset_from_directory method to load the data from disk into a tf.data.Dataset.

On the relationship between the test set and the validation set:

  1. The validation set does not take part in the gradient-descent phase of training; strictly speaking, it is not used to update the model's parameters.
  2. In a broader sense, however, the validation set does participate in a "manual tuning" loop: based on the model's performance on the validation data after each epoch, we decide whether to stop training early, and we adjust hyperparameters such as the learning rate and batch_size according to how performance evolves.
  3. So we can say the validation set also "participates" in training, but without letting the model overfit to it.

batch_size=32
img_height=224
img_width=224

label_mode:

  • int: labels are encoded as integers (the loss function should then be sparse_categorical_crossentropy).
  • categorical: labels are encoded as one-hot categorical vectors (the loss function should then be categorical_crossentropy); see the sketch below for how the two pairings differ.
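As a quick illustration of how label_mode pairs with the loss function, here is a minimal sketch (it reuses data_dir, img_height, img_width and batch_size defined above; only the "categorical" variant is actually used in this post, and the "int" branch is shown purely for contrast):

# Option A: integer labels -> pair with sparse_categorical_crossentropy
ds_int = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir, label_mode="int",
    image_size=(img_height, img_width), batch_size=batch_size)
# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Option B: one-hot labels -> pair with categorical_crossentropy (used below)
ds_cat = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir, label_mode="categorical",
    image_size=(img_height, img_width), batch_size=batch_size)
# model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])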

Load the training set:

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.1,
    subset="training",
    label_mode="categorical",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size
)

Output:

Found 1800 files belonging to 17 classes.
Using 1620 files for training.

Load the validation set:

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.1,
    subset="validation",
    label_mode="categorical",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size
)

Output:

Found 1800 files belonging to 17 classes.
Using 180 files for validation.

Use class_names to print the dataset's labels. The labels correspond to the directory names in alphabetical order.

class_names=train_ds.class_names
print(class_names)

Output:

['Angelina Jolie', 'Brad Pitt', 'Denzel Washington', 'Hugh Jackman', 'Jennifer Lawrence', 'Johnny Depp', 'Kate Winslet', 'Leonardo DiCaprio', 'Megan Fox', 'Natalie Portman', 'Nicole Kidman', 'Robert Downey Jr', 'Sandra Bullock', 'Scarlett Johansson', 'Tom Cruise', 'Tom Hanks', 'Will Smith']

2. Visualize the Data

import matplotlib.pyplot as plt
import numpy as np

plt.figure(figsize=(20, 10))

for images, labels in train_ds.take(1):
    for i in range(20):
        ax = plt.subplot(5, 10, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[np.argmax(labels[i])])
        plt.axis("off")

Output:

3. Check the Data Again

for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break

Output:

(32, 224, 224, 3)
(32, 17)
  • image_batch is a tensor of shape (32, 224, 224, 3): a batch of 32 images of shape 224x224x3 (the last dimension is the RGB color channels).
  • labels_batch is a tensor of shape (32, 17): the one-hot labels for the 32 images (17 classes, because label_mode="categorical").

4. Configure the Dataset

●shuffle(): shuffles the data; a detailed introduction to this function: https://zhuanlan.zhihu.com/p/42417456
●prefetch(): prefetches data to speed up the input pipeline

AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds=val_ds.cache().prefetch(buffer_size=AUTOTUNE)

III. Build the CNN

A convolutional neural network (CNN) takes input tensors of shape (image_height, image_width, color_channels), which encode image height, width, and color information; the batch size is not part of the input shape. color_channels = (R, G, B) corresponds to the three RGB color channels. In this example the CNN input shape is (224, 224, 3), i.e. a color image. We pass this shape to the input_shape argument when declaring the first layer.

Network architecture diagram:

from tensorflow.keras import models, layers

model = models.Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),

    layers.Conv2D(16, (3, 3), activation='relu', input_shape=(img_height, img_width, 3)),  # conv layer 1, 3x3 kernels
    layers.AveragePooling2D((2, 2)),                # pooling layer 1, 2x2 downsampling
    layers.Conv2D(32, (3, 3), activation='relu'),   # conv layer 2, 3x3 kernels
    layers.AveragePooling2D((2, 2)),                # pooling layer 2, 2x2 downsampling
    layers.Dropout(0.5),                            # reduces overfitting, improves generalization
    layers.Conv2D(64, (3, 3), activation='relu'),   # conv layer 3, 3x3 kernels
    layers.AveragePooling2D((2, 2)),                # pooling layer 3, 2x2 downsampling
    layers.Dropout(0.5),                            # regularization
    layers.Conv2D(128, (3, 3), activation='relu'),  # conv layer 4, 3x3 kernels
    layers.Dropout(0.5),

    layers.Flatten(),                               # flatten, connects the conv layers to the dense layers
    layers.Dense(128, activation='relu'),           # fully connected layer, further feature extraction
    layers.Dense(len(class_names))                  # output layer (raw logits; paired with from_logits=True in compile)
])

model.summary()  # print the network architecture

Output:

Model: "sequential"
_________________________________________________________________Layer (type)                Output Shape              Param #   
=================================================================rescaling (Rescaling)       (None, 224, 224, 3)       0         conv2d (Conv2D)             (None, 222, 222, 16)      448       average_pooling2d (Average  (None, 111, 111, 16)      0         Pooling2D)                                                      conv2d_1 (Conv2D)           (None, 109, 109, 32)      4640      average_pooling2d_1 (Avera  (None, 54, 54, 32)        0         gePooling2D)                                                    dropout (Dropout)           (None, 54, 54, 32)        0         conv2d_2 (Conv2D)           (None, 52, 52, 64)        18496     average_pooling2d_2 (Avera  (None, 26, 26, 64)        0         gePooling2D)                                                    dropout_1 (Dropout)         (None, 26, 26, 64)        0         conv2d_3 (Conv2D)           (None, 24, 24, 128)       73856     dropout_2 (Dropout)         (None, 24, 24, 128)       0         flatten (Flatten)           (None, 73728)             0         dense (Dense)               (None, 128)               9437312   dense_1 (Dense)             (None, 17)                2193      =================================================================
Total params: 9536945 (36.38 MB)
Trainable params: 9536945 (36.38 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________

IV. Train the Model

Before the model is ready for training, a few more settings are needed. These are added in the model's compile step:
● Loss function (loss): measures how far off the model is during training; training aims to minimize this value.
● Optimizer (optimizer): determines how the model is updated based on the data it sees and its loss function.
● Metrics (metrics): used to monitor the training and testing steps. The example below uses accuracy, the fraction of images that are classified correctly.


1. Set a Dynamic Learning Rate


📮 The ExponentialDecay function:
tf.keras.optimizers.schedules.ExponentialDecay is a learning-rate decay schedule in TensorFlow that dynamically lowers the learning rate while a network is training. Learning-rate decay is a common technique that helps the optimizer converge more effectively to a good minimum and thus improves model performance.

🔎 Main parameters:
● initial_learning_rate: the initial learning rate.
● decay_steps: the number of steps between decays. After every decay_steps steps the learning rate is decayed exponentially; for example, with decay_steps set to 10, the rate decays every 10 steps.
● decay_rate: the decay factor applied to the learning rate; it determines how fast the rate shrinks and is usually between 0 and 1.
● staircase: a boolean controlling how the decay is applied. If True, the learning rate drops in discrete jumps every decay_steps steps (a staircase shape); if False, it decays continuously.

# set the initial learning rate
initial_learning_rate = 1e-4

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate,
    decay_steps=60,
    decay_rate=0.96,
    staircase=True
)

# pass the exponentially decaying learning rate to the optimizer
opt = tf.keras.optimizers.Adam(learning_rate=lr_schedule)

model.compile(optimizer=opt,
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
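To make the schedule concrete: with staircase=True the learning rate at step t is initial_learning_rate * decay_rate ** (t // decay_steps). A minimal sketch for checking this (the schedule object is callable on a step number; the step values 0, 60 and 120 are chosen only for illustration):

# learning rate at selected training steps under the schedule above
for step in [0, 60, 120]:
    print(step, float(lr_schedule(step)))
# expected with staircase=True: 1e-4, 1e-4 * 0.96 = 9.6e-5, 1e-4 * 0.96**2 = 9.216e-5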

Loss functions in detail:

1. binary_crossentropy (log loss)

The loss function paired with sigmoid, used for binary classification problems.
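For symmetry with the usage examples below, here is a minimal sketch of how binary_crossentropy would be used. This is only an illustration: the present task has 17 classes, so this loss is not used in this post, and the 10-dimensional input and layer sizes are made up for the sketch.

# binary classification head: one sigmoid unit + binary_crossentropy
binary_model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
binary_model.compile(optimizer="adam",
                     loss='binary_crossentropy',
                     metrics=['accuracy'])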

2. categorical_crossentropy (log loss for multi-class classification)

The loss function paired with softmax; use categorical_crossentropy when the labels are one-hot encoded.

Usage 1:

model.compile(optimizer="adam",loss='categorical_crossentropy',metrics=['accuracy'])

Usage 2:

model.compile(optimizer="adam",loss=tf.keras.losses.CategoricalCrossentropy(),metrics=['accuracy'])

3. sparse_categorical_crossentropy (sparse multi-class log loss)

Also paired with softmax; use sparse_categorical_crossentropy when the labels are integer encoded.

📌 Usage 1:

model.compile(optimizer="adam",loss='sparse_categorical_crossentropy',metrics=['accuracy'])

📌 Usage 2:

model.compile(optimizer="adam",loss=tf.keras.losses.SparseCategoricalCrossentropy(),metrics=['accuracy'])

Function signature:

tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=False,
    reduction=losses_utils.ReductionV2.AUTO,
    name='sparse_categorical_crossentropy'
)

Parameters:

  • from_logits: if True, y_pred is interpreted as raw logits and a softmax is applied inside the loss; if False, y_pred is assumed to already be probabilities. Using from_logits=True is usually the more numerically stable choice (see the sketch below);
  • reduction: of type tf.keras.losses.Reduction, controls how per-sample losses are reduced; defaults to AUTO;
  • name: the name of the loss instance.
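A minimal sketch of what from_logits changes (the numbers below are made up for illustration): with from_logits=True the loss applies the softmax itself, so it gives the same value as first applying softmax and then using from_logits=False.

import tensorflow as tf

y_true = [1, 2]                                   # integer labels
logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 0.2, 2.3]])           # raw, unnormalized scores

loss_from_logits = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_from_probs  = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)

probs = tf.nn.softmax(logits)                     # convert logits to probabilities

print(float(loss_from_logits(y_true, logits)))    # softmax applied inside the loss
print(float(loss_from_probs(y_true, probs)))      # probabilities passed in directly; same value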

2. Early Stopping and Saving the Best Model Weights

For a detailed introduction to ModelCheckpoint, see the article "ModelCheckpoint 讲解【TensorFlow2入门手册】".

EarlyStopping() parameters:

  • monitor: the quantity to be monitored.
  • min_delta: the minimum change in the monitored quantity that counts as an improvement; an absolute change smaller than min_delta is treated as no improvement.
  • patience: the number of epochs with no improvement after which training is stopped.
  • verbose: verbosity mode.
  • mode: one of {auto, min, max}. In min mode, training stops when the monitored quantity stops decreasing; in max mode, it stops when the quantity stops increasing; in auto mode, the direction is inferred automatically from the name of the monitored quantity.
  • baseline: baseline value for the monitored quantity; training stops if the model shows no improvement over the baseline.
  • restore_best_weights: whether to restore the model weights from the epoch with the best value of the monitored quantity. If False, the weights obtained at the last step of training are used.

For a detailed introduction to EarlyStopping(), see the article "早停 tf.keras.callbacks.EarlyStopping() 详解【TensorFlow2入门手册】".

from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

epochs = 100

# save the best model weights
checkpointer = ModelCheckpoint('T6_model.h5',
                               monitor='val_accuracy',
                               verbose=1,
                               save_best_only=True,
                               save_weights_only=True)

# set up early stopping
earlystopper = EarlyStopping(monitor='val_accuracy',
                             min_delta=0.001,
                             patience=20,
                             verbose=1)

3. Train the Model

history = model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=epochs,
                    callbacks=[checkpointer, earlystopper])

Output:

Epoch 1/100
51/51 [==============================] - ETA: 0s - loss: 2.8148 - accuracy: 0.1037
Epoch 1: val_accuracy improved from -inf to 0.13889, saving model to T6_model.h5
51/51 [==============================] - 54s 724ms/step - loss: 2.8148 - accuracy: 0.1037 - val_loss: 2.7825 - val_accuracy: 0.1389
Epoch 2/100
51/51 [==============================] - ETA: 0s - loss: 2.7602 - accuracy: 0.1191
Epoch 2: val_accuracy improved from 0.13889 to 0.15000, saving model to T6_model.h5
51/51 [==============================] - 36s 695ms/step - loss: 2.7602 - accuracy: 0.1191 - val_loss: 2.7352 - val_accuracy: 0.1500
……
Epoch 58/100
51/51 [==============================] - ETA: 0s - loss: 0.1992 - accuracy: 0.9407
Epoch 58: val_accuracy did not improve from 0.34444
51/51 [==============================] - 30s 596ms/step - loss: 0.1992 - accuracy: 0.9407 - val_loss: 3.7554 - val_accuracy: 0.3056
Epoch 59/100
51/51 [==============================] - ETA: 0s - loss: 0.2070 - accuracy: 0.9383
Epoch 59: val_accuracy did not improve from 0.34444
51/51 [==============================] - 31s 606ms/step - loss: 0.2070 - accuracy: 0.9383 - val_loss: 3.7213 - val_accuracy: 0.3222

By around epoch 59 the validation accuracy was barely reaching 30%, so I stopped the run early and rebuilt the model as a VGG-16 architecture instead.

from tensorflow.keras import models, layers
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout

def VGG16(nb_classes, input_shape):
    input_tensor = layers.Input(shape=input_shape)

    # conv block 1
    x = Conv2D(64, (3, 3), activation='relu', padding='same')(input_tensor)
    x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2))(x)

    # conv block 2
    x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2))(x)

    # conv block 3
    x = Conv2D(256, (3, 3), activation='relu', padding='same')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2))(x)

    # conv block 4
    x = Conv2D(512, (3, 3), activation='relu', padding='same')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2))(x)

    # conv block 5
    x = Conv2D(512, (3, 3), activation='relu', padding='same')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2))(x)

    # fully connected layers
    x = Flatten()(x)
    x = Dense(4096, activation='relu')(x)
    x = Dense(4096, activation='relu')(x)
    output_tensor = Dense(nb_classes, activation='softmax')(x)  # output layer, softmax over the classes

    model = models.Model(input_tensor, output_tensor)
    return model

model = VGG16(len(class_names), (img_height, img_width, 3))
model.summary()  # print the network architecture

Output (after compiling the new model and training it as above, this time for 20 epochs):

Epoch 1/20
C:\Users\Administrator\AppData\Roaming\Python\Python311\site-packages\keras\src\backend.py:5562: UserWarning: `categorical_crossentropy` received `from_logits=True`, but the `output` argument was produced by a Softmax activation and thus does not represent logits. Was this intended?
  output, from_logits = _get_logits(
51/51 [==============================] - ETA: 0s - loss: 2.8623 - accuracy: 0.0981 
Epoch 1: val_accuracy improved from -inf to 0.13889, saving model to T6_model.h5
51/51 [==============================] - 632s 12s/step - loss: 2.8623 - accuracy: 0.0981 - val_loss: 2.8038 - val_accuracy: 0.1389
Epoch 2/20
51/51 [==============================] - ETA: 0s - loss: 2.8088 - accuracy: 0.1068 
Epoch 2: val_accuracy improved from 0.13889 to 0.15000, saving model to T6_model.h5
51/51 [==============================] - 619s 12s/step - loss: 2.8088 - accuracy: 0.1068 - val_loss: 2.6988 - val_accuracy: 0.1500
Epoch 3/20
51/51 [==============================] - ETA: 0s - loss: 2.6224 - accuracy: 0.1426 
Epoch 3: val_accuracy did not improve from 0.15000
51/51 [==============================] - 623s 12s/step - loss: 2.6224 - accuracy: 0.1426 - val_loss: 2.5018 - val_accuracy: 0.1333
Epoch 4/20
51/51 [==============================] - ETA: 0s - loss: 2.3467 - accuracy: 0.2179 
Epoch 4: val_accuracy improved from 0.15000 to 0.25000, saving model to T6_model.h5
51/51 [==============================] - 620s 12s/step - loss: 2.3467 - accuracy: 0.2179 - val_loss: 2.3408 - val_accuracy: 0.2500
Epoch 5/20
51/51 [==============================] - ETA: 0s - loss: 2.0764 - accuracy: 0.3179 
Epoch 5: val_accuracy improved from 0.25000 to 0.26111, saving model to T6_model.h5
51/51 [==============================] - 620s 12s/step - loss: 2.0764 - accuracy: 0.3179 - val_loss: 2.2325 - val_accuracy: 0.2611
Epoch 6/20
51/51 [==============================] - ETA: 0s - loss: 1.8253 - accuracy: 0.3926 
Epoch 6: val_accuracy improved from 0.26111 to 0.28333, saving model to T6_model.h5
51/51 [==============================] - 620s 12s/step - loss: 1.8253 - accuracy: 0.3926 - val_loss: 2.1555 - val_accuracy: 0.2833
Epoch 7/20
51/51 [==============================] - ETA: 0s - loss: 1.4368 - accuracy: 0.5160 
Epoch 7: val_accuracy improved from 0.28333 to 0.32222, saving model to T6_model.h5
51/51 [==============================] - 620s 12s/step - loss: 1.4368 - accuracy: 0.5160 - val_loss: 2.1915 - val_accuracy: 0.3222
Epoch 8/20
51/51 [==============================] - ETA: 0s - loss: 1.0111 - accuracy: 0.6691 
Epoch 8: val_accuracy did not improve from 0.32222
51/51 [==============================] - 661s 13s/step - loss: 1.0111 - accuracy: 0.6691 - val_loss: 2.5368 - val_accuracy: 0.3111
Epoch 9/20
51/51 [==============================] - ETA: 0s - loss: 0.4512 - accuracy: 0.8605 
Epoch 9: val_accuracy improved from 0.32222 to 0.37778, saving model to T6_model.h5
51/51 [==============================] - 646s 13s/step - loss: 0.4512 - accuracy: 0.8605 - val_loss: 2.7887 - val_accuracy: 0.3778
Epoch 10/20
51/51 [==============================] - ETA: 0s - loss: 0.1820 - accuracy: 0.9481 
Epoch 10: val_accuracy did not improve from 0.37778
51/51 [==============================] - 641s 13s/step - loss: 0.1820 - accuracy: 0.9481 - val_loss: 3.8434 - val_accuracy: 0.3722
Epoch 11/20
51/51 [==============================] - ETA: 0s - loss: 0.0769 - accuracy: 0.9778 
Epoch 11: val_accuracy did not improve from 0.37778
51/51 [==============================] - 641s 13s/step - loss: 0.0769 - accuracy: 0.9778 - val_loss: 5.9557 - val_accuracy: 0.3222
Epoch 12/20
51/51 [==============================] - ETA: 0s - loss: 0.1315 - accuracy: 0.9636 
Epoch 12: val_accuracy did not improve from 0.37778
51/51 [==============================] - 645s 13s/step - loss: 0.1315 - accuracy: 0.9636 - val_loss: 4.4689 - val_accuracy: 0.3667
Epoch 13/20
51/51 [==============================] - ETA: 0s - loss: 0.0243 - accuracy: 0.9938 
Epoch 13: val_accuracy improved from 0.37778 to 0.41111, saving model to T6_model.h5
51/51 [==============================] - 658s 13s/step - loss: 0.0243 - accuracy: 0.9938 - val_loss: 4.9613 - val_accuracy: 0.4111
Epoch 14/20
51/51 [==============================] - ETA: 0s - loss: 0.0120 - accuracy: 0.9963 
Epoch 14: val_accuracy did not improve from 0.41111
51/51 [==============================] - 664s 13s/step - loss: 0.0120 - accuracy: 0.9963 - val_loss: 5.4229 - val_accuracy: 0.3944
Epoch 15/20
51/51 [==============================] - ETA: 0s - loss: 0.0038 - accuracy: 0.9994 
Epoch 15: val_accuracy improved from 0.41111 to 0.42222, saving model to T6_model.h5
51/51 [==============================] - 626s 12s/step - loss: 0.0038 - accuracy: 0.9994 - val_loss: 6.1491 - val_accuracy: 0.4222
Epoch 16/20
51/51 [==============================] - ETA: 0s - loss: 0.0010 - accuracy: 0.9994 
Epoch 16: val_accuracy did not improve from 0.42222
51/51 [==============================] - 622s 12s/step - loss: 0.0010 - accuracy: 0.9994 - val_loss: 5.7233 - val_accuracy: 0.4222
Epoch 17/20
51/51 [==============================] - ETA: 0s - loss: 0.0090 - accuracy: 0.9969 
Epoch 17: val_accuracy improved from 0.42222 to 0.42778, saving model to T6_model.h5
51/51 [==============================] - 623s 12s/step - loss: 0.0090 - accuracy: 0.9969 - val_loss: 5.8963 - val_accuracy: 0.4278
Epoch 18/20
51/51 [==============================] - ETA: 0s - loss: 0.0017 - accuracy: 0.9994 
Epoch 18: val_accuracy did not improve from 0.42778
51/51 [==============================] - 624s 12s/step - loss: 0.0017 - accuracy: 0.9994 - val_loss: 6.1538 - val_accuracy: 0.4278
Epoch 19/20
51/51 [==============================] - ETA: 0s - loss: 1.1349e-04 - accuracy: 1.0000 
Epoch 19: val_accuracy did not improve from 0.42778
51/51 [==============================] - 624s 12s/step - loss: 1.1349e-04 - accuracy: 1.0000 - val_loss: 6.2141 - val_accuracy: 0.4278
Epoch 20/20
51/51 [==============================] - ETA: 0s - loss: 8.4786e-05 - accuracy: 1.0000 
Epoch 20: val_accuracy improved from 0.42778 to 0.43333, saving model to T6_model.h5
51/51 [==============================] - 626s 12s/step - loss: 8.4786e-05 - accuracy: 1.0000 - val_loss: 6.2625 - val_accuracy: 0.4333

V. Model Evaluation

1. Loss and Accuracy Curves

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(len(loss))

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

Output:

The curves show that the model is overfitting severely.

VI. Model Improvements and Reflections

I tried adding more Dropout to combat the overfitting, but the results turned out even worse.

Since hand-tuning the model was getting tedious, I tried importing the pretrained VGG16 model directly instead.

from tensorflow.keras.applications import VGG16

model_base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

The VGG16 constructor takes the following parameters:

  1. weights: how the weights are initialized; one of:

    • 'imagenet': load weights pretrained on the ImageNet dataset.
    • None: do not load pretrained weights, i.e. use random initialization.
  2. include_top: whether to include the top fully connected (classification) layers. Defaults to True. If set to False, only the convolutional base is returned, which is typically used for feature extraction.

  3. input_shape: the shape of the input images, e.g. (224, 224, 3) for a 224x224-pixel RGB image.

  4. classes: the number of classes, i.e. the number of units in the output layer. For the pretrained ImageNet weights this defaults to 1000 (one per ImageNet category); if your own dataset has a different number of classes, set this parameter accordingly.

  5. pooling: the pooling applied to the output of the convolutional base when include_top is False. None (the default) returns the 4D feature map of the last convolutional block, 'avg' applies global average pooling, and 'max' applies global max pooling.

Here include_top is set to False, i.e. only the convolutional base is imported. The reason is that the fully connected layers account for the vast majority of VGG-16's parameters, and since I am running the CPU-only build of TensorFlow the full model would take far too long to train, so I chose to build my own classifier head on top instead.
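A closely related option in this feature-extraction setup is to freeze the convolutional base so that only the new classifier head is trained. This was not done in the run below, so the timings and results that follow do not reflect it; a minimal sketch:

# freeze the pretrained convolutional base: its ImageNet weights stay fixed
model_base.trainable = False
# only the layers added on top of model_base are then updated during training,
# which greatly reduces the trainable parameter count and the cost per epoch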

from tensorflow.keras import models, layers

model = models.Sequential()
model.add(model_base)
model.add(layers.Flatten())
model.add(layers.Dense(256,activation='relu'))
model.add(layers.Dropout(0.4))
model.add(layers.Dense(len(class_names)))
model.summary()

Output:

Model: "sequential"
_________________________________________________________________Layer (type)                Output Shape              Param #   
=================================================================vgg16 (Functional)          (None, 7, 7, 512)         14714688  flatten (Flatten)           (None, 25088)             0         dense (Dense)               (None, 256)               6422784   dropout (Dropout)           (None, 256)               0         dense_1 (Dense)             (None, 17)                4369      =================================================================
Total params: 21141841 (80.65 MB)
Trainable params: 21141841 (80.65 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________

Compared with the full VGG-16 including its original fully connected layers (roughly 138 million parameters), the total parameter count is greatly reduced.

Results after training the model:

Epoch 1/20
51/51 [==============================] - ETA: 0s - loss: 3.6238 - accuracy: 0.0654 
Epoch 1: val_accuracy improved from -inf to 0.05000, saving model to T6_model.h5
51/51 [==============================] - 672s 13s/step - loss: 3.6238 - accuracy: 0.0654 - val_loss: 2.8251 - val_accuracy: 0.0500
Epoch 2/20
51/51 [==============================] - ETA: 0s - loss: 2.8299 - accuracy: 0.0932 
Epoch 2: val_accuracy improved from 0.05000 to 0.12778, saving model to T6_model.h5
51/51 [==============================] - 647s 13s/step - loss: 2.8299 - accuracy: 0.0932 - val_loss: 2.8338 - val_accuracy: 0.1278
Epoch 3/20
51/51 [==============================] - ETA: 0s - loss: 2.8214 - accuracy: 0.1006 
Epoch 3: val_accuracy did not improve from 0.12778
51/51 [==============================] - 603s 12s/step - loss: 2.8214 - accuracy: 0.1006 - val_loss: 2.8255 - val_accuracy: 0.0500
Epoch 4/20
51/51 [==============================] - ETA: 0s - loss: 2.7933 - accuracy: 0.0951 
Epoch 4: val_accuracy improved from 0.12778 to 0.13889, saving model to T6_model.h5
51/51 [==============================] - 637s 12s/step - loss: 2.7933 - accuracy: 0.0951 - val_loss: 2.8041 - val_accuracy: 0.1389
Epoch 5/20
51/51 [==============================] - ETA: 0s - loss: 2.7476 - accuracy: 0.1191 
Epoch 5: val_accuracy improved from 0.13889 to 0.15000, saving model to T6_model.h5
51/51 [==============================] - 678s 13s/step - loss: 2.7476 - accuracy: 0.1191 - val_loss: 2.7612 - val_accuracy: 0.1500
Epoch 6/20
51/51 [==============================] - ETA: 0s - loss: 2.6780 - accuracy: 0.1469 
Epoch 6: val_accuracy did not improve from 0.15000
51/51 [==============================] - 684s 14s/step - loss: 2.6780 - accuracy: 0.1469 - val_loss: 2.6977 - val_accuracy: 0.1500
Epoch 7/20
51/51 [==============================] - ETA: 0s - loss: 2.6516 - accuracy: 0.1438 
Epoch 7: val_accuracy did not improve from 0.15000
51/51 [==============================] - 638s 12s/step - loss: 2.6516 - accuracy: 0.1438 - val_loss: 2.6933 - val_accuracy: 0.1278
Epoch 8/20
51/51 [==============================] - ETA: 0s - loss: 2.5822 - accuracy: 0.1543 
Epoch 8: val_accuracy improved from 0.15000 to 0.17222, saving model to T6_model.h5
51/51 [==============================] - 601s 12s/step - loss: 2.5822 - accuracy: 0.1543 - val_loss: 2.5967 - val_accuracy: 0.1722
Epoch 9/20
51/51 [==============================] - ETA: 0s - loss: 2.5001 - accuracy: 0.1883 
Epoch 9: val_accuracy did not improve from 0.17222
51/51 [==============================] - 612s 12s/step - loss: 2.5001 - accuracy: 0.1883 - val_loss: 2.5445 - val_accuracy: 0.1667
Epoch 10/20
51/51 [==============================] - ETA: 0s - loss: 2.4183 - accuracy: 0.1994 
Epoch 10: val_accuracy improved from 0.17222 to 0.21111, saving model to T6_model.h5
51/51 [==============================] - 601s 12s/step - loss: 2.4183 - accuracy: 0.1994 - val_loss: 2.4706 - val_accuracy: 0.2111
Epoch 11/20
51/51 [==============================] - ETA: 0s - loss: 2.3507 - accuracy: 0.2284 
Epoch 11: val_accuracy improved from 0.21111 to 0.23333, saving model to T6_model.h5
51/51 [==============================] - 614s 12s/step - loss: 2.3507 - accuracy: 0.2284 - val_loss: 2.4065 - val_accuracy: 0.2333
Epoch 12/20
51/51 [==============================] - ETA: 0s - loss: 2.2458 - accuracy: 0.2698 
Epoch 12: val_accuracy did not improve from 0.23333
51/51 [==============================] - 605s 12s/step - loss: 2.2458 - accuracy: 0.2698 - val_loss: 2.3563 - val_accuracy: 0.2056
Epoch 13/20
51/51 [==============================] - ETA: 0s - loss: 2.1484 - accuracy: 0.2944 
Epoch 13: val_accuracy improved from 0.23333 to 0.27222, saving model to T6_model.h5
51/51 [==============================] - 590s 12s/step - loss: 2.1484 - accuracy: 0.2944 - val_loss: 2.3116 - val_accuracy: 0.2722
Epoch 14/20
51/51 [==============================] - ETA: 0s - loss: 2.0489 - accuracy: 0.3321 
Epoch 14: val_accuracy did not improve from 0.27222
51/51 [==============================] - 591s 12s/step - loss: 2.0489 - accuracy: 0.3321 - val_loss: 2.2721 - val_accuracy: 0.2667
Epoch 15/20
51/51 [==============================] - ETA: 0s - loss: 1.9441 - accuracy: 0.3537 
Epoch 15: val_accuracy did not improve from 0.27222
51/51 [==============================] - 591s 12s/step - loss: 1.9441 - accuracy: 0.3537 - val_loss: 2.2598 - val_accuracy: 0.2444
Epoch 16/20
51/51 [==============================] - ETA: 0s - loss: 1.7549 - accuracy: 0.4099 
Epoch 16: val_accuracy improved from 0.27222 to 0.32778, saving model to T6_model.h5
51/51 [==============================] - 590s 12s/step - loss: 1.7549 - accuracy: 0.4099 - val_loss: 2.2388 - val_accuracy: 0.3278
Epoch 17/20
51/51 [==============================] - ETA: 0s - loss: 1.5727 - accuracy: 0.4691 
Epoch 17: val_accuracy did not improve from 0.32778
51/51 [==============================] - 569s 11s/step - loss: 1.5727 - accuracy: 0.4691 - val_loss: 2.1595 - val_accuracy: 0.2889
Epoch 18/20
51/51 [==============================] - ETA: 0s - loss: 1.4369 - accuracy: 0.5160 
Epoch 18: val_accuracy did not improve from 0.32778
51/51 [==============================] - 569s 11s/step - loss: 1.4369 - accuracy: 0.5160 - val_loss: 2.1593 - val_accuracy: 0.2944
Epoch 19/20
51/51 [==============================] - ETA: 0s - loss: 1.2512 - accuracy: 0.5815 
Epoch 19: val_accuracy improved from 0.32778 to 0.33333, saving model to T6_model.h5
51/51 [==============================] - 570s 11s/step - loss: 1.2512 - accuracy: 0.5815 - val_loss: 2.3871 - val_accuracy: 0.3333
Epoch 20/20
51/51 [==============================] - ETA: 0s - loss: 1.0838 - accuracy: 0.6444 
Epoch 20: val_accuracy improved from 0.33333 to 0.39444, saving model to T6_model.h5
51/51 [==============================] - 571s 11s/step - loss: 1.0838 - accuracy: 0.6444 - val_loss: 2.1118 - val_accuracy: 0.3944

Because I was never able to install the GPU build of TensorFlow, this project ran very slowly: just 20 epochs took roughly 5 hours. Although the model did not reach the required 60% validation accuracy, the results show it was still improving steadily. Since the runs were simply too time-consuming, I am stopping the project here, and I hope to continue refining this model later on a GPU build.

