2022.3.7
Finally figured out what fine-tuning in transfer learning actually means.
1) Define a classic pre-trained model as the base
e.g.:
import tensorflow as tf
model_base = tf.keras.applications.VGG16(input_shape=image_size + (3,),  # image_size is an (H, W) tuple
                                         include_top=False, pooling='avg', weights='imagenet')
model_base.summary()
model_base.trainable = False  # freeze the whole convolutional base
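A quick sanity check that the freeze really took effect:
print(len(model_base.trainable_weights))  # should print 0 once model_base.trainable = False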
2) Add your own fully connected layers at the end; note that the input image size can only be changed when include_top=False.
e.g.:
model = tf.keras.models.Sequential()
model.add(model_base)
model.add(tf.keras.layers.Dropout(0.2))
model.add(tf.keras.layers.Dense(512, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))  # because include_top=False, the last layer has to be defined by ourselves
model.summary()
3) Train and see how the results look.
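A minimal sketch of this step, assuming binary classification with datasets named train_ds and val_ds (placeholder names):
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss='binary_crossentropy',
              metrics=['accuracy'])
history = model.fit(train_ds, validation_data=val_ds, epochs=10)  # trains only the new top layers; the frozen base is not updated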
4) To fine-tune, set model_base back to trainable, then mark just the layers you want to tune as trainable and keep the rest frozen.
model_base.trainable = True
print(len(model_base.layers))  # check how many layers the base has
fine_tune_at = -5  # only the last 5 layers will stay trainable
for layer in model_base.layers[:fine_tune_at]:
    layer.trainable = False
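One thing to keep in mind (a sketch using the same assumed train_ds/val_ds as above): after changing trainable, the model has to be compiled again for the change to take effect, and a much lower learning rate is normally used so the pre-trained weights are not wrecked:
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),  # much lower learning rate for fine-tuning
              loss='binary_crossentropy',
              metrics=['accuracy'])
history_fine = model.fit(train_ds, validation_data=val_ds, epochs=5)  # continue training with the last few base layers unfrozen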
So that's how it works.