CNN(functional)_model_fit

Training a CNN Functional Model

In [0]:
# Runtime -> Change runtime type -> set the hardware accelerator to TPU
%tensorflow_version 2.x
# Runtime -> Restart runtime
TensorFlow 2.x selected.
In [0]:
# Python 2/3 compatibility imports (no-ops on Python 3)
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

1. Importing Libraries

In [0]:
import tensorflow as tf 
from tensorflow import keras
from tensorflow.keras.utils import to_categorical # one-hot encoding
import numpy as np
import matplotlib.pyplot as plt
import os

print(tf.__version__)     # check the TensorFlow version (Colab's default was 1.15.0 --> switched to 2.x via "%tensorflow_version 2.x")
print(keras.__version__)  # check the Keras version
2.1.0-rc1
2.2.4-tf

2. Hyper Parameters

In [0]:
learning_rate = 0.001
training_epochs = 15
batch_size = 100
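
Note that `learning_rate` is never actually passed to the optimizer: `compile` below receives the string `'adam'`, which uses the Keras default learning rate (which also happens to be 0.001). A minimal sketch of how to make the hyperparameter take effect explicitly:

# Sketch (assumption: you want learning_rate applied explicitly)
optimizer = keras.optimizers.Adam(learning_rate=learning_rate)
# ...then pass optimizer=optimizer instead of optimizer='adam' to model.compile()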

3. MNIST Data

In [0]:
mnist = keras.datasets.mnist
class_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
In [0]:
# MNIST image load (train, test)
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()    

# Normalize the input images so their pixel values (0~255) fall within [0, 1]
train_images = train_images.astype(np.float32) / 255.
test_images = test_images.astype(np.float32) / 255.

# np.expand_dims adds a trailing channel axis: (N, 28, 28) -> (N, 28, 28, 1)
train_images = np.expand_dims(train_images, axis=-1)
test_images = np.expand_dims(test_images, axis=-1)

# one-hot encode the labels
train_labels = to_categorical(train_labels, 10)
test_labels = to_categorical(test_labels, 10)    
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
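
As a quick sanity check of the preprocessing above (a sketch, not part of the original run): after expand_dims the images gain a trailing channel axis, and after to_categorical the labels become 10-dimensional one-hot vectors.

# Sketch: confirm shapes and value range after preprocessing
print(train_images.shape)  # (60000, 28, 28, 1)
print(test_images.shape)   # (10000, 28, 28, 1)
print(train_labels.shape)  # (60000, 10)
print(train_images.min(), train_images.max())  # 0.0 1.0 after / 255. normalization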

4. Model Function

In [0]:
# Compose the layers of a Functional model
def create_model():
    inputs = keras.Input(shape=(28, 28, 1))
    conv1 = keras.layers.Conv2D(filters=32, kernel_size=[3, 3], padding='SAME', activation=tf.nn.relu)(inputs)
    pool1 = keras.layers.MaxPool2D(padding='SAME')(conv1)
    conv2 = keras.layers.Conv2D(filters=64, kernel_size=[3, 3], padding='SAME', activation=tf.nn.relu)(pool1)
    pool2 = keras.layers.MaxPool2D(padding='SAME')(conv2)
    conv3 = keras.layers.Conv2D(filters=128, kernel_size=[3, 3], padding='SAME', activation=tf.nn.relu)(pool2)
    pool3 = keras.layers.MaxPool2D(padding='SAME')(conv3)
    pool3_flat = keras.layers.Flatten()(pool3)
    dense4 = keras.layers.Dense(units=256, activation=tf.nn.relu)(pool3_flat)
    drop4 = keras.layers.Dropout(rate=0.4)(dense4)
    logits = keras.layers.Dense(units=10, activation=tf.nn.softmax)(drop4)
    return keras.Model(inputs=inputs, outputs=logits)
In [0]:
model = create_model() # build the model by calling the model function
model.summary() # print a summary of the model architecture
Model: "model_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_3 (InputLayer)         [(None, 28, 28, 1)]       0         
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 28, 28, 32)        320       
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 14, 14, 32)        0         
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 14, 14, 64)        18496     
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 7, 7, 64)          0         
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 7, 7, 128)         73856     
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 4, 4, 128)         0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 2048)              0         
_________________________________________________________________
dense_4 (Dense)              (None, 256)               524544    
_________________________________________________________________
dropout_2 (Dropout)          (None, 256)               0         
_________________________________________________________________
dense_5 (Dense)              (None, 10)                2570      
=================================================================
Total params: 619,786
Trainable params: 619,786
Non-trainable params: 0
_________________________________________________________________
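
The Param # column can be verified by hand: a Conv2D layer holds kernel_h * kernel_w * in_channels * filters weights plus filters biases, and a Dense layer holds in_units * out_units weights plus out_units biases. A short check (a sketch reproducing the summary, not original notebook code):

# Sketch: recompute the Param # column from the summary above
conv1  = 3 * 3 * 1 * 32 + 32      # 320
conv2  = 3 * 3 * 32 * 64 + 64     # 18,496
conv3  = 3 * 3 * 64 * 128 + 128   # 73,856
dense4 = 2048 * 256 + 256         # 524,544 (flatten: 4 * 4 * 128 = 2048)
dense5 = 256 * 10 + 10            # 2,570
print(conv1 + conv2 + conv3 + dense4 + dense5)  # 619786, matching Total params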

5. Training

In [0]:
# Finalize the CNN model structure and compile it
model.compile(loss='categorical_crossentropy',      # cross-entropy loss
              optimizer='adam',                      # Adam optimizer (the string form uses the Keras default learning rate)
              metrics=['accuracy'])                  # metric: accuracy

# Run the training
model.fit(train_images, train_labels,                # training images and labels
          batch_size=batch_size,                      # 100 samples per batch
          epochs=training_epochs,                     # train for 15 epochs
          verbose=1,                                  # verbose controls the log output printed during training
          validation_data=(test_images, test_labels)) # use the test set as validation data

score = model.evaluate(test_images, test_labels, verbose=0) # evaluate on the test set
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Train on 60000 samples, validate on 10000 samples
Epoch 1/15
60000/60000 [==============================] - 97s 2ms/sample - loss: 0.2000 - accuracy: 0.9357 - val_loss: 0.0371 - val_accuracy: 0.9878
Epoch 2/15
60000/60000 [==============================] - 97s 2ms/sample - loss: 0.0572 - accuracy: 0.9834 - val_loss: 0.0362 - val_accuracy: 0.9884
Epoch 3/15
60000/60000 [==============================] - 97s 2ms/sample - loss: 0.0389 - accuracy: 0.9883 - val_loss: 0.0262 - val_accuracy: 0.9912
Epoch 4/15
60000/60000 [==============================] - 97s 2ms/sample - loss: 0.0305 - accuracy: 0.9905 - val_loss: 0.0217 - val_accuracy: 0.9925
Epoch 5/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0232 - accuracy: 0.9924 - val_loss: 0.0231 - val_accuracy: 0.9917
Epoch 6/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0200 - accuracy: 0.9936 - val_loss: 0.0244 - val_accuracy: 0.9922
Epoch 7/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0169 - accuracy: 0.9948 - val_loss: 0.0213 - val_accuracy: 0.9928
Epoch 8/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0151 - accuracy: 0.9953 - val_loss: 0.0243 - val_accuracy: 0.9933
Epoch 9/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0129 - accuracy: 0.9960 - val_loss: 0.0251 - val_accuracy: 0.9927
Epoch 10/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0128 - accuracy: 0.9959 - val_loss: 0.0276 - val_accuracy: 0.9920
Epoch 11/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0105 - accuracy: 0.9966 - val_loss: 0.0285 - val_accuracy: 0.9914
Epoch 12/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0094 - accuracy: 0.9967 - val_loss: 0.0246 - val_accuracy: 0.9928
Epoch 13/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0077 - accuracy: 0.9975 - val_loss: 0.0260 - val_accuracy: 0.9929
Epoch 14/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0092 - accuracy: 0.9971 - val_loss: 0.0229 - val_accuracy: 0.9937
Epoch 15/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0068 - accuracy: 0.9977 - val_loss: 0.0242 - val_accuracy: 0.9936
Test loss: 0.02418438413054219
Test accuracy: 0.9936
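
With training complete, the model can be used for inference. A minimal sketch (the slicing and variable names here are illustrative, not from the original notebook):

# Sketch: predict the first five test digits and compare with the true labels
predictions = model.predict(test_images[:5])  # (5, 10) softmax probabilities
predicted = np.argmax(predictions, axis=-1)   # most probable digit per image
actual = np.argmax(test_labels[:5], axis=-1)  # recover labels from one-hot
for p, a in zip(predicted, actual):
    print('predicted:', class_names[p], '/ actual:', class_names[a])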
