CNN(subclassing)_GradientTape

Training a CNN subclassing model

Source: https://github.com/deeplearningzerotoall/TensorFlow (Deep Learning Zero to All)

In [0]:
# Runtime -> Change runtime type -> set the hardware accelerator to TPU
%tensorflow_version 2.x
# Runtime -> Restart runtime
TensorFlow 2.x selected.
In [0]:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

1. Importing Libraries

In [0]:
import tensorflow.compat.v1 as tf # exposes the TensorFlow 1.x API
from tensorflow import keras
from tensorflow.keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
import os

print(tf.__version__)     # check the TensorFlow version (Colab's default is 1.15.0) --> switched to 2.x via "%tensorflow_version 2.x"
print(keras.__version__)  # check the Keras version
2.1.0-rc1
2.2.4-tf

2. Enable Eager Mode

In [0]:
# switch from graph-based mode to eager execution mode
tf.enable_eager_execution()
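
As a quick sanity check (a minimal sketch, not part of the original notebook; tf.executing_eagerly is a standard TensorFlow API), you can confirm that eager mode is active:

print(tf.executing_eagerly()) # prints True once eager execution is enabled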

3. Hyper Parameters

In [0]:
learning_rate = 0.001  # learning rate
training_epochs = 15   # number of epochs
batch_size = 100       # batch size

tf.set_random_seed(777)  # fix the random seed for reproducibility

4. Creating a Checkpoint Directory

In [0]:
cur_dir = os.getcwd()            # current working directory
ckpt_dir_name = 'checkpoints'    # checkpoint directory name
model_dir_name = 'minst_cnn_seq' # model name

checkpoint_dir = os.path.join(cur_dir, ckpt_dir_name, model_dir_name) # full checkpoint path
os.makedirs(checkpoint_dir, exist_ok=True) # create the directory if it does not exist

checkpoint_prefix = os.path.join(checkpoint_dir, model_dir_name) # filename prefix for saved checkpoints

5. MNIST/Fashion MNIST Data

In [0]:
## MNIST Dataset #########################################################
mnist = keras.datasets.mnist
class_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
##########################################################################

## Fashion MNIST Dataset #################################################
#mnist = keras.datasets.fashion_mnist
#class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
##########################################################################

6. Datasets

In [0]:
# MNIST image load (train, test)
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# normalize the 0~255 pixel values into the [0, 1] range
train_images = train_images.astype(np.float32) / 255.
test_images = test_images.astype(np.float32) / 255.

# np.expand_dims adds a channel dimension: (28, 28) -> (28, 28, 1)
train_images = np.expand_dims(train_images, axis=-1)
test_images = np.expand_dims(test_images, axis=-1)

# one-hot encode the labels
train_labels = to_categorical(train_labels, 10)
test_labels = to_categorical(test_labels, 10)

# build tf.data.Dataset instances
train_dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels)).shuffle(
                buffer_size=100000).batch(batch_size)
test_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(batch_size)
# from_tensor_slices : slices the arrays into (image, label) pairs
# batch : groups the examples into batches of batch_size
# shuffle : reshuffles up to buffer_size examples every epoch, which helps reduce overfitting
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
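
To see what the pipeline yields, a minimal sketch (Dataset.take is a standard tf.data method; this cell is not in the original notebook) pulls a single batch and prints its shapes:

for images, labels in train_dataset.take(1): # grab one batch
    print(images.shape, labels.shape)        # (100, 28, 28, 1) (100, 10)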

7. Model Class

In [0]:
# implement the model as a class
class MNISTModel(tf.keras.Model): # subclass tf.keras.Model
    def __init__(self): # define the layers that make up the model
        super(MNISTModel, self).__init__()
        self.conv1 = keras.layers.Conv2D(filters=32, kernel_size=[3, 3], padding='SAME', activation=tf.nn.relu)
        self.pool1 = keras.layers.MaxPool2D(padding='SAME')
        self.conv2 = keras.layers.Conv2D(filters=64, kernel_size=[3, 3], padding='SAME', activation=tf.nn.relu)
        self.pool2 = keras.layers.MaxPool2D(padding='SAME')
        self.conv3 = keras.layers.Conv2D(filters=128, kernel_size=[3, 3], padding='SAME', activation=tf.nn.relu)
        self.pool3 = keras.layers.MaxPool2D(padding='SAME')
        self.pool3_flat = keras.layers.Flatten()
        self.dense4 = keras.layers.Dense(units=256, activation=tf.nn.relu)
        self.drop4 = keras.layers.Dropout(rate=0.4)
        self.dense5 = keras.layers.Dense(units=10)
    def call(self, inputs, training=False):  # wire the layers defined in __init__ into a network
        net = self.conv1(inputs)
        net = self.pool1(net)
        net = self.conv2(net)
        net = self.pool2(net)
        net = self.conv3(net)
        net = self.pool3(net)
        net = self.pool3_flat(net)
        net = self.dense4(net)
        net = self.drop4(net)
        net = self.dense5(net)
        return net
In [0]:
model = MNISTModel() # instantiate the model
temp_inputs = keras.Input(shape=(28, 28, 1)) # model input image size
model(temp_inputs) # call the model once so its layers are built
model.summary() # print a summary of the model
Model: "mnist_model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 28, 28, 32)        320       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 32)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 14, 14, 64)        18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 7, 7, 64)          0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 7, 7, 128)         73856     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 4, 4, 128)         0         
_________________________________________________________________
flatten (Flatten)            (None, 2048)              0         
_________________________________________________________________
dense (Dense)                (None, 256)               524544    
_________________________________________________________________
dropout (Dropout)            (None, 256)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 10)                2570      
=================================================================
Total params: 619,786
Trainable params: 619,786
Non-trainable params: 0
_________________________________________________________________

8. Loss Function

In [0]:
def loss_fn(model, images, labels):
    logits = model(images, training=True)
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2( # applies softmax to the logits internally
            logits=logits, labels=labels))    
    return loss   
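
Equivalently (a hedged sketch, not part of the original notebook), the same loss can be written with the Keras loss object; from_logits=True tells Keras to apply the softmax itself:

cce = keras.losses.CategoricalCrossentropy(from_logits=True) # softmax applied inside the loss

def loss_fn_keras(model, images, labels):
    logits = model(images, training=True) # raw, un-normalized scores
    return cce(labels, logits)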

9. Calculating Gradient

In [0]:
def grad(model, images, labels):
    with tf.GradientTape() as tape: # records every operation run in its scope for automatic differentiation
        loss = loss_fn(model, images, labels)
    return tape.gradient(loss, model.variables)
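
model.variables works here because every variable in this model is trainable; a more idiomatic form (a sketch of our own, not the original code) differentiates with respect to model.trainable_variables and returns the loss alongside the gradients:

def grad_with_loss(model, images, labels):
    with tf.GradientTape() as tape:
        loss = loss_fn(model, images, labels)
    return loss, tape.gradient(loss, model.trainable_variables) # gradients of trainable weights only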

10. Calculating Model's Accuracy

In [0]:
def evaluate(model, images, labels):
    logits = model(images, training=False)
    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1)) # compare predicted labels against the true labels
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # fraction of predictions that match
    return accuracy
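
Keras also ships a streaming metric that accumulates accuracy across batches (a hedged alternative sketch; keras.metrics.CategoricalAccuracy is a standard Keras API, but this cell is not in the original notebook):

acc_metric = keras.metrics.CategoricalAccuracy() # running accuracy over one-hot labels
acc_metric.update_state(test_labels[:100], model(test_images[:100], training=False))
print(acc_metric.result().numpy())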

11. Optimizer

In [0]:
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)

12. Creating a Checkpoint

In [0]:
checkpoint = tf.train.Checkpoint(cnn=model) # groups trackable objects so they can be saved and restored later
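
To restore a saved checkpoint later (a hedged sketch, not part of the original notebook; tf.train.latest_checkpoint is a standard TensorFlow API):

latest = tf.train.latest_checkpoint(checkpoint_dir) # path of the most recent save, or None
if latest:
    checkpoint.restore(latest) # load the saved weights back into the model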

13. Training

In [0]:
# train my model
print('Learning started. It takes some time.')
for epoch in range(training_epochs): # set above as a hyperparameter (training_epochs = 15)
    
    # reset the running totals
    avg_loss = 0.
    avg_train_acc = 0.
    avg_test_acc = 0.
    train_step = 0
    test_step = 0
    
    # Training
    for images, labels in train_dataset:
        grads = grad(model, images, labels)                     # gradients computed via GradientTape
        optimizer.apply_gradients(zip(grads, model.variables))  # apply the gradients to the weights
        loss = loss_fn(model, images, labels)                   # loss for this batch
        acc = evaluate(model, images, labels)                   # accuracy for this batch
        avg_loss = avg_loss + loss                              # accumulate the loss
        avg_train_acc = avg_train_acc + acc                     # accumulate the accuracy
        train_step += 1                                         # count the steps in this epoch
    avg_loss = avg_loss / train_step                            # average loss over the epoch
    avg_train_acc = avg_train_acc / train_step                  # average training accuracy
    
    # Test
    for images, labels in test_dataset:        
        acc = evaluate(model, images, labels)                  # accuracy for this batch
        avg_test_acc = avg_test_acc + acc                      # accumulate the accuracy
        test_step += 1                                         # count the steps
    avg_test_acc = avg_test_acc / test_step                    # average test accuracy

    # print the loss and accuracy for each epoch
    print('Epoch:', '{}'.format(epoch + 1), 'loss =', '{:.8f}'.format(avg_loss), 
          'train accuracy = ', '{:.4f}'.format(avg_train_acc), 
          'test accuracy = ', '{:.4f}'.format(avg_test_acc))
    
    # save the model's weights for this epoch
    checkpoint.save(file_prefix=checkpoint_prefix)

print('Learning Finished!')
Learning started. It takes some time.
Epoch: 1 loss = 0.17350215 train accuracy =  0.9587 test accuracy =  0.9851
Epoch: 2 loss = 0.04424703 train accuracy =  0.9906 test accuracy =  0.9905
Epoch: 3 loss = 0.03116141 train accuracy =  0.9932 test accuracy =  0.9926
Epoch: 4 loss = 0.02212973 train accuracy =  0.9958 test accuracy =  0.9919
Epoch: 5 loss = 0.01829735 train accuracy =  0.9967 test accuracy =  0.9924
Epoch: 6 loss = 0.01512399 train accuracy =  0.9975 test accuracy =  0.9937
Epoch: 7 loss = 0.01169200 train accuracy =  0.9980 test accuracy =  0.9921
Epoch: 8 loss = 0.01036383 train accuracy =  0.9984 test accuracy =  0.9913
Epoch: 9 loss = 0.00851057 train accuracy =  0.9987 test accuracy =  0.9913
Epoch: 10 loss = 0.00819825 train accuracy =  0.9987 test accuracy =  0.9938
Epoch: 11 loss = 0.00700515 train accuracy =  0.9990 test accuracy =  0.9931
Epoch: 12 loss = 0.00609655 train accuracy =  0.9991 test accuracy =  0.9934
Epoch: 13 loss = 0.00584066 train accuracy =  0.9993 test accuracy =  0.9928
Epoch: 14 loss = 0.00519815 train accuracy =  0.9993 test accuracy =  0.9932
Epoch: 15 loss = 0.00527517 train accuracy =  0.9993 test accuracy =  0.9930
Learning Finished!
CNN(subclassing)_model_fit

Training a CNN subclassing model

In [0]:
# Runtime -> Change runtime type -> set the hardware accelerator to TPU
%tensorflow_version 2.x
# Runtime -> Restart runtime
TensorFlow 2.x selected.
In [0]:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

1. Importing Libraries

In [0]:
import tensorflow as tf 
from tensorflow import keras
from tensorflow.keras.utils import to_categorical # one-hot encoding
import numpy as np
import matplotlib.pyplot as plt
import os

print(tf.__version__)     # check the TensorFlow version (Colab's default is 1.15.0) --> switched to 2.x via "%tensorflow_version 2.x"
print(keras.__version__)  # check the Keras version
2.1.0-rc1
2.2.4-tf

2. Hyper Parameters

In [0]:
learning_rate = 0.001
training_epochs = 15
batch_size = 100

3. MNIST Data

In [0]:
mnist = keras.datasets.mnist
class_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
In [0]:
# MNIST image load (train, test)
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# normalize the 0~255 pixel values into the [0, 1] range
train_images = train_images.astype(np.float32) / 255.
test_images = test_images.astype(np.float32) / 255.

# np.expand_dims adds a channel dimension: (28, 28) -> (28, 28, 1)
train_images = np.expand_dims(train_images, axis=-1)
test_images = np.expand_dims(test_images, axis=-1)

# one-hot encode the labels
train_labels = to_categorical(train_labels, 10)
test_labels = to_categorical(test_labels, 10) 
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
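
For instance (a small illustrative sketch, not part of the original notebook), the label 3 becomes a length-10 vector with a single 1:

print(to_categorical([3], 10)) # [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]]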

4. Model Class

In [0]:
# implement the model as a class
class MNISTModel(tf.keras.Model): # subclass tf.keras.Model
    def __init__(self):  # define the layers that make up the model
        # call the parent constructor
        super(MNISTModel, self).__init__() 
        # initialize the layers
        self.conv1 = keras.layers.Conv2D(filters=32, kernel_size=[3, 3], padding='SAME', activation=tf.nn.relu)
        self.pool1 = keras.layers.MaxPool2D(padding='SAME')
        self.conv2 = keras.layers.Conv2D(filters=64, kernel_size=[3, 3], padding='SAME', activation=tf.nn.relu)
        self.pool2 = keras.layers.MaxPool2D(padding='SAME')
        self.conv3 = keras.layers.Conv2D(filters=128, kernel_size=[3, 3], padding='SAME', activation=tf.nn.relu)
        self.pool3 = keras.layers.MaxPool2D(padding='SAME')
        self.pool3_flat = keras.layers.Flatten()
        self.dense4 = keras.layers.Dense(units=256, activation=tf.nn.relu)
        self.drop4 = keras.layers.Dropout(rate=0.4)
        self.dense5 = keras.layers.Dense(units=10, activation=tf.nn.softmax) # softmax output: class probabilities
    def call(self, inputs, training=False):  # wire the layers defined in __init__ into a network
        net = self.conv1(inputs)
        net = self.pool1(net)
        net = self.conv2(net)
        net = self.pool2(net)
        net = self.conv3(net)
        net = self.pool3(net)
        net = self.pool3_flat(net)
        net = self.dense4(net)
        net = self.drop4(net)
        net = self.dense5(net)
        return net
In [0]:
model = MNISTModel() # instantiate the model
temp_inputs = keras.Input(shape=(28, 28, 1)) # model input image size
model(temp_inputs) # call the model once so its layers are built
model.summary() # print a summary of the model
Model: "mnist_model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 28, 28, 32)        320       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 32)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 14, 14, 64)        18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 7, 7, 64)          0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 7, 7, 128)         73856     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 4, 4, 128)         0         
_________________________________________________________________
flatten (Flatten)            (None, 2048)              0         
_________________________________________________________________
dense (Dense)                (None, 256)               524544    
_________________________________________________________________
dropout (Dropout)            (None, 256)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 10)                2570      
=================================================================
Total params: 619,786
Trainable params: 619,786
Non-trainable params: 0
_________________________________________________________________

5. Training

In [0]:
# finalize the CNN architecture and compile it
model.compile(loss='categorical_crossentropy',      # cross-entropy loss
              optimizer='adam',                      # Adam optimizer
              metrics=['accuracy'])                  # metric to track: accuracy

# run training
model.fit(train_images, train_labels,                # training inputs
          batch_size=batch_size,                      # 100 samples per batch
          epochs=training_epochs,                     # train for 15 epochs
          verbose=1,                                  # verbose controls the progress output during training
          validation_data=(test_images, test_labels)) # use the test set as validation data

score = model.evaluate(test_images, test_labels, verbose=0) # evaluate on the test set
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Train on 60000 samples, validate on 10000 samples
Epoch 1/15
60000/60000 [==============================] - 100s 2ms/sample - loss: 0.1975 - accuracy: 0.9377 - val_loss: 0.0523 - val_accuracy: 0.9822
Epoch 2/15
60000/60000 [==============================] - 99s 2ms/sample - loss: 0.0535 - accuracy: 0.9834 - val_loss: 0.0301 - val_accuracy: 0.9901
Epoch 3/15
60000/60000 [==============================] - 98s 2ms/sample - loss: 0.0414 - accuracy: 0.9876 - val_loss: 0.0255 - val_accuracy: 0.9910
Epoch 4/15
60000/60000 [==============================] - 98s 2ms/sample - loss: 0.0293 - accuracy: 0.9909 - val_loss: 0.0324 - val_accuracy: 0.9900
Epoch 5/15
60000/60000 [==============================] - 98s 2ms/sample - loss: 0.0252 - accuracy: 0.9920 - val_loss: 0.0236 - val_accuracy: 0.9916
Epoch 6/15
60000/60000 [==============================] - 98s 2ms/sample - loss: 0.0198 - accuracy: 0.9937 - val_loss: 0.0248 - val_accuracy: 0.9918
Epoch 7/15
60000/60000 [==============================] - 98s 2ms/sample - loss: 0.0177 - accuracy: 0.9941 - val_loss: 0.0250 - val_accuracy: 0.9924
Epoch 8/15
60000/60000 [==============================] - 98s 2ms/sample - loss: 0.0146 - accuracy: 0.9950 - val_loss: 0.0247 - val_accuracy: 0.9930
Epoch 9/15
60000/60000 [==============================] - 98s 2ms/sample - loss: 0.0133 - accuracy: 0.9955 - val_loss: 0.0234 - val_accuracy: 0.9929
Epoch 10/15
60000/60000 [==============================] - 98s 2ms/sample - loss: 0.0131 - accuracy: 0.9956 - val_loss: 0.0306 - val_accuracy: 0.9905
Epoch 11/15
60000/60000 [==============================] - 98s 2ms/sample - loss: 0.0111 - accuracy: 0.9964 - val_loss: 0.0234 - val_accuracy: 0.9941
Epoch 12/15
60000/60000 [==============================] - 98s 2ms/sample - loss: 0.0082 - accuracy: 0.9973 - val_loss: 0.0319 - val_accuracy: 0.9920
Epoch 13/15
60000/60000 [==============================] - 98s 2ms/sample - loss: 0.0090 - accuracy: 0.9973 - val_loss: 0.0233 - val_accuracy: 0.9939
Epoch 14/15
60000/60000 [==============================] - 99s 2ms/sample - loss: 0.0070 - accuracy: 0.9976 - val_loss: 0.0270 - val_accuracy: 0.9934
Epoch 15/15
60000/60000 [==============================] - 98s 2ms/sample - loss: 0.0069 - accuracy: 0.9977 - val_loss: 0.0393 - val_accuracy: 0.9904
Test loss: 0.03933068524470041
Test accuracy: 0.9904
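
matplotlib is imported above but never used; as a hedged sketch (relying on the standard History object that model.fit returns, not code from the original notebook), the learning curves could be plotted like this:

history = model.fit(train_images, train_labels, batch_size=batch_size,
                    epochs=training_epochs, validation_data=(test_images, test_labels))
plt.plot(history.history['accuracy'], label='train')        # per-epoch training accuracy
plt.plot(history.history['val_accuracy'], label='validation')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()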
CNN(functional)_GradientTape

Training a CNN with the functional API

Source: https://github.com/deeplearningzerotoall/TensorFlow (Deep Learning Zero to All)

In [0]:
# Runtime -> Change runtime type -> set the hardware accelerator to TPU
%tensorflow_version 2.x
# Runtime -> Restart runtime
TensorFlow 2.x selected.
In [0]:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

1. Importing Libraries

In [0]:
import tensorflow.compat.v1 as tf # exposes the TensorFlow 1.x API
from tensorflow import keras
from tensorflow.keras.utils import to_categorical # one-hot encoding
import numpy as np
import matplotlib.pyplot as plt
import os

print(tf.__version__)     # check the TensorFlow version (Colab's default is 1.15.0) --> switched to 2.x via "%tensorflow_version 2.x"
print(keras.__version__)  # check the Keras version
2.1.0-rc1
2.2.4-tf

2. Enable Eager Mode

In [0]:
# switch from graph-based mode to eager execution mode
tf.enable_eager_execution()

3. Hyper Parameters

In [0]:
learning_rate = 0.001
training_epochs = 15
batch_size = 100

tf.set_random_seed(777)

4. Creating a Checkpoint Directory

In [0]:
cur_dir = os.getcwd()            # current working directory
ckpt_dir_name = 'checkpoints'    # checkpoint directory name
model_dir_name = 'minst_cnn_seq' # model name

checkpoint_dir = os.path.join(cur_dir, ckpt_dir_name, model_dir_name) # full checkpoint path
os.makedirs(checkpoint_dir, exist_ok=True) # create the directory if it does not exist

checkpoint_prefix = os.path.join(checkpoint_dir, model_dir_name) # filename prefix for saved checkpoints

5. MNIST/Fashion MNIST Data

In [0]:
## MNIST Dataset #########################################################
mnist = keras.datasets.mnist
class_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
##########################################################################

## Fashion MNIST Dataset #################################################
#mnist = keras.datasets.fashion_mnist
#class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
##########################################################################

6. Datasets

In [0]:
# MNIST image load (train, test)
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# normalize the 0~255 pixel values into the [0, 1] range
train_images = train_images.astype(np.float32) / 255.
test_images = test_images.astype(np.float32) / 255.

# np.expand_dims adds a channel dimension: (28, 28) -> (28, 28, 1)
train_images = np.expand_dims(train_images, axis=-1)
test_images = np.expand_dims(test_images, axis=-1)

# one-hot encode the labels
train_labels = to_categorical(train_labels, 10)
test_labels = to_categorical(test_labels, 10)

# build tf.data.Dataset instances
train_dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels)).shuffle(
                buffer_size=100000).batch(batch_size)
test_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(batch_size)
# from_tensor_slices : slices the arrays into (image, label) pairs
# batch : groups the examples into batches of batch_size
# shuffle : reshuffles up to buffer_size examples every epoch, which helps reduce overfitting

7. Model Function

In [0]:
# build the model layers with the Keras functional API
def create_model():
    inputs = keras.Input(shape=(28, 28, 1))
    conv1 = keras.layers.Conv2D(filters=32, kernel_size=[3, 3], padding='SAME', activation=tf.nn.relu)(inputs)
    pool1 = keras.layers.MaxPool2D(padding='SAME')(conv1)
    conv2 = keras.layers.Conv2D(filters=64, kernel_size=[3, 3], padding='SAME', activation=tf.nn.relu)(pool1)
    pool2 = keras.layers.MaxPool2D(padding='SAME')(conv2)
    conv3 = keras.layers.Conv2D(filters=128, kernel_size=[3, 3], padding='SAME', activation=tf.nn.relu)(pool2)
    pool3 = keras.layers.MaxPool2D(padding='SAME')(conv3)
    pool3_flat = keras.layers.Flatten()(pool3)
    dense4 = keras.layers.Dense(units=256, activation=tf.nn.relu)(pool3_flat)
    drop4 = keras.layers.Dropout(rate=0.4)(dense4)
    logits = keras.layers.Dense(units=10)(drop4)
    return keras.Model(inputs=inputs, outputs=logits)
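
One convenience of the functional API (a hedged aside; keras.utils.plot_model is a standard Keras utility, but it requires pydot and graphviz and is not part of the original notebook) is that the layer graph can be rendered directly:

keras.utils.plot_model(create_model(), show_shapes=True) # draws the layer graph with output shapes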
In [0]:
model = create_model() # build the model
model.summary() # print a summary of the model
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, 28, 28, 1)]       0         
_________________________________________________________________
conv2d (Conv2D)              (None, 28, 28, 32)        320       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 32)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 14, 14, 64)        18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 7, 7, 64)          0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 7, 7, 128)         73856     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 4, 4, 128)         0         
_________________________________________________________________
flatten (Flatten)            (None, 2048)              0         
_________________________________________________________________
dense (Dense)                (None, 256)               524544    
_________________________________________________________________
dropout (Dropout)            (None, 256)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 10)                2570      
=================================================================
Total params: 619,786
Trainable params: 619,786
Non-trainable params: 0
_________________________________________________________________

8. Loss Function

In [0]:
def loss_fn(model, images, labels):
    logits = model(images, training=True)
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(
            logits=logits, labels=labels))    
    return loss   

9. Calculating Gradient

In [0]:
def grad(model, images, labels):
    with tf.GradientTape() as tape: # records every operation run in its scope for automatic differentiation
        loss = loss_fn(model, images, labels)
    return tape.gradient(loss, model.variables)

10. Calculating Model's Accuracy

In [0]:
def evaluate(model, images, labels):
    logits = model(images, training=False)
    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1)) # compare predicted labels against the true labels
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # fraction of predictions that match
    return accuracy

11. Optimizer

In [0]:
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) # learning_rate set above as a hyperparameter (0.001)

12. Creating a Checkpoint

In [0]:
checkpoint = tf.train.Checkpoint(cnn=model) # groups trackable objects so they can be saved and restored later

13. Training

In [0]:
# train my model
print('Learning started. It takes some time.')
for epoch in range(training_epochs): # set above as a hyperparameter (training_epochs = 15)

    # reset the running totals
    avg_loss = 0.
    avg_train_acc = 0.
    avg_test_acc = 0.
    train_step = 0
    test_step = 0
    
    # Training
    for images, labels in train_dataset:
        grads = grad(model, images, labels)                     # gradients computed via GradientTape
        optimizer.apply_gradients(zip(grads, model.variables))  # apply the gradients to the weights
        loss = loss_fn(model, images, labels)                   # loss for this batch
        acc = evaluate(model, images, labels)                   # accuracy for this batch
        avg_loss = avg_loss + loss                              # accumulate the loss
        avg_train_acc = avg_train_acc + acc                     # accumulate the accuracy
        train_step += 1                                         # count the steps in this epoch
    avg_loss = avg_loss / train_step                            # average loss over the epoch
    avg_train_acc = avg_train_acc / train_step                  # average training accuracy
    
    # Test
    for images, labels in test_dataset:        
        acc = evaluate(model, images, labels)                  # accuracy for this batch
        avg_test_acc = avg_test_acc + acc                      # accumulate the accuracy
        test_step += 1                                         # count the steps
    avg_test_acc = avg_test_acc / test_step                    # average test accuracy

    # print the loss and accuracy for each epoch
    print('Epoch:', '{}'.format(epoch + 1), 'loss =', '{:.8f}'.format(avg_loss), 
          'train accuracy = ', '{:.4f}'.format(avg_train_acc), 
          'test accuracy = ', '{:.4f}'.format(avg_test_acc))
    
    # save the model's weights for this epoch
    checkpoint.save(file_prefix=checkpoint_prefix)

print('Learning Finished!')
Learning started. It takes some time.
Epoch: 1 loss = 0.18117337 train accuracy =  0.9556 test accuracy =  0.9855
Epoch: 2 loss = 0.04840804 train accuracy =  0.9892 test accuracy =  0.9897
Epoch: 3 loss = 0.03126438 train accuracy =  0.9933 test accuracy =  0.9896
Epoch: 4 loss = 0.02329855 train accuracy =  0.9954 test accuracy =  0.9911
Epoch: 5 loss = 0.01931384 train accuracy =  0.9962 test accuracy =  0.9910
Epoch: 6 loss = 0.01544751 train accuracy =  0.9973 test accuracy =  0.9930
Epoch: 7 loss = 0.01314946 train accuracy =  0.9979 test accuracy =  0.9925
Epoch: 8 loss = 0.01103821 train accuracy =  0.9984 test accuracy =  0.9932
Epoch: 9 loss = 0.00946009 train accuracy =  0.9986 test accuracy =  0.9929
Epoch: 10 loss = 0.00834494 train accuracy =  0.9987 test accuracy =  0.9934
Epoch: 11 loss = 0.00737364 train accuracy =  0.9991 test accuracy =  0.9929
Epoch: 12 loss = 0.00642006 train accuracy =  0.9993 test accuracy =  0.9936
Epoch: 13 loss = 0.00624036 train accuracy =  0.9991 test accuracy =  0.9927
Epoch: 14 loss = 0.00453461 train accuracy =  0.9996 test accuracy =  0.9930
Epoch: 15 loss = 0.00536788 train accuracy =  0.9995 test accuracy =  0.9942
Learning Finished!
CNN(functional)_model_fit

Training a CNN with the functional API

In [0]:
# Runtime -> Change runtime type -> set the hardware accelerator to TPU
%tensorflow_version 2.x
# Runtime -> Restart runtime
TensorFlow 2.x selected.
In [0]:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

1. Importing Libraries

In [0]:
import tensorflow as tf 
from tensorflow import keras
from tensorflow.keras.utils import to_categorical # one-hot encoding
import numpy as np
import matplotlib.pyplot as plt
import os

print(tf.__version__)     # check the TensorFlow version (Colab's default is 1.15.0) --> switched to 2.x via "%tensorflow_version 2.x"
print(keras.__version__)  # check the Keras version
2.1.0-rc1
2.2.4-tf

2. Hyper Parameters

In [0]:
learning_rate = 0.001
training_epochs = 15
batch_size = 100

3. MNIST Data

In [0]:
mnist = keras.datasets.mnist
class_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
In [0]:
# MNIST image load (train, test)
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# normalize the 0~255 pixel values into the [0, 1] range
train_images = train_images.astype(np.float32) / 255.
test_images = test_images.astype(np.float32) / 255.

# np.expand_dims adds a channel dimension: (28, 28) -> (28, 28, 1)
train_images = np.expand_dims(train_images, axis=-1)
test_images = np.expand_dims(test_images, axis=-1)

# one-hot encode the labels
train_labels = to_categorical(train_labels, 10)
test_labels = to_categorical(test_labels, 10)    
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step

4. Model Function

In [0]:
# build the model layers with the Keras functional API
def create_model():
    inputs = keras.Input(shape=(28, 28, 1))
    conv1 = keras.layers.Conv2D(filters=32, kernel_size=[3, 3], padding='SAME', activation=tf.nn.relu)(inputs)
    pool1 = keras.layers.MaxPool2D(padding='SAME')(conv1)
    conv2 = keras.layers.Conv2D(filters=64, kernel_size=[3, 3], padding='SAME', activation=tf.nn.relu)(pool1)
    pool2 = keras.layers.MaxPool2D(padding='SAME')(conv2)
    conv3 = keras.layers.Conv2D(filters=128, kernel_size=[3, 3], padding='SAME', activation=tf.nn.relu)(pool2)
    pool3 = keras.layers.MaxPool2D(padding='SAME')(conv3)
    pool3_flat = keras.layers.Flatten()(pool3)
    dense4 = keras.layers.Dense(units=256, activation=tf.nn.relu)(pool3_flat)
    drop4 = keras.layers.Dropout(rate=0.4)(dense4)
    probs = keras.layers.Dense(units=10, activation=tf.nn.softmax)(drop4) # softmax output: class probabilities, not logits
    return keras.Model(inputs=inputs, outputs=probs)
In [0]:
model = create_model() # build the model
model.summary() # print a summary of the model
Model: "model_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_3 (InputLayer)         [(None, 28, 28, 1)]       0         
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 28, 28, 32)        320       
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 14, 14, 32)        0         
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 14, 14, 64)        18496     
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 7, 7, 64)          0         
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 7, 7, 128)         73856     
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 4, 4, 128)         0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 2048)              0         
_________________________________________________________________
dense_4 (Dense)              (None, 256)               524544    
_________________________________________________________________
dropout_2 (Dropout)          (None, 256)               0         
_________________________________________________________________
dense_5 (Dense)              (None, 10)                2570      
=================================================================
Total params: 619,786
Trainable params: 619,786
Non-trainable params: 0
_________________________________________________________________

5. Training

In [0]:
# finalize the CNN architecture and compile it
model.compile(loss='categorical_crossentropy',      # cross-entropy loss
              optimizer='adam',                      # Adam optimizer
              metrics=['accuracy'])                  # metric to track: accuracy

# run training
model.fit(train_images, train_labels,                # training inputs
          batch_size=batch_size,                      # 100 samples per batch
          epochs=training_epochs,                     # train for 15 epochs
          verbose=1,                                  # verbose controls the progress output during training
          validation_data=(test_images, test_labels)) # use the test set as validation data

score = model.evaluate(test_images, test_labels, verbose=0) # evaluate on the test set
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Train on 60000 samples, validate on 10000 samples
Epoch 1/15
60000/60000 [==============================] - 97s 2ms/sample - loss: 0.2000 - accuracy: 0.9357 - val_loss: 0.0371 - val_accuracy: 0.9878
Epoch 2/15
60000/60000 [==============================] - 97s 2ms/sample - loss: 0.0572 - accuracy: 0.9834 - val_loss: 0.0362 - val_accuracy: 0.9884
Epoch 3/15
60000/60000 [==============================] - 97s 2ms/sample - loss: 0.0389 - accuracy: 0.9883 - val_loss: 0.0262 - val_accuracy: 0.9912
Epoch 4/15
60000/60000 [==============================] - 97s 2ms/sample - loss: 0.0305 - accuracy: 0.9905 - val_loss: 0.0217 - val_accuracy: 0.9925
Epoch 5/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0232 - accuracy: 0.9924 - val_loss: 0.0231 - val_accuracy: 0.9917
Epoch 6/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0200 - accuracy: 0.9936 - val_loss: 0.0244 - val_accuracy: 0.9922
Epoch 7/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0169 - accuracy: 0.9948 - val_loss: 0.0213 - val_accuracy: 0.9928
Epoch 8/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0151 - accuracy: 0.9953 - val_loss: 0.0243 - val_accuracy: 0.9933
Epoch 9/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0129 - accuracy: 0.9960 - val_loss: 0.0251 - val_accuracy: 0.9927
Epoch 10/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0128 - accuracy: 0.9959 - val_loss: 0.0276 - val_accuracy: 0.9920
Epoch 11/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0105 - accuracy: 0.9966 - val_loss: 0.0285 - val_accuracy: 0.9914
Epoch 12/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0094 - accuracy: 0.9967 - val_loss: 0.0246 - val_accuracy: 0.9928
Epoch 13/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0077 - accuracy: 0.9975 - val_loss: 0.0260 - val_accuracy: 0.9929
Epoch 14/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0092 - accuracy: 0.9971 - val_loss: 0.0229 - val_accuracy: 0.9937
Epoch 15/15
60000/60000 [==============================] - 96s 2ms/sample - loss: 0.0068 - accuracy: 0.9977 - val_loss: 0.0242 - val_accuracy: 0.9936
Test loss: 0.02418438413054219
Test accuracy: 0.9936
CNN(sequential)_GradientTape

Training a CNN with the Sequential API

Source: https://github.com/deeplearningzerotoall/TensorFlow (Deep Learning Zero to All)

In [0]:
# Runtime -> Change runtime type -> set the hardware accelerator to TPU
%tensorflow_version 2.x
# Runtime -> Restart runtime
TensorFlow 2.x selected.
In [0]:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

1. Importing Libraries

In [0]:
import tensorflow.compat.v1 as tf # exposes the TensorFlow 1.x API
from tensorflow import keras
from tensorflow.keras.utils import to_categorical # one-hot encoding
import numpy as np
import matplotlib.pyplot as plt
import os

print(tf.__version__)     # check the TensorFlow version (Colab's default is 1.15.0) --> switched to 2.x via "%tensorflow_version 2.x"
print(keras.__version__)  # check the Keras version
2.1.0-rc1
2.2.4-tf

2. Enable Eager Mode

In [0]:
# switch from graph-based mode to eager execution mode
tf.enable_eager_execution()

3. Hyper Parameters

In [0]:
learning_rate = 0.001  # learning rate
training_epochs = 15   # number of epochs
batch_size = 100       # batch size

tf.set_random_seed(777)  # fix the random seed for reproducibility

4. Creating a Checkpoint Directory

In [0]:
cur_dir = os.getcwd()            # current working directory
ckpt_dir_name = 'checkpoints'    # checkpoint directory name
model_dir_name = 'minst_cnn_seq' # model name

checkpoint_dir = os.path.join(cur_dir, ckpt_dir_name, model_dir_name) # full checkpoint path
os.makedirs(checkpoint_dir, exist_ok=True) # create the directory if it does not exist

checkpoint_prefix = os.path.join(checkpoint_dir, model_dir_name) # filename prefix for saved checkpoints

5. MNIST/Fashion MNIST Data

In [0]:
## MNIST Dataset #########################################################
mnist = keras.datasets.mnist
class_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
##########################################################################

## Fashion MNIST Dataset #################################################
#mnist = keras.datasets.fashion_mnist
#class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
##########################################################################

6. Datasets

In [0]:
# MNIST image load (train, test)
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# normalize the 0~255 pixel values into the [0, 1] range
train_images = train_images.astype(np.float32) / 255.
test_images = test_images.astype(np.float32) / 255.

# np.expand_dims adds a channel dimension: (28, 28) -> (28, 28, 1)
train_images = np.expand_dims(train_images, axis=-1)
test_images = np.expand_dims(test_images, axis=-1)

# one-hot encode the labels
train_labels = to_categorical(train_labels, 10)
test_labels = to_categorical(test_labels, 10)

# build tf.data.Dataset instances
train_dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels)).shuffle(
                buffer_size=100000).batch(batch_size)
test_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(batch_size)
# from_tensor_slices : slices the arrays into (image, label) pairs
# batch : groups the examples into batches of batch_size
# shuffle : reshuffles up to buffer_size examples every epoch, which helps reduce overfitting
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 1s 0us/step

7. Model Function

In [0]:
# build the model layers with the Keras Sequential API
def create_model():
    model = keras.Sequential() # start an empty Sequential model
    model.add(keras.layers.Conv2D(filters=32, kernel_size=3, activation=tf.nn.relu, padding='SAME', 
                                  input_shape=(28, 28, 1)))
    model.add(keras.layers.MaxPool2D(padding='SAME'))
    model.add(keras.layers.Conv2D(filters=64, kernel_size=3, activation=tf.nn.relu, padding='SAME'))
    model.add(keras.layers.MaxPool2D(padding='SAME'))
    model.add(keras.layers.Conv2D(filters=128, kernel_size=3, activation=tf.nn.relu, padding='SAME'))
    model.add(keras.layers.MaxPool2D(padding='SAME'))
    model.add(keras.layers.Flatten())
    model.add(keras.layers.Dense(256, activation=tf.nn.relu))
    model.add(keras.layers.Dropout(0.4))
    model.add(keras.layers.Dense(10))
    return model
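
Equivalently (a hedged sketch; passing a list of layers to the keras.Sequential constructor is standard Keras usage, but this cell is not in the original notebook), the same stack can be declared in one expression:

model = keras.Sequential([
    keras.layers.Conv2D(32, 3, padding='SAME', activation=tf.nn.relu, input_shape=(28, 28, 1)),
    keras.layers.MaxPool2D(padding='SAME'),
    keras.layers.Conv2D(64, 3, padding='SAME', activation=tf.nn.relu),
    keras.layers.MaxPool2D(padding='SAME'),
    keras.layers.Conv2D(128, 3, padding='SAME', activation=tf.nn.relu),
    keras.layers.MaxPool2D(padding='SAME'),
    keras.layers.Flatten(),
    keras.layers.Dense(256, activation=tf.nn.relu),
    keras.layers.Dropout(0.4),
    keras.layers.Dense(10),
])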
In [0]:
model = create_model() # build the model
model.summary() # print a summary of the model
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 28, 28, 32)        320       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 32)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 14, 14, 64)        18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 7, 7, 64)          0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 7, 7, 128)         73856     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 4, 4, 128)         0         
_________________________________________________________________
flatten (Flatten)            (None, 2048)              0         
_________________________________________________________________
dense (Dense)                (None, 256)               524544    
_________________________________________________________________
dropout (Dropout)            (None, 256)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 10)                2570      
=================================================================
Total params: 619,786
Trainable params: 619,786
Non-trainable params: 0
_________________________________________________________________

8. Loss Function (cross_entropy)

In [0]:
def loss_fn(model, images, labels):
    logits = model(images, training=True)
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2( # applies softmax to the logits internally
            logits=logits, labels=labels))    
    return loss   

9. Calculating Gradient

In [0]:
def grad(model, images, labels):
    with tf.GradientTape() as tape: # records every operation run in its scope for automatic differentiation
        loss = loss_fn(model, images, labels)
    return tape.gradient(loss, model.variables)

10. Calculating Model's Accuracy

In [0]:
def evaluate(model, images, labels):
    logits = model(images, training=False)
    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1)) # compare predicted labels against the true labels
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # fraction of predictions that match
    return accuracy

11. Optimizer (Adam)

In [0]:
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) # learning_rate set above as a hyperparameter (0.001)

12. Creating a Checkpoint

In [0]:
checkpoint = tf.train.Checkpoint(cnn=model) # groups trackable objects so they can be saved and restored later

13. Training

In [0]:
# train my model
print('Learning started. It takes some time.')
for epoch in range(training_epochs): # set above as a hyperparameter (training_epochs = 15)
    
    # reset the running totals
    avg_loss = 0.
    avg_train_acc = 0.
    avg_test_acc = 0.
    train_step = 0
    test_step = 0
    
    # Training
    for images, labels in train_dataset:
        grads = grad(model, images, labels)                     # gradients computed via GradientTape
        optimizer.apply_gradients(zip(grads, model.variables))  # apply the gradients to the weights
        loss = loss_fn(model, images, labels)                   # loss for this batch
        acc = evaluate(model, images, labels)                   # accuracy for this batch
        avg_loss = avg_loss + loss                              # accumulate the loss
        avg_train_acc = avg_train_acc + acc                     # accumulate the accuracy
        train_step += 1                                         # count the steps in this epoch
    avg_loss = avg_loss / train_step                            # average loss over the epoch
    avg_train_acc = avg_train_acc / train_step                  # average training accuracy

    # Test
    for images, labels in test_dataset:        
        acc = evaluate(model, images, labels)                  # accuracy for this batch
        avg_test_acc = avg_test_acc + acc                      # accumulate the accuracy
        test_step += 1                                         # count the steps
    avg_test_acc = avg_test_acc / test_step                    # average test accuracy

    # print the loss and accuracy for each epoch
    print('Epoch:', '{}'.format(epoch + 1), 'loss =', '{:.8f}'.format(avg_loss), 
          'train accuracy = ', '{:.4f}'.format(avg_train_acc), 
          'test accuracy = ', '{:.4f}'.format(avg_test_acc))
    
    # save the model's weights for this epoch
    checkpoint.save(file_prefix=checkpoint_prefix)

print('Learning Finished!')
Learning started. It takes some time.
Epoch: 1 loss = 0.17178966 train accuracy =  0.9592 test accuracy =  0.9841
Epoch: 2 loss = 0.04375326 train accuracy =  0.9906 test accuracy =  0.9893
Epoch: 3 loss = 0.03280596 train accuracy =  0.9932 test accuracy =  0.9924
Epoch: 4 loss = 0.02246016 train accuracy =  0.9957 test accuracy =  0.9919
Epoch: 5 loss = 0.01764675 train accuracy =  0.9965 test accuracy =  0.9934
Epoch: 6 loss = 0.01566134 train accuracy =  0.9974 test accuracy =  0.9920
Epoch: 7 loss = 0.01119288 train accuracy =  0.9981 test accuracy =  0.9927
Epoch: 8 loss = 0.01164061 train accuracy =  0.9980 test accuracy =  0.9879
Epoch: 9 loss = 0.00928981 train accuracy =  0.9986 test accuracy =  0.9921
Epoch: 10 loss = 0.00828191 train accuracy =  0.9987 test accuracy =  0.9936
Epoch: 11 loss = 0.00749216 train accuracy =  0.9990 test accuracy =  0.9937
Epoch: 12 loss = 0.00621789 train accuracy =  0.9993 test accuracy =  0.9926
Epoch: 13 loss = 0.00654142 train accuracy =  0.9992 test accuracy =  0.9932
Epoch: 14 loss = 0.00561295 train accuracy =  0.9993 test accuracy =  0.9925
Epoch: 15 loss = 0.00490414 train accuracy =  0.9994 test accuracy =  0.9928
Learning Finished!