CNN(sequential)_model_fit

Training a CNN Sequential Model

In [0]:
# Runtime -> Change runtime type -> set the hardware accelerator to TPU
%tensorflow_version 2.x
# Runtime -> Restart runtime
TensorFlow 2.x selected.
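Note: switching the Colab accelerator to TPU does not by itself make Keras use it; the code has to attach to the TPU explicitly, which this notebook never does, so the training below runs on CPU/GPU. Purely for reference, a minimal sketch of what attaching would look like with the TF 2.1-era API (not executed here):

import os
import tensorflow as tf

# Hypothetical sketch only -- this notebook does not run it.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
    tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])   # Colab exposes the TPU address in this env var
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

# The model would then need to be built inside the strategy scope, e.g.
# with strategy.scope():
#     model = create_model()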
In [0]:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

1. Importing Libraries

In [0]:
import tensorflow as tf 
from tensorflow import keras
from tensorflow.keras.utils import to_categorical # one-hot encoding
import numpy as np
import matplotlib.pyplot as plt
import os

print(tf.__version__)     # check the TensorFlow version (Colab's default is 1.15.0) --> switched to 2.x via "%tensorflow_version 2.x"
print(keras.__version__)  # check the Keras version
2.1.0-rc1
2.2.4-tf

2. Hyperparameters

In [0]:
learning_rate = 0.001
training_epochs = 50
batch_size = 100

3. MNIST Data

In [0]:
mnist = keras.datasets.mnist
class_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
In [0]:
# Load the MNIST images (train, test)
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()    

# Normalize the pixel values (0~255) into the [0, 1] range
train_images = train_images.astype(np.float32) / 255.
test_images = test_images.astype(np.float32) / 255.

# np.expand_dims adds a channel axis: (28, 28) -> (28, 28, 1)
train_images = np.expand_dims(train_images, axis=-1)
test_images = np.expand_dims(test_images, axis=-1)

# One-hot encode the labels
train_labels = to_categorical(train_labels, 10)
test_labels = to_categorical(test_labels, 10) 
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
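An optional quick shape check confirms the preprocessing: the images gain a channel axis and the labels become 10-dimensional one-hot vectors.

print(train_images.shape, train_labels.shape)  # (60000, 28, 28, 1) (60000, 10)
print(test_images.shape, test_labels.shape)    # (10000, 28, 28, 1) (10000, 10)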

4. Model Function

In [0]:
# Build the Sequential model layer by layer
def create_model():
    model = keras.Sequential() # start a Sequential model
    model.add(keras.layers.Conv2D(filters=32, kernel_size=3, activation=tf.nn.relu, padding='SAME', 
                                  input_shape=(28, 28, 1)))
    model.add(keras.layers.MaxPool2D(padding='SAME'))
    model.add(keras.layers.Conv2D(filters=64, kernel_size=3, activation=tf.nn.relu, padding='SAME'))
    model.add(keras.layers.MaxPool2D(padding='SAME'))
    model.add(keras.layers.Conv2D(filters=128, kernel_size=3, activation=tf.nn.relu, padding='SAME'))
    model.add(keras.layers.MaxPool2D(padding='SAME'))
    model.add(keras.layers.Flatten())
    model.add(keras.layers.Dense(256, activation=tf.nn.relu))
    model.add(keras.layers.Dropout(0.4))
    model.add(keras.layers.Dense(10, activation=tf.nn.softmax))
    return model
In [0]:
model = create_model() # build the model and assign it to `model`
model.summary() # print a summary of the model
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_3 (Conv2D)            (None, 28, 28, 32)        320       
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 14, 14, 32)        0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 14, 14, 64)        18496     
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 7, 7, 64)          0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 7, 7, 128)         73856     
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 4, 4, 128)         0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 2048)              0         
_________________________________________________________________
dense_2 (Dense)              (None, 256)               524544    
_________________________________________________________________
dropout_1 (Dropout)          (None, 256)               0         
_________________________________________________________________
dense_3 (Dense)              (None, 10)                2570      
=================================================================
Total params: 619,786
Trainable params: 619,786
Non-trainable params: 0
_________________________________________________________________
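As a sanity check on the summary, each Conv2D parameter count is kernel_height · kernel_width · input_channels · filters + filters (biases), and each Dense count is inputs · units + units:

conv2d_3: 3·3·1·32 + 32 = 320
conv2d_4: 3·3·32·64 + 64 = 18,496
conv2d_5: 3·3·64·128 + 128 = 73,856
dense_2:  2048·256 + 256 = 524,544
dense_3:  256·10 + 10 = 2,570

The 2048 inputs to dense_2 come from the Flatten layer: with padding='SAME', each MaxPool2D halves the spatial size (rounding up), so 28 -> 14 -> 7 -> 4, and 4·4·128 = 2048.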

5. Training

In [0]:
# Finalize the CNN architecture and compile the model
model.compile(loss='categorical_crossentropy',      # cross-entropy loss
              optimizer='adam',                      # Adam optimizer
              metrics=['accuracy'])                  # metric: accuracy

# Run training
history = model.fit(train_images, train_labels,                # training inputs
          batch_size=batch_size,                      # 100 samples per batch
          epochs=training_epochs,                     # train for 50 epochs
          verbose=1,                                  # verbose controls how much progress output is printed
          validation_data=(test_images, test_labels)) # use the test set as validation data

score = model.evaluate(test_images, test_labels, verbose=0) # evaluate on the test set
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Train on 60000 samples, validate on 10000 samples
Epoch 1/50
60000/60000 [==============================] - 91s 2ms/sample - loss: 0.2028 - accuracy: 0.9346 - val_loss: 0.0369 - val_accuracy: 0.9878
Epoch 2/50
60000/60000 [==============================] - 91s 2ms/sample - loss: 0.0568 - accuracy: 0.9829 - val_loss: 0.0312 - val_accuracy: 0.9902
Epoch 3/50
60000/60000 [==============================] - 91s 2ms/sample - loss: 0.0400 - accuracy: 0.9878 - val_loss: 0.0310 - val_accuracy: 0.9902
Epoch 4/50
60000/60000 [==============================] - 90s 2ms/sample - loss: 0.0326 - accuracy: 0.9898 - val_loss: 0.0272 - val_accuracy: 0.9912
Epoch 5/50
60000/60000 [==============================] - 90s 2ms/sample - loss: 0.0248 - accuracy: 0.9921 - val_loss: 0.0218 - val_accuracy: 0.9927
Epoch 6/50
60000/60000 [==============================] - 90s 2ms/sample - loss: 0.0201 - accuracy: 0.9938 - val_loss: 0.0232 - val_accuracy: 0.9928
Epoch 7/50
60000/60000 [==============================] - 91s 2ms/sample - loss: 0.0182 - accuracy: 0.9944 - val_loss: 0.0221 - val_accuracy: 0.9936
Epoch 8/50
60000/60000 [==============================] - 91s 2ms/sample - loss: 0.0145 - accuracy: 0.9951 - val_loss: 0.0253 - val_accuracy: 0.9923
Epoch 9/50
60000/60000 [==============================] - 90s 2ms/sample - loss: 0.0140 - accuracy: 0.9955 - val_loss: 0.0237 - val_accuracy: 0.9923
Epoch 10/50
60000/60000 [==============================] - 91s 2ms/sample - loss: 0.0123 - accuracy: 0.9962 - val_loss: 0.0234 - val_accuracy: 0.9928
Epoch 11/50
60000/60000 [==============================] - 90s 2ms/sample - loss: 0.0112 - accuracy: 0.9963 - val_loss: 0.0258 - val_accuracy: 0.9933
Epoch 12/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0106 - accuracy: 0.9966 - val_loss: 0.0313 - val_accuracy: 0.9926
Epoch 13/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0079 - accuracy: 0.9974 - val_loss: 0.0285 - val_accuracy: 0.9925
Epoch 14/50
60000/60000 [==============================] - 90s 2ms/sample - loss: 0.0090 - accuracy: 0.9972 - val_loss: 0.0256 - val_accuracy: 0.9937
Epoch 15/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0086 - accuracy: 0.9971 - val_loss: 0.0311 - val_accuracy: 0.9934
Epoch 16/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0083 - accuracy: 0.9973 - val_loss: 0.0270 - val_accuracy: 0.9928
Epoch 17/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0066 - accuracy: 0.9977 - val_loss: 0.0311 - val_accuracy: 0.9918
Epoch 18/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0061 - accuracy: 0.9979 - val_loss: 0.0237 - val_accuracy: 0.9939
Epoch 19/50
60000/60000 [==============================] - 89s 1ms/sample - loss: 0.0057 - accuracy: 0.9980 - val_loss: 0.0331 - val_accuracy: 0.9925
Epoch 20/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0051 - accuracy: 0.9983 - val_loss: 0.0508 - val_accuracy: 0.9896
Epoch 21/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0062 - accuracy: 0.9980 - val_loss: 0.0441 - val_accuracy: 0.9918
Epoch 22/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0061 - accuracy: 0.9981 - val_loss: 0.0343 - val_accuracy: 0.9924
Epoch 23/50
60000/60000 [==============================] - 89s 1ms/sample - loss: 0.0050 - accuracy: 0.9985 - val_loss: 0.0310 - val_accuracy: 0.9941
Epoch 24/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0052 - accuracy: 0.9981 - val_loss: 0.0292 - val_accuracy: 0.9938
Epoch 25/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0047 - accuracy: 0.9984 - val_loss: 0.0347 - val_accuracy: 0.9927
Epoch 26/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0049 - accuracy: 0.9983 - val_loss: 0.0356 - val_accuracy: 0.9935
Epoch 27/50
60000/60000 [==============================] - 89s 1ms/sample - loss: 0.0042 - accuracy: 0.9987 - val_loss: 0.0394 - val_accuracy: 0.9922
Epoch 28/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0053 - accuracy: 0.9983 - val_loss: 0.0453 - val_accuracy: 0.9928
Epoch 29/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0055 - accuracy: 0.9982 - val_loss: 0.0315 - val_accuracy: 0.9938
Epoch 30/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0043 - accuracy: 0.9986 - val_loss: 0.0375 - val_accuracy: 0.9930
Epoch 31/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0042 - accuracy: 0.9988 - val_loss: 0.0376 - val_accuracy: 0.9927
Epoch 32/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0046 - accuracy: 0.9985 - val_loss: 0.0386 - val_accuracy: 0.9935
Epoch 33/50
60000/60000 [==============================] - 89s 1ms/sample - loss: 0.0033 - accuracy: 0.9990 - val_loss: 0.0449 - val_accuracy: 0.9930
Epoch 34/50
60000/60000 [==============================] - 89s 1ms/sample - loss: 0.0048 - accuracy: 0.9986 - val_loss: 0.0430 - val_accuracy: 0.9928
Epoch 35/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0028 - accuracy: 0.9990 - val_loss: 0.0424 - val_accuracy: 0.9933
Epoch 36/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0063 - accuracy: 0.9982 - val_loss: 0.0448 - val_accuracy: 0.9915
Epoch 37/50
60000/60000 [==============================] - 89s 1ms/sample - loss: 0.0029 - accuracy: 0.9991 - val_loss: 0.0370 - val_accuracy: 0.9938
Epoch 38/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0024 - accuracy: 0.9992 - val_loss: 0.0327 - val_accuracy: 0.9940
Epoch 39/50
60000/60000 [==============================] - 89s 1ms/sample - loss: 0.0023 - accuracy: 0.9993 - val_loss: 0.0352 - val_accuracy: 0.9941
Epoch 40/50
60000/60000 [==============================] - 89s 1ms/sample - loss: 0.0026 - accuracy: 0.9994 - val_loss: 0.0343 - val_accuracy: 0.9942
Epoch 41/50
60000/60000 [==============================] - 89s 1ms/sample - loss: 0.0046 - accuracy: 0.9985 - val_loss: 0.0514 - val_accuracy: 0.9912
Epoch 42/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0036 - accuracy: 0.9989 - val_loss: 0.0397 - val_accuracy: 0.9937
Epoch 43/50
60000/60000 [==============================] - 89s 1ms/sample - loss: 0.0046 - accuracy: 0.9988 - val_loss: 0.0378 - val_accuracy: 0.9936
Epoch 44/50
60000/60000 [==============================] - 89s 1ms/sample - loss: 0.0025 - accuracy: 0.9992 - val_loss: 0.0370 - val_accuracy: 0.9944
Epoch 45/50
60000/60000 [==============================] - 89s 1ms/sample - loss: 0.0048 - accuracy: 0.9988 - val_loss: 0.0452 - val_accuracy: 0.9925
Epoch 46/50
60000/60000 [==============================] - 89s 1ms/sample - loss: 0.0026 - accuracy: 0.9992 - val_loss: 0.0461 - val_accuracy: 0.9927
Epoch 47/50
60000/60000 [==============================] - 89s 1ms/sample - loss: 0.0034 - accuracy: 0.9992 - val_loss: 0.0441 - val_accuracy: 0.9940
Epoch 48/50
60000/60000 [==============================] - 89s 1ms/sample - loss: 0.0036 - accuracy: 0.9990 - val_loss: 0.0357 - val_accuracy: 0.9941
Epoch 49/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0036 - accuracy: 0.9991 - val_loss: 0.0498 - val_accuracy: 0.9932
Epoch 50/50
60000/60000 [==============================] - 90s 1ms/sample - loss: 0.0033 - accuracy: 0.9991 - val_loss: 0.0407 - val_accuracy: 0.9943
Test loss: 0.04074125387043765
Test accuracy: 0.9943
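Note that the learning_rate hyperparameter defined in section 2 is never actually used: compile() receives the optimizer as the string 'adam', so Keras falls back to Adam's default learning rate of 0.001 (which happens to match). A minimal sketch of wiring the hyperparameter through, if that is intended:

model.compile(loss='categorical_crossentropy',
              optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
              metrics=['accuracy'])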

6. Visualization

In [0]:
import matplotlib.pyplot as plt
import numpy as np
import os

# Functions that draw line plots from the history object returned by model.fit

def plot_acc(history, title=None):        # accuracy visualization
    # summarize history for accuracy
    if not isinstance(history, dict):
        history = history.history

    plt.plot(history['accuracy'])        # accuracy
    plt.plot(history['val_accuracy'])    # validation accuracy
    if title is not None:
        plt.title(title)
    plt.ylabel('Accuracy')
    plt.xlabel('Epoch')
    plt.legend(['Training data', 'Validation data'], loc=0)
    # plt.show()


def plot_loss(history, title=None):     # Loss Visualization
    # summarize history for loss
    if not isinstance(history, dict):
        history = history.history

    plt.plot(history['loss'])           # loss
    plt.plot(history['val_loss'])       # validation loss
    if title is not None:
        plt.title(title)
    plt.ylabel('Loss')
    plt.xlabel('Epoch')
    plt.legend(['Training data', 'Validation data'], loc=0)
    # plt.show()
In [0]:
# Visualization
plot_acc(history, '(a) Accuracy')  # accuracy over the course of training
plt.show()
plot_loss(history, '(b) Loss')     # loss over the course of training
plt.show()

https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/train/exponential_decay

tf.train.exponential_decay applies exponential decay to the learning rate; when training a model, it is often recommended to lower the learning rate as training progresses. Its signature, the decay formula, and the example from the docs:

tf.train.exponential_decay(
    learning_rate,
    global_step,
    decay_steps,
    decay_rate,
    staircase=False,
    name=None
)

decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)

global_step = tf.Variable(0, trainable=False)
starter_learning_rate = 0.1
learning_rate = tf.compat.v1.train.exponential_decay(starter_learning_rate, global_step,
                                                     100000, 0.96, staircase=True)
# Passing global_step to minimize() will increment it at each step.
learning_step = (
    tf.compat.v1.train.GradientDescentOptimizer(learning_rate)
    .minimize(...my loss..., global_step=global_step)
)

 

In the example, the first parameter, starter_learning_rate, is literally the learning rate used at the start of training.

The second parameter, global_step, is the current training step.

The third parameter, decay_steps (100000 in the example), is the number of steps over which one full decay is applied.

The fourth parameter, decay_rate, determines how much the rate shrinks: the learning rate is multiplied by 0.96 once every decay_steps steps.

The fifth parameter, staircase, controls whether the decay is applied discretely: when it is True, the exponent (global_step / decay_steps) is truncated to an integer, so the learning rate drops in discrete steps rather than decaying continuously.



Source: https://twinw.tistory.com/243 [흰고래의꿈]
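Since this notebook runs TensorFlow 2.x, the same schedule can also be written with the Keras API instead of the tf.compat.v1 call; a minimal sketch reusing the values from the example above:

import tensorflow as tf

# Exponential-decay schedule equivalent to the v1 example above
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1,   # starter_learning_rate
    decay_steps=100000,
    decay_rate=0.96,
    staircase=True)

# The schedule object is passed directly as the optimizer's learning rate
optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule)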

  1. Save your Colab notebook.
  2. File > Download .ipynb
  3. On your terminal:
    jupyter nbconvert --to <output format> <filename.ipynb>
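For example, to export this notebook as HTML (the filename here is only an assumption for illustration):

    jupyter nbconvert --to html CNN_sequential_model_fit.ipynb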

 

Run the command in a cmd window (on Windows).

Source: https://torbjornzetterlund.com/how-to-save-a-google-colab-notebook-as-html/

The Matthews correlation coefficient (MCC) is used in machine learning as a measure of the quality of binary and multiclass classification. It takes true and false positives and negatives into account and is generally regarded as a balanced measure that can be used even when the classes are of very different sizes. The MCC is essentially a correlation coefficient between -1 and +1: a coefficient of +1 represents a perfect prediction, 0 an average random prediction, and -1 an inverse prediction. The statistic is also known as the phi coefficient. [Source: Wikipedia]

https://scikit-learn.org/stable/modules/generated/sklearn.metrics.matthews_corrcoef.html

from sklearn.metrics import matthews_corrcoef

# `data` is assumed to be a pandas DataFrame with LABEL, GENDER, AGE, and JOB columns
Label = data["LABEL"].values
Gender = data["GENDER"].values
Age = data["AGE"].values
Job = data["JOB"].values

y_true = Label
y_pred = Job
matthews_corrcoef(y_true, y_pred)

The MCC formula (binary case):

MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))

The Pearson and Spearman correlation coefficients are the parametric and non-parametric choices, respectively, and both apply to continuous data.

If both variables are dichotomous, use the phi coefficient.

If one variable is dichotomous and the other continuous, use the point-biserial correlation coefficient.

https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.pointbiserialr.html

 


from scipy import stats

# Label (dichotomous) vs. Age (continuous), using the columns loaded above
y_true = Label
y_pred = Age

stats.pointbiserialr(y_true, y_pred)  # returns (correlation, pvalue)

 

 

ORA-65096: invalid common user or role name

This error kept coming up, so I looked into it,

and it turns out Oracle has changed completely within a year: in the 12c multitenant architecture, common user names must start with C##.

 

SQL> create user c##n1

identified by n1;

User created.

SQL> grant connect, resource, dba to c##n1;

Grant succeeded.

 
