Convolutional network
We are going to use the keras library:
# devtools::install_github("rstudio/keras")
library(keras)
# install_keras()
library(tidyverse)
library(knitr)
We will again use the MNIST dataset from the fully connected layers class:
mnist <- dataset_mnist()
x_train <- mnist$train$x
y_train <- mnist$train$y
x_test <- mnist$test$x
y_test <- mnist$test$y
Let's recall what the data looks like.
Input data
matrix.rotate <- function(img) {
  # rotate the matrix so image() displays the digit upright
  t(apply(img, 2, rev))
}
par(mfrow=c(3, 3))
for (idx in 1:9) {
  label <- y_train[idx]
  image(matrix.rotate(x_train[idx,,]), col = grey(level = seq(1, 0, by=-1/255)), axes=F, main=label)
}
The data is stored in a 3-dimensional array (image, width, height). Since we have 60K images, it has the shape:
dim(x_train)
[1] 60000 28 28
Dimensions of the problem:
Let's define the following dimensions of the problem as variables (this makes it easier to reuse the code):
num_classes <- 10
img_rows <- 28
img_cols <- 28
In a standard Machine Learning classification problem we have 2 dimensions, rows and columns, where the first represents the observations and the second the sequence of features.
In the case of convolutional networks we need 4-dimensional data:
x_train <- array_reshape(x_train, c(nrow(x_train), img_rows, img_cols, 1))  # add the channel dimension
x_test <- array_reshape(x_test, c(nrow(x_test), img_rows, img_cols, 1))
input_shape <- c(img_rows, img_cols, 1)
x_train <- x_train / 255  # rescale pixel values from [0, 255] to [0, 1]
x_test <- x_test / 255
cat('x_train_shape:', dim(x_train), '\n')
x_train_shape: 60000 28 28 1
cat(nrow(x_train), 'train samples\n')
60000 train samples
cat(nrow(x_test), 'test samples\n')
10000 test samples
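As a quick sanity check (a sketch using the objects defined above), the rescaled pixel values should now lie in [0, 1]:
range(x_train)
# [1] 0 1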
Output data
We need to convert it to one-hot encoding; this is done with Keras's to_categorical() function:
y_train <- to_categorical(y_train, num_classes)
y_test <- to_categorical(y_test, num_classes)
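As a small illustration of what to_categorical() produces (a sketch, not part of the pipeline), each integer label becomes a one-hot row:
to_categorical(c(0, 1, 2), num_classes = 3)
#      [,1] [,2] [,3]
# [1,]    1    0    0
# [2,]    0    1    0
# [3,]    0    0    1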
To build the model, we first define the model type. For that we use keras_model_sequential(), which lets us simply stack the network's layers with %>%.
One of those layers is layer_dropout(rate), which, at each iteration of the fit, ignores a fraction rate of the connections. This prevents overfitting of the model.
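Before assembling the full model, here is a minimal numeric sketch of the idea (dropout_once is a hypothetical helper for intuition only, not a Keras function): at each pass a random fraction rate of the activations is zeroed, and the survivors are rescaled to preserve the expected value, which is the "inverted dropout" scheme Keras applies at training time.
dropout_once <- function(a, rate = 0.25) {
  mask <- runif(length(a)) >= rate  # keep each unit with probability 1 - rate
  a * mask / (1 - rate)             # rescale survivors to preserve the expected value
}
dropout_once(c(1, 2, 3, 4))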
model <- keras_model_sequential() %>%
layer_conv_2d(filters = 32, kernel_size = c(3,3), activation = 'relu',
input_shape = input_shape) %>%
layer_conv_2d(filters = 64, kernel_size = c(3,3), activation = 'relu') %>%
layer_max_pooling_2d(pool_size = c(2, 2)) %>%
layer_dropout(rate = 0.25) %>%
layer_flatten() %>%
layer_dense(units = 128, activation = 'relu') %>%
layer_dropout(rate = 0.5) %>%
layer_dense(units = num_classes, activation = 'softmax')
The architecture of this network is basically the following:
layer_conv_2d
(Figures: convolution [2], kernel movement [3], face filters [4], color filters [5])
The convolution layer builds small filters or kernels of dimension kernel_size that pass over the original input performing a convolution.
The kernel sweeps the original image, moving strides positions at a time. By default it moves 1 position.
Note that if the filter is 3x3 and the stride is 1, the original image loses 2 pixels of height and 2 pixels of width.
For each of the previous layer's outputs, D_in, as many kernels as D_out are generated. Just as in the fully connected layers, the convolved matrices from the previous layers are summed, the bias is added, and the result is passed through the activation function.
(Figures: filter composition [6], sum of kernels [7], bias [8])
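To make the size bookkeeping concrete, here is a minimal sketch of a single "valid" convolution in base R (conv2d_valid is a hypothetical helper for illustration, not the Keras implementation): a 3x3 kernel over a 28x28 image yields a 26x26 output, matching the shapes in the model summary further below.
conv2d_valid <- function(img, kernel) {
  kh <- nrow(kernel); kw <- ncol(kernel)
  out <- matrix(0, nrow(img) - kh + 1, ncol(img) - kw + 1)
  for (i in 1:nrow(out)) {
    for (j in 1:ncol(out)) {
      # element-wise product of the kernel and the patch it covers, then sum
      out[i, j] <- sum(img[i:(i + kh - 1), j:(j + kw - 1)] * kernel)
    }
  }
  out
}
dim(conv2d_valid(x_train[1, , , 1], matrix(1/9, 3, 3)))
# [1] 26 26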
layer_max_pooling_2d
(Figure: max pooling [9])
Max pooling is a way of reducing the size of the matrix.
Like the convolution, it sweeps the image with a window of pool_size, moving strides positions at a time, and returns the highest value.
A pool_size of 2x2 reduces the size of the image by half.
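A minimal base-R sketch of 2x2 max pooling with stride 2 (max_pool_2x2 is a hypothetical helper, for intuition only):
max_pool_2x2 <- function(m) {
  rows <- seq(1, nrow(m), by = 2)
  cols <- seq(1, ncol(m), by = 2)
  # take the maximum of each non-overlapping 2x2 window
  outer(rows, cols, Vectorize(function(i, j) max(m[i:(i + 1), j:(j + 1)])))
}
max_pool_2x2(matrix(1:16, 4, 4))  # 4x4 input -> 2x2 output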
layer_dropout
(Figure: dropout [10])
rate is the proportion of connections that are dropped. This way, not everything is fitted all the time, which reduces the model's degrees of freedom and avoids overfitting.
layer_flatten
(Figure: flatten [11])
layer_dense
(Figure: dense)
For this model we use the same two activation functions that we used in the fully connected NN, defined in code and graphically:
relu <- function(x) ifelse(x >= 0, x, 0)
softmax <- function(x) exp(x) / sum(exp(x))
data.frame(x= seq(from=-1, to=1, by=0.1)) %>%
mutate(softmax = softmax(x),
relu = relu(x)) %>%
gather(variable,value,2:3) %>%
ggplot(., aes(x=x, y=value, group=variable, colour=variable))+
geom_line(size=1) +
ggtitle("ReLU & Softmax")+
theme_minimal()
ReLU is the activation function most widely used today.
model
The model has 1.2 million parameters to optimize:
The first convolutional layer has to train its filters. Since these are 3x3, each one has 9 parameters to train, plus 1 bias per filter:
32* (3*3) +32
[1] 320
The second convolution has to train 3x3 kernels for 64 filters, 64*(3*3), for each of the 32 filters of the previous layer, plus 1 bias per filter:
64*(3*3)*32 +64
[1] 18496
layer_max_pooling_2d, layer_dropout and layer_flatten do not train any parameters.
After the two 3x3 convolutions the 28x28 image has become 24x24, and max pooling has halved it to 12x12 over 64 feature maps. When we flatten, the shape becomes:
12*12*64
[1] 9216
The dense layer then connects all 9216 inputs to 128 units, plus 1 bias per unit:
128*9216 +128
[1] 1179776
Finally, the output layer connects the 128 units to the 10 classes, plus 1 bias per class:
128*10 +10
[1] 1290
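Adding up the four trainable layers reproduces the total reported in the model summary below:
320 + 18496 + 1179776 + 1290
# [1] 1199882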
Then we need to compile the model, indicating the loss function, which optimizer to use, and which metrics we care about:
model <- model %>% compile(
loss = "categorical_crossentropy",
optimizer = optimizer_adadelta(),
metrics = c('accuracy')
)
To fit the model we use the fit() function; here we need to pass the following parameters:
epochs: how many times it will go through the training dataset
batch_size: how many images it looks at in each iteration of the backpropagation
validation_split: we split into train and validation in order to evaluate the metrics
epochs <- 12
batch_size <- 128
validation_split <- 0.2
fit_history <- model %>% fit(
x_train, y_train,
batch_size = batch_size,
epochs = epochs,
validation_split = validation_split
)
While the model trains, we can watch its evolution in the interactive plot generated in the RStudio viewer.
fit_history
Trained on 48,000 samples, validated on 12,000 samples (batch_size=128, epochs=12)
Final epoch (plot to see history):
acc: 0.9929
loss: 0.02294
val_acc: 0.9902
val_loss: 0.0382
fit() returns an object that includes the loss and accuracy metrics.
We can plot this object with plot(), which returns a ggplot object that we can keep working on:
plot(fit_history)+
theme_minimal()+
labs(title= "Evolution of loss and accuracy in train and validation")
It is important to save the model after training, so that we can reuse it:
model %>% save_model_hdf5("../Resultados/cnn_model.h5")
and to load it:
modelo_preentrenado <- load_model_hdf5("../Resultados/cnn_model.h5")
modelo_preentrenado
Model
__________________________________________________________________________________________________________
Layer (type) Output Shape Param #
==========================================================================================================
conv2d_1 (Conv2D) (None, 26, 26, 32) 320
__________________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 24, 24, 64) 18496
__________________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 12, 12, 64) 0
__________________________________________________________________________________________________________
dropout_3 (Dropout) (None, 12, 12, 64) 0
__________________________________________________________________________________________________________
flatten_1 (Flatten) (None, 9216) 0
__________________________________________________________________________________________________________
dense_4 (Dense) (None, 128) 1179776
__________________________________________________________________________________________________________
dropout_4 (Dropout) (None, 128) 0
__________________________________________________________________________________________________________
dense_5 (Dense) (None, 10) 1290
==========================================================================================================
Total params: 1,199,882
Trainable params: 1,199,882
Non-trainable params: 0
__________________________________________________________________________________________________________
If we want to evaluate the model on the test set (distinct from the validation set) we can use the evaluate() function:
modelo_preentrenado %>% evaluate(x_test, y_test)
10000/10000 [==============================] - 17s 2ms/sample - loss: 0.0267 - acc: 0.9925
$loss
[1] 0.02666184
$acc
[1] 0.9925
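Since evaluate() returns a named list, the metrics can also be used programmatically (a sketch, assuming the objects defined above):
scores <- modelo_preentrenado %>% evaluate(x_test, y_test, verbose = 0)
cat('test accuracy:', scores$acc, '\n')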
To obtain the predictions on a new dataset we use predict_classes():
modelo_preentrenado %>% predict_classes(x_test) %>% head(.)
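To see where the model makes mistakes we can cross the predictions with the true labels (a sketch; recall that y_test was one-hot encoded above, so we first recover the integer labels):
preds <- modelo_preentrenado %>% predict_classes(x_test)
labels <- apply(y_test, 1, which.max) - 1  # column k corresponds to digit k - 1
table(predicted = preds, actual = labels)  # confusion matrix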
Other interesting resources:
Visualization of a fully connected network for digit classification
1. These notes are based on https://tensorflow.rstudio.com/keras/#tutorials
2. https://ailephant.com/computer-vision-convolutional-neural-networks/
4. https://devblogs.nvidia.com/deep-learning-nutshell-core-concepts/hierarchical_features/
5. https://ailephant.com/computer-vision-convolutional-neural-networks/
6. https://towardsdatascience.com/a-comprehensive-introduction-to-different-types-of-convolutions-in-deep-learning-669281e58215
7. https://towardsdatascience.com/intuitively-understanding-convolutions-for-deep-learning-1f6f42faee1
8. https://towardsdatascience.com/intuitively-understanding-convolutions-for-deep-learning-1f6f42faee1
9. https://computersciencewiki.org/index.php/File:MaxpoolSample2.png
10. http://jmlr.org/papers/volume15/srivastava14a.old/srivastava14a.pdf
11. https://rubikscode.net/2018/02/26/introduction-to-convolutional-neural-networks/