If you follow the technology scene closely, you've probably heard the term "deep learning".
This technology has revolutionized artificial intelligence, enabling machines and systems that we previously only dreamed of. Essentially, deep learning is a subfield of machine learning that uses deep artificial neural networks.
Easy to use and widely supported, Keras makes working with deep learning very convenient and efficient.
Deep neural networks are becoming increasingly popular, but the difficulty of mastering their underlying structures is a major obstacle for many developers who are new to machine learning.
During 2010-2020, several improved and simplified high-level APIs were proposed for building neural network models. They are all broadly similar, but show significant differences upon closer inspection.
Keras is one of the leading high-level neural network APIs. It is written in Python and supports several backend neural network computation engines. The main idea behind Keras development is to facilitate experimentation through rapid prototyping. The ability to move from idea to result with the least possible lag is key to good machine intelligence research.
Basically it is a high-level library built on top of a backend such as Theano or TensorFlow. Keras provides a scikit-learn-like API for building neural networks. Developers can use Keras to quickly build neural networks without worrying about the mathematical details of tensor algebra, numerical methods, and optimization techniques.
Any developer who has already dealt with the Python environment can get started with Keras without any problems.
To get started, you need R with the devtools package preinstalled.
Next, install the keras R package from GitHub as follows:
devtools::install_github("rstudio/keras")
The Keras R frontend uses the TensorFlow backend engine by default. To install both the Keras core library and the TensorFlow backend, use the install_keras() function:
library(keras)
install_keras()
This will provide you with standard CPU-based Keras and TensorFlow installations. For a more customized installation, such as taking advantage of NVIDIA GPUs, see the documentation for install_keras().
Keras has no special mechanism for loading data from a local disk; simply save your training and test data in an appropriate folder structure and read it yourself.
current directory
└── data
    ├── train
    └── test
If your directory structure looks like this, you can use the following code to load the data:
import os
import numpy as np
from keras.preprocessing import image

PATH = os.getcwd()
train_path = PATH + '/data/train/'
train_batch = os.listdir(train_path)
x_train = []

# if data are in the form of images
for sample in train_batch:
    img_path = train_path + sample
    x = image.load_img(img_path)
    x = image.img_to_array(x)
    x_train.append(x)
The basic Keras data structure is a model, a way of organizing layers. There are two main types of models available in Keras: the sequential model and the Model class used with the functional API. The simplest type is the sequential model: a linear stack of layers, each of which can be described very simply.
Here's an example from the Keras documentation that uses model.add() to define two dense layers in a sequential model:
import keras
from keras.models import Sequential
from keras.layers import Dense

# Create a Sequential model with Dense layers, using the add method
model = Sequential()

# Dense implements the operation:
#   output = activation(dot(input, kernel) + bias)
# Units are the dimensionality of the output space for the layer,
# which equals the number of hidden units
# Activation and loss functions may be specified by strings or classes
model.add(Dense(units=64, activation='relu', input_dim=100))
model.add(Dense(units=10, activation='softmax'))

# The compile method configures the model's learning process
model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])

# The fit method does the training in batches
# x_train and y_train are Numpy arrays -- just like in the Scikit-Learn API
model.fit(x_train, y_train, epochs=5, batch_size=32)

# The evaluate method calculates the losses and metrics for the trained model
loss_and_metrics = model.evaluate(x_test, y_test, batch_size=128)

# The predict method applies the trained model to inputs to generate outputs
classes = model.predict(x_test, batch_size=128)
It's also worth noting how little code this takes compared to, say, the low-level TensorFlow APIs. Each layer definition requires one line of code; compilation (defining the learning process) requires one line; and one line each suffices for fitting (training), evaluating (calculating losses and metrics), and predicting outputs with the trained model.
In the R interface, we start by creating a sequential model and then add layers using the pipe operator (%>%):
model <- keras_model_sequential()
model %>%
  layer_dense(units = 256, activation = 'relu', input_shape = c(784)) %>%
  layer_dropout(rate = 0.4) %>%
  layer_dense(units = 128, activation = 'relu') %>%
  layer_dropout(rate = 0.3) %>%
  layer_dense(units = 10, activation = 'softmax')
The input_shape argument for the first layer defines the shape of the input (a numeric vector of length 784 representing a grayscale image). The last layer outputs a numeric vector of length 10 (probabilities for each digit) using the softmax activation function.
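The softmax activation turns the last layer's raw outputs into a probability distribution over the ten digits. A minimal NumPy sketch of what it computes (illustrative, not the Keras implementation):

```python
import numpy as np

def softmax(z):
    # Subtract the max before exponentiating for numerical stability
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs.sum())   # the outputs always sum to 1
```

The entry with the largest logit gets the highest probability, which is why the index of the maximum output is taken as the predicted digit.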
Use the summary() function to print the details of the model:
summary(model)

Model
___________________________________________________________________________
Layer (type)                     Output Shape                  Param #
===========================================================================
dense_1 (Dense)                  (None, 256)                   200960
___________________________________________________________________________
dropout_1 (Dropout)              (None, 256)                   0
___________________________________________________________________________
dense_2 (Dense)                  (None, 128)                   32896
___________________________________________________________________________
dropout_2 (Dropout)              (None, 128)                   0
___________________________________________________________________________
dense_3 (Dense)                  (None, 10)                    1290
===========================================================================
Total params: 235,146
Trainable params: 235,146
Non-trainable params: 0
___________________________________________________________________________
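The parameter counts in the summary can be verified by hand: a dense layer with n inputs and m units has n * m weights plus m biases, while dropout layers add no parameters. A quick check:

```python
# Dense layer parameters: inputs * units + units (bias terms)
dense_1 = 784 * 256 + 256
dense_2 = 256 * 128 + 128
dense_3 = 128 * 10 + 10
total = dense_1 + dense_2 + dense_3
print(dense_1, dense_2, dense_3, total)  # 200960 32896 1290 235146
```

These match the Param # column and the 235,146 total reported above.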
Then compile the model with the appropriate loss function, optimizer, and metrics:
model %>% compile(
  loss = 'categorical_crossentropy',
  optimizer = optimizer_rmsprop(),
  metrics = c('accuracy')
)
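For intuition, categorical cross-entropy for a single sample is -sum(y_true * log(y_pred)): it penalizes the model for assigning low probability to the true class. A small NumPy sketch (illustrative, not the Keras implementation):

```python
import numpy as np

# One-hot true label (class 2) and a softmax prediction
y_true = np.array([0.0, 0.0, 1.0])
y_pred = np.array([0.1, 0.2, 0.7])

# Categorical cross-entropy: -sum over classes of y_true * log(y_pred)
loss = -np.sum(y_true * np.log(y_pred))
print(round(loss, 4))   # -log(0.7), about 0.3567
```

The closer the predicted probability for the true class gets to 1, the closer the loss gets to 0.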
Image recognition using a trained model
We can learn the basics of Keras by going through a simple example: handwritten digit recognition from the MNIST dataset.
MNIST consists of 28 x 28 px grayscale images, for example:
[Example images: handwritten digits 5, 0, 4 and 1]
The dataset also includes a label for each image, telling us which digit it is. For example, the labels for the images above are 5, 0, 4, and 1.
The MNIST dataset is included in Keras and can be accessed using the dataset_mnist() function.
Here we load the dataset, then create variables for our test and training data:
library(keras)
mnist <- dataset_mnist()
x_train <- mnist$train$x
y_train <- mnist$train$y
x_test <- mnist$test$x
y_test <- mnist$test$y
The x data is a 3D array (image, width, height) of grayscale values. To prepare the data for training, we convert the 3D arrays to matrices by collapsing width and height into a single dimension (each 28x28 image becomes a vector of length 784).
Then we convert grayscale values from integers in the range 0 to 255 to floating point values in the range 0 to 1:
# reshape
x_train <- array_reshape(x_train, c(nrow(x_train), 784))
x_test <- array_reshape(x_test, c(nrow(x_test), 784))
# rescale
x_train <- x_train / 255
x_test <- x_test / 255
Note that we use the array_reshape() function rather than the dim<-() function to change the shape of the array. This ensures that the data is interpreted using row-major semantics (as opposed to R's default column-major semantics), which is consistent with how the numerical libraries called by Keras interpret array dimensions.
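The difference is easy to see in NumPy, whose default row-major ordering matches array_reshape(); order='F' mimics R's native column-major layout:

```python
import numpy as np

# Two 2x2 "images" stored as a 3D array: (image, row, col)
imgs = np.arange(8).reshape(2, 2, 2)

# Row-major reshape (NumPy default; what array_reshape() uses):
# each image's pixels remain in their original order
flat_c = imgs.reshape(2, 4)

# Column-major reshape (R's native layout, as dim<- assumes):
# the pixel order within each row is scrambled relative to row-major
flat_f = imgs.reshape(2, 4, order='F')

print(flat_c[0])   # [0 1 2 3]
print(flat_f[0])   # [0 2 1 3]
```

With row-major semantics each flattened row is one intact image, which is exactly what the dense layers expect.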
The y data is an integer vector with values ranging from 0 to 9.
To prepare this data for training, we one-hot encode the vectors into binary class matrices using the Keras to_categorical() function:
y_train <- to_categorical(y_train, 10)
y_test <- to_categorical(y_test, 10)
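What to_categorical() produces can be sketched in a few lines of NumPy (a toy re-implementation for illustration, not the library source):

```python
import numpy as np

def one_hot(y, num_classes):
    # Build a zero matrix and set a single 1 per row at the label's index
    out = np.zeros((len(y), num_classes))
    out[np.arange(len(y)), y] = 1.0
    return out

labels = np.array([5, 0, 4, 1])    # the example digits from above
encoded = one_hot(labels, 10)
print(encoded[0])   # [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
```

Each row is all zeros except for a single 1 in the column of the true class, which is the format categorical_crossentropy expects.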
This example shows how straightforward it is to implement a neural network for pattern recognition with Keras. Convolutional models can achieve even higher accuracy than standard fully connected networks like this one, and Keras makes them just as convenient to work with.