# Neural Networks and Deep Learning with SAS® Viya®

### Introduction

Neural networks started to become popular predictive modelling algorithms more than a decade ago. They were used in manufacturing, marketing, predictive maintenance and many other domains. However, their lower level of interpretability (especially when compared to more traditional modelling methods like regression or decision trees) constrained their use in strictly regulated industries and domains like financial services or pharmaceuticals. At that time, the available computing power also limited the use of artificial neural networks for complex prediction and analysis tasks.

With the increase in available computing capacity, memory and processor speed, neural networks were able to grow in complexity. Increasing complexity means more neurons, organised into more layers, with each layer specialising in a particular aspect of the learning task. This enables appropriate learning of the problem to solve or the target to predict. With growing complexity, neural networks are now able to support predictive tasks on big data, unstructured data mining, bioinformatics and many more applications.

Deep learning is a specific family of neural networks, applied in natural language processing, computer vision, object detection, speech recognition and autonomous systems. Deep learning essentially refers to a neural network with multiple layers, where subsequent layers of neurons learn increasingly complex patterns in the data. Taking facial recognition in the area of computer vision as an example, lower neuron layers learn relatively simple structures of an image like edges or colour boundaries, while higher layers tend to learn more complex structures like a human eye or nose, or even a complete face – built up from the simple structures in the lower layers.
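The idea of later layers building more complex features out of simpler ones can be sketched as a forward pass through a small multi-layer network. This is a minimal illustration in Python with NumPy; the layer sizes, random weights and activation choices here are arbitrary assumptions for the sketch, not taken from the SAS example:

```python
import numpy as np

rng = np.random.default_rng(0)

def tanh_layer(x, W, b):
    # Each layer recombines the previous layer's outputs (simpler features)
    # into new, more complex features.
    return np.tanh(x @ W + b)

x = rng.normal(size=(1, 8))                  # raw inputs (e.g. pixel intensities)
W1, b1 = rng.normal(size=(8, 6)), np.zeros(6)
W2, b2 = rng.normal(size=(6, 4)), np.zeros(4)
W3, b3 = rng.normal(size=(4, 1)), np.zeros(1)

h1 = tanh_layer(x, W1, b1)    # first hidden layer: low-level patterns (edges)
h2 = tanh_layer(h1, W2, b2)   # second hidden layer: combinations (eye, nose)
out = 1 / (1 + np.exp(-(h2 @ W3 + b3)))  # output layer: probability of the event
print(out.shape)  # (1, 1)
```

Each hidden layer can only see the previous layer's outputs, which is exactly why its learned features are compositions of the simpler ones below it.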

SAS Viya enables a data scientist to exploit neural networks and deep learning, using graphical user interfaces and also via programming techniques.

### SAS Viya example

In the following example, we will use neural networks to predict the type of trees covering a wilderness area, using SAS Visual Data Mining and Machine Learning (VDMML) – a software solution available on the SAS Viya platform. The following screenshot shows a sample of the data, which has one observation per wilderness area:

Our binary target variable is Cover_Types_Category (with values Pine and Poplar), and we have input variables holding information about distances to water or roadways, slope, soil type, elevation, and how shaded the area is at various times during the day.

We can start building a neural network model by adding a Neural Network object in the SAS VDMML GUI. First, we need to set the variable roles: Cover_Types_Category as the response and the other relevant variables as predictors. An initial neural network can be built using default settings; however, these can be amended as needed. The following screenshot shows the main settings related to the model architecture and other parameters of the neural network model to be built.

The Event level can be chosen in the case of a categorical response variable (Poplar in our case). The maximum number of optimisation iterations (123 in our case) and a time limit can be set – in practice, however, the neural network algorithm tends to converge to the final model much more quickly. The optimisation method can also be set – usually this does not influence the final model itself, but rather the number of iterations required to find it. L1 and L2 are regularisation parameters: they penalise large weights during training, and the L1 penalty in particular can drive the weights of less significant input variables to zero, effectively excluding those variables from the model.

On the second screenshot above, the neural network architecture can be amended by changing the number of hidden layers and the number of neurons on each hidden layer. Neural networks with a higher number of hidden layers can be considered deep learning. Activation functions can be set for each hidden layer separately – the activation function is the (typically nonlinear) function each neuron applies to its weighted inputs to produce its output.

Finally, as the third screenshot above shows, the parameters discussed earlier can be optimised using the Autotune functionality within VDMML. Autotune varies the network architecture and the training parameters to find the optimal combination. This is performed automatically by SAS Viya when selected, and ultimately helps identify the model with the best predictive power among the different settings.
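The effect of the L1 and L2 parameters can be sketched as penalty terms added to the training loss. This is a generic illustration of L1/L2 regularisation, not SAS Viya's internal implementation; the loss function, parameter values and weights below are made up for the example:

```python
import numpy as np

def penalised_loss(errors, weights, l1=0.01, l2=0.01):
    # Base loss (mean squared error, chosen here just for illustration)
    # plus the two penalty terms controlled by the L1 and L2 parameters.
    mse = np.mean(errors ** 2)
    l1_term = l1 * np.sum(np.abs(weights))  # pushes small weights to exactly 0,
                                            # effectively dropping weak inputs
    l2_term = l2 * np.sum(weights ** 2)     # shrinks all weights smoothly
    return mse + l1_term + l2_term

errors = np.array([0.1, -0.2, 0.05])        # residuals on three observations
weights = np.array([0.5, -0.3, 0.0, 1.2])   # current network weights
print(round(penalised_loss(errors, weights), 4))  # 0.0553
```

Because the optimiser now pays a price for every non-zero weight, inputs that contribute little to the fit tend to end up with weights at or near zero, which is the exclusion behaviour described above.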

With our parameters set, SAS Viya has built the model for us. Let’s have a look at the outputs! The following screenshot shows the number of true and false predictions for the event and non-event (Poplar and Pine):

Altogether, the model has a 10.09% misclassification rate for the event (Poplar). Neural networks are usually hard to interpret; however, SAS Viya helps with the interpretation. Let’s have a look at which input variables have an effect on the output:
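The misclassification rate is simply the share of observations the model predicts incorrectly, read off the confusion matrix. The counts below are hypothetical (chosen only so the result matches the 10.09% quoted above), not the actual values from the screenshot:

```python
def misclassification_rate(tp, fp, tn, fn):
    # Share of all observations the model got wrong:
    # (false positives + false negatives) / total.
    return (fp + fn) / (tp + fp + tn + fn)

# Hypothetical confusion-matrix counts, not the screenshot's real numbers:
tp, fp, tn, fn = 450, 55, 441, 45
rate = misclassification_rate(tp, fp, tn, fn)
print(f"{rate:.2%}")  # 10.09%
```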

Elevation has the highest effect, with variables measuring hill shade and distance from various points of interest also proving to be important predictors. A further benefit comes from visualising the neural network itself, as on the following screenshot, which shows the different layers of the network and the connections between the neurons, together with the neurons that have a significant effect in the model:
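One common way to measure which inputs have an effect is permutation importance: shuffle one input column and see how much predictive accuracy drops. This is a generic sketch of that idea in Python, not necessarily how VDMML computes its relative importance values; the toy data and stand-in model below are assumptions for the illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: the target depends almost entirely on column 0 (think: elevation).
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

def model_predict(X):
    # Stand-in for a trained classifier; it only uses column 0.
    return (X[:, 0] > 0).astype(int)

def permutation_importance(X, y, predict):
    base_acc = np.mean(predict(X) == y)
    importances = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break the link to the target
        importances.append(base_acc - np.mean(predict(Xp) == y))
    return importances

imp = permutation_importance(X, y, model_predict)
# Column 0 shows a large accuracy drop; columns 1 and 2 show none.
```

Inputs whose shuffling barely changes the accuracy contribute little to the model, mirroring the ranking shown in the importance chart.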

We can see the input variables used in the final model on the left (input layer), the first hidden layer (with neurons marked Neuron1_xxx), the second hidden layer (with neurons marked Neuron 2_yyy) and the output layer (the response variable). Weight values are also displayed for the final (second) hidden layer in a colour scale to show the different effects neurons have in the model.

### Next steps

The Neural Network object can be copied into an analysis pipeline in Model Studio, allowing further settings to be changed and the neural network architecture to be made even more complex. Additional hidden layers can be added to the architecture to enable deep learning, and autotuning can be used to refine the neural network architecture further.