Artificial Neural Network

An Introduction to Artificial Neural Networks (ANNs) - Basic Terminologies

What is an Artificial Neural Network?
  • An Artificial Neural Network (ANN) is an efficient computing system whose central theme is borrowed from the analogy of biological neural networks. ANNs are sometimes also referred to as Artificial Neural Systems, Parallel Distributed Processing Systems, or Connectionist Systems.
  • An ANN consists of a massive collection of units linked together in a specific pattern that allows them to communicate with one another. These units, called Nodes or Neurons, are simple processors that operate in parallel and are arranged in tiers. Every neuron is connected to other neurons through connection links, and each connection link carries a Weight holding information about the input signal. This is the most useful information a neuron has for solving a particular problem, because the weight usually excites or inhibits the signal being communicated. Each neuron also has an internal state, called its Activation Signal. Finally, an output signal, produced by combining the input signals with the activation rule, may be sent on to other units.
What is a Biological Neural Network?
  • The brain is linked to the rest of the body's sensors and actuators by a complex network of nerves. A nerve cell, or neuron, is a special biological cell that processes information. The brain contains an enormous number of neurons, roughly 10¹¹, and they are the building blocks of the entire central nervous system of a living body.
  • The neuron is the basic component of a neural network. A neuron is a cell that, like every other cell in the body, carries a DNA code and is produced in the same way as other biological cells. Even though each organism's DNA is different, the function is the same. The cell body (also known as the Soma), the dendrites, and the axon are the three major components of a neuron. Dendrites are fiber-like branches that fan out in different directions and connect the cell to the surrounding mass of cells.
  • Dendrites receive signals from neighboring neurons, and the axon carries signals out to the other neurons. A synapse connects the axon terminals of one neuron to the dendrites of the next. The output signal transported by an axon is nothing but an electrical impulse. Axons are extensions of neurons, and every neuron has one. Like a chain of dominoes, axons relay impulses from one neuron to the next.
Structure of a Biological Neural Network
[Image of a biological neural network]
Relationship between BNN and ANN
Biological Neural Network (BNN) | Artificial Neural Network (ANN)
--------------------------------|--------------------------------
Dendrites                       | Inputs
Soma                            | Nodes / Neurons
Synapse                         | Weights
Axon                            | Outputs
Architecture of ANN
  • To understand the design of an artificial neural network, we first need to understand what a neural network consists of. A neural network is made up of a large number of artificial neurons, called units, arranged in a sequence of layers; several types of layers are available in an artificial neural network.

  • A neural network consists of three types of layers, as follows:

    • Input Layer
      • The input layer accepts data in the various formats provided by the programmer.

    • Hidden Layer
      • The hidden layer sits between the input and output layers and performs all the calculations needed to find hidden features and patterns.

    • Output Layer
      • The input goes through a series of transformations in the hidden layers, and the result ultimately emerges through the output layer. Each artificial neuron takes in its inputs and computes their weighted sum, adding in any bias present. This computation is represented in the form of a transfer function.

      • Equation of the artificial neuron: $y = f\left(\sum_{i=1}^{n} w_i x_i + b\right)$, where the $x_i$ are the input signals, the $w_i$ their weights, $b$ the bias, and $f$ the activation function.
      • The equation above specifies that the weighted sum is passed as input to an activation (trigger) function to produce the output. The activation function decides whether a node should fire or not; only the nodes that fire pass their output on to the next layer. Different activation functions are available, chosen according to the type of task being performed.
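      • To make the equation concrete, here is a minimal sketch of a single artificial neuron in Python (the sigmoid activation and the particular weights, bias, and inputs are illustrative assumptions, not taken from the text above):

        import numpy as np

        def sigmoid(z):
            # A common activation ("trigger") function: squashes z into (0, 1).
            return 1.0 / (1.0 + np.exp(-z))

        def neuron_output(inputs, weights, bias):
            # Weighted sum of the input signals plus the bias...
            z = np.dot(weights, inputs) + bias
            # ...passed through the activation function to produce the output.
            return sigmoid(z)

        # Illustrative values: three input signals with their weights and a bias.
        x = np.array([0.5, 0.3, 0.2])
        w = np.array([0.4, 0.7, -0.2])
        b = 0.1
        print(neuron_output(x, w, b))  # a single output value in (0, 1)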
Structure of an Artificial Neural Network
[Image of an artificial neural network]
How Does an ANN Work?
  • An ANN consists of a wide range of processors operating in parallel and arranged in tiers. The first tier receives the raw input information, analogous to the optic nerve in human visual processing. Each successive tier receives the output from the tier preceding it rather than the raw input, in the same way that neurons further from the optic nerve receive signals from those closer to it. The final tier produces the output of the system.

  • Each processing node has its own small sphere of knowledge, including what it has seen and any rules it was originally programmed with or has developed for itself. The tiers are highly interconnected, which means each node in tier n is typically connected to many nodes in tier n-1 (its inputs) and to many nodes in tier n+1, to which it provides input data. There may be one or more nodes in the output layer, from which the generated answer can be read.

  • Artificial neural networks are notable for being adaptive: they modify themselves as they learn from initial training, and subsequent runs provide more information about the world. The most basic learning model is centered on weighting the input streams, which is how each node weights the importance of the input data from each of its predecessors. Inputs that contribute to getting correct answers are weighted higher.
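
  • To illustrate the tier-to-tier flow described above, the following minimal Python sketch (the layer sizes, random weights, and tanh activation are illustrative assumptions) passes raw input through one hidden tier to an output tier, each tier consuming only the previous tier's output:

      import numpy as np

      rng = np.random.default_rng(0)

      def tier(x, W, b):
          # Each node forms a weighted sum over every node of the previous tier.
          return np.tanh(W @ x + b)

      # Illustrative sizes: 4 raw inputs -> 5 hidden nodes -> 2 output nodes.
      W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)
      W2, b2 = rng.normal(size=(2, 5)), np.zeros(2)

      x = rng.normal(size=4)   # the first tier receives the raw input
      h = tier(x, W1, b1)      # the second tier sees only the first tier's output
      y = tier(h, W2, b2)      # the final tier produces the system's output
      print(y)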
How Does an ANN Learn?
  • Typically, an ANN is first trained by being fed large amounts of data, such as images of actors, non-actors, masks, statues, and animal faces. Training means providing the input and telling the network what the output should be: each entry is accompanied by the appropriate label, such as a name, or a tag like non-actor or non-human. Providing the answers allows the model to adjust its internal weights and learn how to do its job better.

  • Rule definition and decision making work as follows: each node decides what to send on to the next tier based on its inputs from the previous tier, and neural networks use different principles to do this, including gradient-based training, fuzzy logic, genetic algorithms, and Bayesian methods. The network can also be given some basic rules about the relationships between objects in the data being modeled (a minimal sketch of gradient-based training appears at the end of this section).

  • For example, a facial recognition system might be instructed: "Eyebrows are above the eyes," "Whiskers are under the nose," or "Whiskers are next to or above the mouth." Preloaded rules can speed up training and make the model more powerful sooner, but they are also based on assumptions about the nature of the problem, and those assumptions may prove irrelevant and useless, or incorrect and counterproductive. The decision about which rules, if any, to build in is therefore crucial.

  • Furthermore, the assumptions people make when training algorithms can lead neural networks to amplify cultural biases. Skewed data sets are a constant challenge in training systems that find answers on their own by recognizing patterns in data. If the data feeding the algorithm is not neutral (and hardly any data is), the machine propagates the bias.
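
  • The following minimal Python sketch shows the training idea from this section for a single neuron (the toy labeled data, logistic activation, and gradient-descent update are illustrative assumptions, not a method prescribed by the text):

      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      # Toy labeled data: each row is an input; y holds the answers we tell
      # the network (here the target is the logical AND of the two inputs).
      X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
      y = np.array([0.0, 0.0, 0.0, 1.0])

      w = np.zeros(2)  # the internal weights the model adjusts
      b = 0.0
      lr = 0.5         # learning rate

      for _ in range(5000):
          pred = sigmoid(X @ w + b)   # the network's current guesses
          grad = pred - y             # how wrong each guess is
          # Inputs that contributed to wrong answers get their weights corrected.
          w -= lr * (X.T @ grad) / len(y)
          b -= lr * grad.mean()

      print(np.round(sigmoid(X @ w + b), 2))  # approaches [0, 0, 0, 1]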
Types of Neural Networks
  1. Feedforward Neural Network:
    • The feedforward neural network is the purest form of artificial neural network. Information in this kind of network flows in only one direction, from the input layer (where data enters) through any hidden layers, which may or may not be present depending on the requirements, to the output layer (where data exits); no signals are fed back from later layers to earlier ones. Feedforward neural networks have many uses, such as speech recognition and computer vision; they are easy to maintain and respond well to noisy data, for example in image recognition.

  2. Radial Basis Function Neural Network:
    • A radial basis function (RBF) network has two layers and uses functions whose value depends on the distance of a point from a center. In the first layer, the input features are passed to the radial basis functions, and the outputs of this layer are then combined in the next layer to compute the final result. One noted application of radial basis function networks is in the power restoration systems of an electrical grid, where the power supply must be restored as reliably and quickly as possible after a failure.
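
    • A minimal Python sketch of the radial-basis idea, assuming Gaussian basis functions and hand-picked centers (both illustrative choices):

      import numpy as np

      def rbf(x, center, width=1.0):
          # The response depends only on the distance of x from the center.
          return np.exp(-np.sum((x - center) ** 2) / (2 * width ** 2))

      # First layer: each unit measures the distance to one (illustrative) center.
      centers = np.array([[0.0, 0.0], [1.0, 1.0]])
      # Second layer: a weighted sum of the radial responses.
      out_weights = np.array([0.8, -0.3])

      x = np.array([0.9, 1.1])
      hidden = np.array([rbf(x, c) for c in centers])
      print(out_weights @ hidden)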

  3. Kohonen Self-Organizing Neural Network:
    • In this neural network, a discrete map is built from input vectors of arbitrary dimension. The map, which can have one or two dimensions, is trained to create its own organization of the training data, and the weights of the neurons change depending on their values.

    • The position of a neuron on the map does not change during training; it remains constant. In the first phase of the self-organization process, each neuron is assigned a small weight and presented with the input vector. The winning neuron is the neuron closest to the point; in the second phase, other neurons also move towards the point along with the winning neuron.

    • The winning neuron is the one with the shortest distance, where the Euclidean distance is used to measure the distance between a neuron and the point. Each neuron represents a cluster, and the clustering of all the points takes place over many iterations.

    • One of the leading uses of the Kohonen neural network is recognizing patterns in data. It is also used in medical analysis to classify diseases more precisely: the data are clustered into different categories after their trends have been analyzed.
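
    • A minimal Python sketch of one self-organizing training step, assuming a small one-dimensional map, random initial weights, and a crude neighborhood rule (all illustrative simplifications of a full Kohonen map):

      import numpy as np

      rng = np.random.default_rng(1)
      weights = rng.random((5, 2))  # a 1-D map of 5 neurons over a 2-D input space
      lr = 0.1

      def train_step(x):
          # The winning neuron has the shortest Euclidean distance to the point.
          winner = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
          # The winner moves toward the point; its map neighbors follow more weakly.
          for i in range(len(weights)):
              pull = 1.0 if i == winner else 0.5 if abs(i - winner) == 1 else 0.0
              weights[i] += lr * pull * (x - weights[i])

      for point in rng.random((200, 2)):
          train_step(point)
      print(weights)  # each neuron now stands for a cluster of nearby points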

  4. Recurrent Neural Network:
    • The principle of the recurrent neural network is to feed the output of a layer back into its input, which helps in predicting the outcome of the layer. During computation, each neuron acts as a memory cell, retaining some information as the network moves to the next time step.

    • This feedback is what makes the network recurrent: work on the next step begins with the data set aside from the previous one. The prediction improves as errors are corrected, with small modifications at each amendment helping to produce the correct output. The learning speed is how quickly the network can get from a wrong prediction to a faultless one.

    • Recurrent neural networks have many uses, for example in text-to-speech models. The recurrent neural network evolved for supervised learning without requiring explicit teaching instructions.
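
    • A minimal Python sketch of the recurrence (the sizes and random weights are illustrative); the hidden state h is the "memory" each time step retains for the next one:

      import numpy as np

      rng = np.random.default_rng(2)
      W_x = rng.normal(scale=0.5, size=(3, 4))  # input -> hidden
      W_h = rng.normal(scale=0.5, size=(3, 3))  # hidden -> hidden (the feedback loop)

      h = np.zeros(3)  # the memory starts out empty
      for x in rng.normal(size=(6, 4)):  # a sequence of 6 input vectors
          # The layer's previous output is fed back in alongside the new input.
          h = np.tanh(W_x @ x + W_h @ h)
      print(h)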

  5. Convolutional Neural Network:
    • In this type of neural network, each neuron starts with learnable weights and biases. Image processing and signal processing are among its applications in the field of computer vision, where it has largely taken over from classical techniques such as those in OpenCV.

    • Images are processed in parts to ease the network's arithmetic, and photos are recognized from the properties of these batched inputs. During the computation, images are converted from the HSI or RGB scale into gray levels, and after this conversion they are classified into several categories; edges are detected from changes in pixel value. Convolutional neural networks achieve a very high level of accuracy in image classification, which is one of the primary reasons they dominate computer vision techniques.
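
    • A minimal Python sketch of the core convolution step, using a synthetic grayscale image and a classic hand-written edge-detection kernel (both illustrative assumptions):

      import numpy as np

      def convolve2d(image, kernel):
          # Slide the kernel across the image; each output pixel is a weighted
          # sum of the pixels under the kernel, so sharp changes stand out.
          kh, kw = kernel.shape
          H, W = image.shape
          out = np.zeros((H - kh + 1, W - kw + 1))
          for i in range(out.shape[0]):
              for j in range(out.shape[1]):
                  out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
          return out

      # Synthetic grayscale image: dark left half, bright right half.
      image = np.zeros((6, 6))
      image[:, 3:] = 1.0

      # A Sobel-like kernel that responds to vertical edges.
      kernel = np.array([[-1, 0, 1],
                         [-2, 0, 2],
                         [-1, 0, 1]])

      print(convolve2d(image, kernel))  # strong responses where pixel values change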

  6. Modular Neural Network:
    • In this type of neural network, many independent networks contribute together to the result. Each of these networks performs its own subtask and is given its own set of inputs, unique compared with those of the other networks. There is no signal exchange or interaction between these networks while carrying out the task.

    • The complexity of a problem is reduced when it is solved with these modular networks, because they break the large computational process down into small components. Computing speed also improves, since the number of connections is reduced and the networks no longer need to interact with one another.

    • The total processing time depends on how many neurons participate in the calculation of the results.
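
    • A minimal Python sketch of the modular idea, with two independent (illustrative) subnetworks that never exchange signals; each gets its own slice of the input, and only their results are combined:

      import numpy as np

      rng = np.random.default_rng(3)

      def make_module(n_in, n_out):
          # Each module is its own small network with private weights.
          W = rng.normal(size=(n_out, n_in))
          return lambda x: np.tanh(W @ x)

      module_a = make_module(2, 2)  # handles one subtask
      module_b = make_module(2, 2)  # handles another, independently

      x = rng.normal(size=4)
      # No interaction between the modules: each sees only its own inputs,
      # and an intermediary simply concatenates the results.
      print(np.concatenate([module_a(x[:2]), module_b(x[2:])]))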

  7. Long Short-Term Memory Networks:
    • In 1997, Hochreiter and Schmidhuber introduced a neural network called the long short-term memory (LSTM) network. Its primary purpose is to memorize things for long periods in an explicitly defined memory cell. Values stay in the memory cell unless the "forget gate" prompts the cell to discard them.

    • New material is added to the memory cell through the "input gate," and the next hidden state of the cell is passed along as vectors determined by the "output gate." Composing primitive music, writing in the style of Shakespeare, and learning complex sequences are some of the uses of LSTMs.
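
    • A minimal Python sketch of one LSTM step, showing the forget, input, and output gates (the layer sizes and weights are illustrative; biases are omitted for brevity):

      import numpy as np

      rng = np.random.default_rng(4)
      n_in, n_hid = 4, 3

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      # One weight matrix per gate, acting on [input, previous hidden state].
      W_f, W_i, W_o, W_c = (rng.normal(scale=0.5, size=(n_hid, n_in + n_hid))
                            for _ in range(4))

      def lstm_step(x, h, c):
          xh = np.concatenate([x, h])
          f = sigmoid(W_f @ xh)  # forget gate: what to erase from the memory cell
          i = sigmoid(W_i @ xh)  # input gate: what new material to store
          o = sigmoid(W_o @ xh)  # output gate: what to pass along
          c = f * c + i * np.tanh(W_c @ xh)  # the memory cell keeps the rest
          h = o * np.tanh(c)                 # the next hidden state
          return h, c

      h, c = np.zeros(n_hid), np.zeros(n_hid)
      for x in rng.normal(size=(5, n_in)):
          h, c = lstm_step(x, h, c)
      print(h, c)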

Mr. Ellis Tarmaster