Neural Networks

In computer science, a neural network is a mathematical model inspired by the functioning of biological neural networks.

Example. In the human brain, neurons are interconnected by synapses, enabling us to reason and control every function and nerve in our body. Similarly, in computing, a neural network is made up of nodes (neurons) and edges (synapses) that connect these nodes. Neural networks are a fundamental concept in artificial intelligence.

A Practical Example of a Simple Neural Network

A neural network is typically represented graphically as a flow graph.

The graph consists of nodes connected by edges.

an example of a flow graph

The sequence of edges creates many possible paths between a starting node and an ending node.

The node is the neural network's unit where operations are performed. It can be a simple mathematical calculation (e.g., addition) or a complex algorithm.

Types of Nodes

There are three types of nodes.

  • Input Nodes. Located on the left, at the entrance of the neural network, these nodes receive the input data.
    input nodes
  • Hidden Nodes. These nodes sit in the middle of the network, neither at the entrance nor at the exit, which is why they are called hidden. Data processing occurs at these nodes. Groups of hidden nodes form the network's layers (or levels).
    hidden nodes in a neural network
  • Output Nodes. Located on the right, at the exit of the neural network, these nodes deliver the processed results.
    output nodes

Note. A simple neural network comprises only one layer of intermediate nodes. When there are multiple intermediate layers, it is referred to as a multilayer neural network (as in deep learning).
the difference between a simple neural network and a multilayer (deep) network
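To make the distinction concrete, here is a minimal sketch in Python that describes the two structures by their layer sizes; the specific sizes are hypothetical, chosen only for illustration.

    # Each number is the count of nodes in a layer, read from input (left) to output (right).
    # The sizes below are hypothetical.
    simple_network = [3, 4, 2]            # one hidden layer of 4 nodes: a simple neural network
    multilayer_network = [3, 8, 8, 8, 2]  # three hidden layers: a multilayer (deep) network

    def count_hidden_layers(layer_sizes):
        # Every layer except the first (input) and the last (output) is hidden.
        return len(layer_sizes) - 2

    print(count_hidden_layers(simple_network))      # 1
    print(count_hidden_layers(multilayer_network))  # 3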

How a Neural Network Functions

The edges carry data from one node to another within the neural network.

Information propagates from left to right (forward).

an example of a node with two edges

A node's incoming edges carry its input values (e.g., 4 and 2), while its outgoing edges transmit the result of the node's operation forward (e.g., 4 + 2 = 6).

Each node can be connected to multiple edges, both incoming and outgoing.
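To make the idea concrete, here is a minimal sketch in Python of a node whose operation is simple addition, as in the example above; the function name and the use of a plain list for the incoming values are choices made only for illustration.

    def addition_node(incoming_values):
        # The node's operation is addition: it sums the values arriving
        # on its incoming edges and sends the result forward.
        return sum(incoming_values)

    # Two incoming edges carry the values 4 and 2;
    # the outgoing edge transmits the result, 6.
    print(addition_node([4, 2]))  # 6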

Example. The first edges of the network (C1 and C2) introduce input values for the problem variables (instances). Intermediate edges (from C3 to C14) carry temporary data during processing. Finally, the last edges (C15 and C16) deliver the processed result.

Each edge has a synaptic weight (coefficient C) that modifies the data before delivering it to the destination node.

an example of an edge with weight

Here are some practical cases (a short numeric sketch follows the list):

  • If the edge's weight is greater than one (C > 1), the value is amplified.
  • If the weight is between zero and one (0 < C < 1), the value is reduced.
  • If the weight is negative (C < 0), the value's sign is inverted; depending on the magnitude, the value is also amplified (C < -1), reduced (-1 < C < 0), or left unchanged apart from the sign (C = -1).
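Here is a short numeric sketch of how a synaptic weight modifies the value travelling along an edge; the function name and the sample values are hypothetical, used only to illustrate the cases above.

    def apply_weight(value, weight):
        # An edge multiplies the value it carries by its synaptic weight C.
        return value * weight

    x = 10
    print(apply_weight(x, 2.0))   # C > 1: amplified                 -> 20.0
    print(apply_weight(x, 0.5))   # 0 < C < 1: reduced               -> 5.0
    print(apply_weight(x, -1.0))  # C = -1: sign inverted            -> -10.0
    print(apply_weight(x, -2.0))  # C < -1: inverted and amplified   -> -20.0
    print(apply_weight(x, -0.5))  # -1 < C < 0: inverted and reduced -> -5.0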

Propagation and Feedback

Generally, these edges have a unidirectional flow... but not necessarily.

Things get more complicated when considering propagation and edge orientation.

Types of Propagation

Propagation can occur forward or backward (feedback).

  1. Forward Propagation. The result of a node is transmitted to a node in the next layer, one step further to the right. This is the simple, acyclic case: in simple networks, data always flows from left to right (a minimal forward-pass sketch follows this list).
    a forward neural network
  2. Backward Propagation (feedback). In this case, the result is transmitted to one of the nodes in the previous layer, one to the left. Feedback is a feature of more complex networks, which can also become cyclic during data processing (recurrent neural networks).
    an example of feedback in a recurrent neural network
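As a rough illustration of forward propagation, the sketch below pushes an input left to right through two layers of weighted edges; the weights and layer sizes are hypothetical, and no activation function is applied, to keep the flow of values easy to follow.

    def forward(inputs, layers):
        # One weight matrix per layer of edges: propagation multiplies the
        # current values by the weights, moving from left to right.
        values = inputs
        for weights in layers:
            values = [
                sum(w * v for w, v in zip(row, values))  # weighted sum entering each node
                for row in weights
            ]
        return values

    layers = [
        [[0.5, 1.0], [2.0, -1.0], [1.0, 1.0]],  # 2 input nodes -> 3 hidden nodes
        [[1.0, 0.5, -1.0]],                     # 3 hidden nodes -> 1 output node
    ]
    print(forward([4, 2], layers))  # [1.0] with these hypothetical weights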

Complex Neural Networks with Feedback

The following flow graph shows the case of a recurrent and cyclic neural network.

an example of a complex recurrent neural network

Between nodes N1 and N2, there is a bidirectional edge (s1) that connects them, even though the nodes are on the same level (layer).

Nodes N3 and N5 each have a feedback edge (r1 and r2) to a node in the previous layer (N1 and N2).

Node N4 also has an edge (t1) connecting its own output back to its input (a self-loop).
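For the self-loop on N4, here is a minimal sketch of a recurrent node that feeds its previous output back in as an extra input at the next step; the weights and the update rule are assumptions made only for illustration.

    def recurrent_step(x, previous_output, w_in=0.8, w_feedback=0.5):
        # One update of a node with a feedback edge (self-loop):
        # the new output depends on the current input and on the previous output.
        return w_in * x + w_feedback * previous_output

    output = 0.0
    for x in [1.0, 2.0, 3.0]:        # a small sequence of inputs
        output = recurrent_step(x, output)
        print(output)                # 0.8, 2.0, 3.4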

Complex neural networks have several advantages.

An artificial neural network can tackle a problem by adapting to the difficulties and the situation it encounters.

In this case, the algorithm can go back and modify the weights of the edges to avoid more critical nodes.

This way, the algorithm adapts the neural network to the operational environment and chooses the best path depending on the current situation.

It's not tied to a single fixed schema and better handles unforeseen circumstances.
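To illustrate the idea of adjusting edge weights, here is a minimal sketch of an error-driven update on a single weight; the learning rate, target, and update rule are assumptions chosen only for the example, not a description of any specific training algorithm.

    # Adapt a single edge weight so the output moves closer to a desired target.
    # All numbers are hypothetical.
    x = 2.0           # input carried by the edge
    target = 6.0      # desired output
    weight = 0.5      # initial synaptic weight
    learning_rate = 0.1

    for step in range(5):
        output = weight * x                  # the edge's current effect
        error = target - output              # distance from the target
        weight += learning_rate * error * x  # nudge the weight to reduce the error
        print(step, round(output, 3), round(weight, 3))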

Note. This is a frequent process in human reasoning. When driving in traffic, we know where we want to go but are ready to change routes if we encounter unexpected obstacles. The same happens in problem-solving and inferential processes in a computer.

Types of Neural Networks

There are various categories of neural networks.

The main ones are:

  • Simple Neural Networks. Characterized by an input cell layer (input nodes), an output cell layer (output nodes), and only one intermediate layer of hidden cells (intermediate nodes).
    an example of a simple neural network
  • Deep Feed Forward (DFF). Characterized by a layer of input cells (input nodes), a layer of output cells (output nodes), and two or more layers of hidden cells (intermediate nodes). In machine learning, these networks form the basis of deep learning.
    an example of a two-layer neural network

Besides these categories, many others exist. I'll avoid listing them all to keep the topic concise.
