# Introduction

Element-wise operations were covered in the previous post, Maths in a Neural Network: Element-wise. This post focuses on rewriting all the equations from that post in vectorized form.

# 1 Feed-forward

Let’s consider a 2-3-2 network.

## 1.1 Element-wise operations

## 1.2 Vectorization

Applying equation (1):

Applying equation (2):

These can be generalized for more than 3 layers:
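As a concrete sketch of the vectorized forward pass, here is a minimal NumPy implementation for the 2-3-2 network. It assumes column-vector activations, sigmoid units, and the convention a^l = σ(W^l a^(l−1) + b^l); the random initialization and variable names are illustrative, not taken from the post:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# 2-3-2 network: one weight matrix and bias vector per weight layer
sizes = [2, 3, 2]
W = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
b = [rng.standard_normal((m, 1)) for m in sizes[1:]]

def feed_forward(x, W, b):
    """Vectorized forward pass: a^l = sigmoid(W^l a^(l-1) + b^l)."""
    a = x
    for Wl, bl in zip(W, b):
        a = sigmoid(Wl @ a + bl)
    return a

x = np.array([[0.5], [-1.2]])   # input as a column vector
y_hat = feed_forward(x, W, b)
print(y_hat.shape)              # (2, 1)
```

The same loop generalizes to any number of layers, since each layer is just one matrix-vector product plus a bias.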

# 2 Weight Update

Let’s consider a 2-3-2 network.

## 2.1 Element-wise operations

For output layer:

For hidden layers:

## 2.2 Vectorization

For output layer:

Let's vectorize the weight-update equation by applying equation (5):

Let's represent the weights as a matrix and apply equation (6):

Where:

Define the backpropagated value delta:
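As a hedged sketch of this definition, assuming a squared-error cost C, sigmoid activation σ, and column-vector activations, the output-layer delta and the resulting weight gradient take the standard form:

```latex
\delta^{L} = (a^{L} - y) \odot \sigma'(z^{L}),
\qquad
\frac{\partial C}{\partial W^{L}} = \delta^{L}\,(a^{L-1})^{\top}
```

where ⊙ denotes the element-wise (Hadamard) product, so the whole output layer is updated with a single outer product.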

For hidden layers:

Applying equation (7):

Let's represent the weights as a matrix and apply equation (8):

Define the backpropagated value delta:
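Under the same assumed conventions (squared-error cost, sigmoid σ, column activations), the hidden-layer delta propagates backwards through the transposed weight matrix:

```latex
\delta^{l} = \bigl((W^{l+1})^{\top}\,\delta^{l+1}\bigr) \odot \sigma'(z^{l}),
\qquad
\frac{\partial C}{\partial W^{l}} = \delta^{l}\,(a^{l-1})^{\top}
```

The transpose is what routes each output error back along the same connections it travelled forward on.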

These can be generalized to more than 3 layers, with L − 1 weight layers in total:

Output layer:

Hidden layer:

# 3 Summary

## 3.1 Feed-forward

## 3.2 Weight update

Output layer:

Hidden layer:

# Next

- Maths in a Neural Network: Element-wise
- Maths in a Neural Network: Vectorization
- Code a Neural Network with Numpy
- Maths in a Neural Network: Batch Training
