When you first heard the term ‘deep learning’, you may have assumed it was the same thing as machine learning. The two are closely related, but they are not the same.

**Deep learning** is a type of machine learning and AI. It is considered an advanced subfield of machine learning that uses multi-layered neural networks (composed of nodes, or neurons) to learn progressively. These intermediate layers apply successive transformations to the data.

Before diving deeper into neural networks, let’s go over some important math concepts in machine learning.

A **function** defines a relationship between an independent variable and a dependent variable. Functions are generally written as y = f(x), where the function takes some input x and produces an output y.

A **derivative** measures the change in y produced by a small change in x. In other words, it is the slope of a function at a point, or the rate of change at a single point. Again, the slope is the change in y divided by the change in x.
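These two ideas can be sketched in a few lines of code. Here is a small, hypothetical example (the function f and the step size h are illustrative, not from the article) that estimates the derivative of f(x) = x² at a point by taking a small change in x:

```python
def f(x):
    # An example function y = f(x)
    return x ** 2

def derivative(f, x, h=1e-6):
    # Slope at a point: change in y divided by a small change in x
    return (f(x + h) - f(x)) / h

# The derivative of x^2 is 2x, so at x = 3 the estimate should be close to 6
print(derivative(f, 3.0))
```

Shrinking h makes the estimate approach the true slope, which is exactly the "rate of change at a single point" described above.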

Neural networks allow us to model nonlinear and complex relationships within data. Since many real-world problems involve such relationships, neural networks prove especially useful. The accuracy of neural networks also tends to keep improving as more data becomes available, which can’t be said of many traditional machine learning algorithms, whose performance often plateaus.

The most basic form of a neural network is the **perceptron**. It is essentially a linear machine-learning algorithm for binary classification tasks.

Think of a linear function y = mx + b that you might have learned in math class; that is similar to what a perceptron is doing. In the image above, the importance of the inputs x1, x2, and x3 is determined by the respective weights w1, w2, and w3 assigned to each input. The formula is output = w1x1 + w2x2 + w3x3. If the output is above a certain threshold value, the perceptron outputs a 1; otherwise, it outputs a 0 (an example of binary classification).
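The weighted-sum-plus-threshold rule above can be written as a short forward pass. This is a minimal sketch; the input values, weights, and threshold are made-up numbers for illustration:

```python
def perceptron(inputs, weights, threshold=0.0):
    # Weighted sum: w1*x1 + w2*x2 + w3*x3
    output = sum(w * x for w, x in zip(weights, inputs))
    # Binary decision: 1 if above the threshold, else 0
    return 1 if output > threshold else 0

x = [1.0, 0.5, -1.0]   # example inputs
w = [0.4, 0.6, 0.2]    # example weights

# Weighted sum = 0.4 + 0.3 - 0.2 = 0.5, which is above 0, so the output is 1
print(perceptron(x, w))
```

Changing the weights changes which side of the threshold a given input lands on, which is why training a perceptron amounts to adjusting its weights.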

Obviously, this simple model is not ideal. A better model uses multiple layers: an input layer, one or more hidden layers, and an output layer. The **input layer** holds the input vector (or list), so its size matches the dimensions of the input. The **hidden layer** consists of intermediate nodes that divide the input space into different regions. Think of the hidden layer as a function f(x) that transforms an input into a given output. The **output layer** is simply the output of the network, i.e., the network’s final decision.
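The layered structure described above can be sketched as a forward pass through a tiny network. Everything here is illustrative: the layer sizes, weights, and biases are made up, and the sigmoid activation is one common (but not the only) choice:

```python
import math

def sigmoid(z):
    # Squashes any number into the range (0, 1)
    return 1 / (1 + math.exp(-z))

def layer(inputs, weights, biases):
    # Each node computes a weighted sum of the inputs plus a bias,
    # then passes the result through the activation function
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -0.2]                                            # input layer (2 features)
hidden = layer(x, [[0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1])   # hidden layer (2 nodes)
output = layer(hidden, [[0.7, -0.5]], [0.2])               # output layer (1 node)
print(output)
```

Each layer is just a function applied to the previous layer’s output, which is why the hidden layer can be thought of as an f(x) transforming the input on its way to the final decision.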

In Deep Learning Parts 2 and 3, we will learn more about how neural networks work.