A beginner’s guide to understanding neural networks
Don’t you just love when everything is convenient?😁
Call me lazy, but I love it when the tedious things I really don’t want to do are done for me. Could you imagine having to sort ALL your emails into important and spam by hand 📧? I’m so grateful that my inbox does that for me automatically.
Do you remember when people used to use maps 🗺 to figure out where they were going?
I still remember how, when I was young, my dad used to pull out this HUGE map of Toronto every time we wanted to go somewhere new. The thing is, the map didn’t tell us how much traffic was on each route, so it was pretty much a guessing game to figure out which route would get us there the fastest. Now we just open the Google Maps app on our phones, put in the location, and know exactly how and when we will get there.📍
Then there’s the convenience of something as simple as Face ID on my phone 📱. I used to forget my phone password all the time, but now I don’t have to worry about that ever again.
These things are all woven into our daily lives without us even noticing them, but in reality, we should be really grateful that our lives have been made this easy.
Do you want to know something cool? ALL the examples I mentioned above were built with one technology…
Artificial Intelligence! 🤖
There are so many more applications of this technology that can make our lives even easier.
For example, I don’t remember the last time I actually did my English homework. My teacher thinks that I do it every day but I’ll let you in on a little secret, I get a robot to do it for me!
Don’t believe me?
Go to Openai.com in your browser and use their playground tool. You can use it to generate answers for any prompt you give it, like that English essay you really don’t want to write (you’re welcome, all you high school students).
Look at this cool text the AI wrote using the playground tool:
What’s even cooler is that this playground tool and all these other amazing features on this website are completely made using AI technology. 🤯
All these amazing AI applications made me want to dive deeper and understand how software and algorithms are capable of all of this. This is when I discovered…
Neural Networks! 🧠
Neural networks are models that look at data (inputs 🔢) and train themselves to recognize patterns within this data. Using these patterns they are able to figure out the set of calculations they need to make in order to predict outputs.
I like to think of neural networks as robots that are learning to make new things, like a chair for example. Instead of following step-by-step instructions from humans on how to make chairs, the robots are given examples of chairs and try to figure out for themselves how to turn inputs like wood and cotton into outputs: the chairs! 🪑
If neural networks are the technology used to develop all these things like the Google Maps system, Face ID, email sorting, etc., then this technology must be pretty complicated, right? Well, not if you break it down into parts.
Let’s use this diagram of a neural network model to do so:
Input Layer:
Every layer of a neural network is made up of these things called “nodes”, which are represented by the circles in the diagram 🟢 🟣 🔴. Each node in the input layer represents a different feature of the input.
Remember how I talked about your inbox sorting through all your emails? The input nodes of the neural network that sorts your emails would be the features of each email, such as the sender, word count, vocabulary used, etc.
Output Layer:
Let’s skip straight to the output layer next. In a simple model like this one, the output layer consists of a single node, and as you can guess, that node is the output or prediction that your model makes.
Thinking back to your emails, the output layer of that neural network would tell us whether that email has been classified as spam or not.
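To make this concrete, here’s a tiny Python sketch of what the input and output layers hold for a single email. The feature names and numbers are ones I made up for illustration; a real spam filter would use many more features.

```python
# A made-up email, described only by numbers (one number per input node).
email_features = {
    "sender_is_in_contacts": 0,   # 0 = no, 1 = yes
    "word_count": 120,
    "num_spammy_words": 7,        # e.g. "FREE", "WINNER", "CLICK NOW"
}

# The input layer is just this list of numbers...
inputs = list(email_features.values())

# ...and the output layer is a single number: 1 = spam, 0 = not spam.
prediction = 1  # pretend a trained model produced this

print(inputs, "->", "spam" if prediction == 1 else "not spam")
```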
Hidden Layer:
So we understand what the input and output layers do, but we skipped a step: what is this special layer in the middle? 🤔
The hidden layer of a neural network is where all the magic happens 🪄; it’s where all the calculations take place to turn the outputs of one layer into the inputs of the next!
This process looks something like this:
“What the hell am I looking at?!!” 😠
That’s the first thing that I said when I saw that diagram, and I’m sure many of you were thinking the same thing.
Don’t worry, it actually makes a lot of sense.
The way that neural networks make predictions is by using a Line of Best Fit 📈. This line is determined by the training that a neural network model goes through. The training process is more or less composed of these 5 steps:
- The model is given a data set based on one of the input nodes
- This data set is plotted on a graph
- The model finds a line of best fit
- This line of best fit is used to make predictions
- The process is repeated until a line of best fit is found for all the inputs
When you think of it like that, it’s pretty simple, right?
The way the model gets this line of best fit is by using linear regression 📉.
Linear regression is this super boring math concept that involves finding errors and using derivatives, which literally no one reading this article wants to learn about (trust me). For now, we’re going to say that linear regression is when the model guesses and checks to find the line that is closest to all the data points in the dataset we used.
A line of best fit is linear, which means it is in the form y = mx + b. The m value is important because that is the weight, and the b value is also important because that is our bias.
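If you’d like to see what finding a line of best fit looks like in practice, here’s a minimal Python sketch using NumPy. The data points are made up, and np.polyfit quietly does all the boring error-and-derivative math for us:

```python
import numpy as np

# Made-up data: word count of an email (x) vs. a "spamminess" score (y).
x = np.array([10, 40, 80, 120, 200], dtype=float)
y = np.array([0.2, 0.9, 1.8, 2.6, 4.1])

# Fit a straight line y = m*x + b to the points (a degree-1 polynomial).
m, b = np.polyfit(x, y, 1)
print(f"weight (m) = {m:.3f}, bias (b) = {b:.3f}")

# Use the line of best fit to make a prediction for a new input.
new_x = 150
print(f"prediction for x = {new_x}: {m * new_x + b:.2f}")
```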
So far, we’ve got the weight and bias for one input node, but we want the weights and biases of every single input node.
“How do we do that?”
We find the line of best fit for every single input node. This is where the last step comes in: repeat! 🔁
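Sticking with this simplified one-line-per-node picture, the “repeat” step could look something like this in Python (again, the dataset is completely made up):

```python
import numpy as np

# Made-up dataset: three features of an email, plus a spam score for each email.
features = {
    "word_count":       np.array([10, 40, 80, 120, 200], dtype=float),
    "num_spammy_words": np.array([0, 1, 3, 5, 9], dtype=float),
    "num_links":        np.array([0, 0, 2, 3, 6], dtype=float),
}
spam_score = np.array([0.1, 0.5, 1.5, 2.4, 4.0])

# Repeat the line-of-best-fit step once per input node (one weight and bias per feature).
weights, biases = {}, {}
for name, x in features.items():
    m, b = np.polyfit(x, spam_score, 1)
    weights[name], biases[name] = m, b
    print(f"{name}: weight = {m:.3f}, bias = {b:.3f}")
```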
Okay, so you understand how to get the weights and biases, but what is the point of these values?
Remember that really complicated diagram? Well, all those W’s are the weights, and the value in the green circle is the bias we just calculated.
Do you know what that means? We can calculate the sum of the weighted inputs and put it through the activation function! 🥳
Yes, I know that just sounds like a bunch of gibberish, so let’s make it a bit more fun: let’s bake a cake! 🎂
For our cake, we are going to use:
- 4 eggs (50g per egg) = 200g
- 3 cups of flour (100g per cup) = 300g
- 10 tablespoons of sugar (20g per tablespoon) = 200g
- 2 sticks of butter (80g per stick) = 160g
TOTAL = 860g
In order for us to sell this cake, it needs to weigh 1000 grams or more (I know, this cake is HUGE!). While we were making our cake, we accidentally added an extra cup of flour and 1 extra egg, which is an extra 150g, but hopefully this doesn’t affect our cake too much.
Let’s put all these values together:
This diagram looks familiar…🤔
Through this cake example, it becomes really easy to understand how a neural network goes from one layer to the next.
First, all the input values are multiplied by their weights and added together (860g). Then the bias (150g) is added to this value, and you get the sum of weighted inputs (1010g).
The sum of weighted inputs is then passed through an activation function, which in our simple example is basically a threshold that tells you whether this value is high enough to go to the next layer. In our case, the activation function checks whether the cake weighs at least 1000g.
If the value is higher than the threshold, it “activates” the next layer, and if it is lower, the next layer does not get activated. In the cake situation, the next layer is whether the cake is sold or not. Since our cake weighed more than 1000g, it was sold!
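If you’d rather see the cake math as code, here’s the same calculation in Python. The weights are the grams per ingredient, the bias is our 150g slip-up, and the activation function is just the 1000g threshold:

```python
# Ingredient amounts (the inputs) and grams per unit (the weights).
inputs  = [4, 3, 10, 2]       # eggs, cups of flour, tbsp of sugar, sticks of butter
weights = [50, 100, 20, 80]   # grams per egg, per cup, per tbsp, per stick

bias = 150                    # the extra egg and extra cup of flour we added by accident

# Sum of weighted inputs: multiply each input by its weight, add them up, then add the bias.
weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias   # 860 + 150 = 1010

# Activation function: a simple threshold at 1000g.
def activation(total_grams):
    return total_grams >= 1000

print(weighted_sum, "-> sold! 🎂" if activation(weighted_sum) else "-> not sold 😢")
```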
If you made it this far, give yourself a pat on the back: you just learned how a neural network works! 🎉
Okay Cool… What Now?
Remember at the start of the article when I was talking about technology that has made our lives more convenient? Well, neural networks are used to build all of those things!
Face ID?
Facial recognition on your phone uses a type of neural network called a convolutional neural network (CNN) 🧠. These neural networks are really good at image processing.
“How does it work?”
Let’s use my friend Bob as an example. Bob just bought the new iPhone 14 📱 and wants to set up his Face ID. He goes to his settings, where the phone asks him to scan his face. Here, the phone is basically splitting his face into small sections, and each section is assigned a value. These are the values of our input nodes.
We know that no two people have exactly the same face, which means these values will be unique for everyone!
The phone uses these sections as inputs to train the neural network model, and after a minute or two, the model will be tuned to specifically recognize Bob’s face. This process of the phone scanning your face is essentially the same training process we learned about earlier!
There is only one small thing we need to change. The example I used was a 2D image. In real life, we have three dimensions, which is why, when you watch those movies with really high-tech face scans, they might look something like this:
Even though it is 3D and the sections aren’t perfect squares anymore, the process is exactly the same.
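To give you a rough idea of what a CNN looks like in code, here’s a minimal sketch using Keras. This is NOT how Apple’s Face ID actually works; it’s just a toy model that takes small grayscale face images and predicts “Bob” or “not Bob”, with image and layer sizes I picked arbitrarily.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Toy CNN: takes a 96x96 grayscale image and outputs the probability that it's Bob.
model = keras.Sequential([
    layers.Input(shape=(96, 96, 1)),
    layers.Conv2D(16, (3, 3), activation="relu"),  # look for small patterns (edges, curves)
    layers.MaxPooling2D((2, 2)),                   # shrink the image, keep the strongest signals
    layers.Conv2D(32, (3, 3), activation="relu"),  # combine small patterns into bigger ones
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),           # the hidden layer of calculations
    layers.Dense(1, activation="sigmoid"),         # the output node: 1 = Bob, 0 = not Bob
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Training would look something like this (face_images and labels are hypothetical):
# model.fit(face_images, labels, epochs=10)
```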
How about Auto-Correct and Grammar Checkers? 📝
Other cool technologies that make my life easier are auto-correct and Grammarly. I can’t count the number of times auto-correct has saved me the embarrassment of sending a text message full of gibberish. Auto-correct has literally made my text messages go from this:
To this:
I’m sure that most of the sentences I wrote in this article have been fixed by Grammarly in some way.
Auto-correct and grammar checkers use something called Natural Language Processing (NLP), which is often built using Recurrent Neural Networks (RNNs) 🧠. RNNs are neural networks that are really good at recognizing patterns in sequences, like the structure of a sentence.
Let’s say I am writing a text message starting with the word “An”. To train the neural network, I would give it a bunch of sentences that start with the word “An” and over time it should recognize these patterns:
- The next word will be a noun (An elephant…) OR
- The next word will be an adjective (An amazing…)
- Either way, the next word will start with a vowel
Now the neural network can keep recognizing patterns as more words are added to the sentence. With enough training, it should be able to understand sentence structure almost as well as a human, and even start to offer predictions for what you should say next!
You know those three suggestions for the next word that pop up while you’re texting 💬? Those are the predictions of a neural network, and oftentimes the prediction flows well with the sentence you’re writing.
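Here’s a heavily simplified sketch of a next-word predictor in Keras. The vocabulary size and sequence length are numbers I made up, and real keyboards use much bigger models and far more training data:

```python
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 5000   # how many different words the model knows (made-up number)
SEQ_LEN = 5         # how many previous words it looks at to guess the next one

model = keras.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB_SIZE, 64),                # turn each word into a vector of numbers
    layers.SimpleRNN(128),                           # read the words in order, remembering the pattern
    layers.Dense(VOCAB_SIZE, activation="softmax"),  # one score per word: "how likely is this next?"
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# After training on lots of sentences, you'd take the 3 highest-scoring words
# as the suggestions above your keyboard:
# probs = model.predict(last_five_words)[0]
# top_three = probs.argsort()[-3:][::-1]
```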
So neural networks can understand sentence structure, but what if the sentence has grammatical or spelling errors:
This sentence has the right structure, but it isn’t written very well…😕
This is where NLP comes into play. NLP is simply how computers make sense of the words we say and write. You could also say that NLP helps computers understand language inputs, figuring out which words make sense together and which ones don’t.
To figure this out, they look at the words AROUND the word that doesn’t make sense, and based on their training, they replace it.
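To show the “look at the surrounding words” idea, here’s a deliberately silly, hand-written Python rule. A real grammar checker learns these patterns from data with a trained NLP model rather than using hard-coded rules like this:

```python
def suggest_fixes(sentence):
    """Toy rule that looks at surrounding words (not a real NLP model)."""
    words = sentence.split()
    fixes = []
    for i, word in enumerate(words):
        next_word = words[i + 1] if i + 1 < len(words) else ""
        # "too" right before "the" or "a" is almost always meant to be "to".
        if word == "too" and next_word in {"the", "a"}:
            fixes.append((i, "too", "to"))
    return fixes

print(suggest_fixes("I am going too the store"))   # -> [(3, 'too', 'to')]
```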
What will the future hold? 👀
There are already SO many technologies that I use on a day-to-day basis that have been made using neural networks. My life has truly never been easier!
I’m still only 16 years old and have my whole life ahead of me, which makes me wonder how many MORE convenient new things will be invented in my lifetime. We are already on the verge of some pretty cool things like…
Self-driving Cars! 🚗
Driving is a complicated process to learn, so we need to break it down for our cars. Driving can be broken down into 2 components:
1. Analyzing the environment 🧐
2. Making decisions 🤝
This means that to make self-driving cars, we need them to be REALLY good at these two things. Lucky for us, we have neural networks.
While driving, we use our eyes to scan the environment 👁. So how do we give our car eyes? Using cameras, of course 📸! The data captured by these cameras can be used as inputs to our CNN.
Similar to how Face ID uses our faces as inputs, the objects scanned by the cameras are used as inputs to the neural network. At first, the neural network won’t make much of this information, but if we TRAIN it to recognize things like cars 🚗, street signs ⛔️, and traffic lights 🚦 by showing it a bunch of examples of these things, then it gets interesting.
So now that our cars can pick up on things like traffic lights and street signs, how are they going to use this information? They’re going to use it to train our neural networks of course! Essentially, we need to prepare our cars for all situations on the road so they can make decisions (or predictions) accordingly.
Let’s use an example. Let’s say our cameras spot a stop sign coming up 🛑. Without any training, our neural network is going to be like:
“cool, a red shape.”
But if we train it to stop about 1 foot behind this cool red shape every time it sees it, then our car will actually start to learn how to drive itself.
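Here’s a made-up sketch of what that decision step could look like in Python. The CNN itself is hidden behind pretend values, and the 1-foot rule and confidence threshold are just numbers I picked for illustration:

```python
def decide_action(detected_object, confidence, distance_ft):
    """Toy decision rule: stop roughly 1 foot behind a stop sign we're confident about."""
    if detected_object == "stop_sign" and confidence > 0.9:
        if distance_ft <= 1.0:
            return "STOP"
        return "slow down"
    return "keep driving"

# Pretend the CNN just looked at a camera frame and reported:
# "I'm 97% sure that's a stop sign, about 0.8 feet ahead."
print(decide_action("stop_sign", confidence=0.97, distance_ft=0.8))   # -> STOP
```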
This is one example of how we will have to train our cars, but eventually, we will train them to make the right decision in any scenario they may encounter while driving, which means they can drive all by themselves!
Self-driving cars are just one example of the innovations yet to come thanks to AI and neural network technology; the future of convenience is truly endless.
With that being said, one thing is for sure: I’m definitely looking forward to it! 😁