Everyone tries to forecast the future. Bankers need to predict the creditworthiness of their customers. Marketing analysts want to predict future sales. Economists want to predict economic cycles. And everybody wants to know whether the stock market will be up or down tomorrow. Now you can harness neural networks as a powerful forecasting tool with MS Excel.
Understanding neural networks is not difficult. This book is different from most of the books on this subject available on the market.
- No programming skills (C++, Java) required
- Minimal mathematical knowledge needed
- Rocket science made easy
The only requirement is familiarity with MS Excel. If you want to build a neural network-based forecasting model with MS Excel, then reading this book is a great way to start.
Now you can study at home with your own personal neural network model and perform practical experiments that help you fully understand how easy neural networks can be.
- Study neural networks through practical experiments.
- Design and evaluate your own neural networks.
- Observe a neural network's response graphically.
- Combine several neural networks to give advanced behaviour.
This book comes with 5 practical models that act as starting points, allowing you to experiment with neural network training and testing. All of these neural network models are built as MS Excel spreadsheets. The models include:
- Determining Risk for Credit Approval
- Sales Forecasting
- Predicting Dow Jones/Stock Weekly Prices
- Predicting Real Estate Value
- Classifying Flower Types
Once you are comfortable with the practicalities of using a neural network, you can simply tailor one of the pre-existing spreadsheets for your own use. However, if you need to develop your own unique model, this book provides the material you can reference to build one on your own.
If one of the accompanying neural network models is suitable and needs no customizing, then setting up your analysis is fairly simple. Your data is placed into the input field, and the neural network parameters are specified together with any goals or outputs. The neural network is then trained, after which it is shown the data to be analysed.
In effect, you act as a teacher for the neural network: you provide it with data and tell it the goals it should learn. The neural network then trains itself using the data and goals provided, and during training it can give feedback on how well it is doing. Once you are satisfied that it has trained sufficiently well, it is ready to make a prediction about some new data. Assuming the new data is derived from the same or similar sources as the training data, the neural network will recognize features consistent with its past learning and advise you of its evaluation or prediction.
What can neural networks do for me?
There is a wide range of problems that can be solved using neural networks. Typical problems range from investment analysis, gambling, and property analysis through to image and speech recognition. New applications for neural networks are being found all the time, and you just need some inventiveness and creativity to see whether your problem can be solved using this approach.
Instructions on how to build a neural network model with Excel are explained step by step through the 5 main sections shown below:
- Selecting and transforming data
- Designing the neural network architecture
- Performing the simple mathematical operations inside the neural network model
- Training the model
- Using the trained model for forecasting
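As a taste of the first section, "selecting and transforming data" usually means rescaling raw numbers into a small range before they are fed to the network. Below is a minimal Python sketch of min-max scaling to [0,1], a common transformation; the function name and sample figures are illustrative, not taken from the book's spreadsheets (where the same arithmetic would be done with cell formulas).

```python
# Illustrative min-max scaling of raw inputs to the range [0, 1].
# This is a common data transformation; the exact method used in the
# spreadsheet models may differ.
def min_max_scale(values):
    lo, hi = min(values), max(values)
    if hi == lo:                                  # avoid division by zero
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

weekly_sales = [120.0, 95.0, 150.0, 130.0]        # hypothetical raw data
scaled = min_max_scale(weekly_sales)
print(scaled)  # every value now lies between 0 and 1
```

In Excel, the equivalent formula for one cell would be `=(A1-MIN(A:A))/(MAX(A:A)-MIN(A:A))`.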
If you do not want to build the neural network manually, you can click here to try 4Cast XL, neural network-based software. With 4Cast XL, the task of building a neural network model is fully automated.
Theory And Technical Stuff
Neural networks are very effective when lots of examples must be analyzed, or when structure in the data must be uncovered but no single algorithmic solution can be formulated. When these conditions are present, neural networks are used as computational tools for examining data and developing models that help to identify interesting patterns or structures in the data. The data used to develop these models is known as training data. Once a neural network has been trained, and has learned the patterns that exist in that data, it can be applied to new data, thereby achieving a variety of outcomes. Neural networks can be used to:
- learn to predict future events based on the patterns that have been observed in the historical training data;
- learn to classify unseen data into pre-defined groups based on characteristics observed in the training data;
- learn to cluster the training data into natural groups based on the similarity of characteristics in the training data.
Many different neural network models have been developed over the last fifty years or so to achieve these tasks of prediction, classification, and clustering. In this book we will develop a neural network model that has successfully found application across a broad range of business areas. This model is called a multilayered feedforward neural network (MFNN) and is an example of a neural network trained with supervised learning.
In supervised learning, we feed the neural network training data that contains complete information about both the characteristics of the data and the observable outcomes. Models can then be developed that learn the relationship between these characteristics (inputs) and outcomes (outputs). For example, developing an MFNN to model the relationship between money spent during last week's advertising campaign and this week's sales figures is a prediction application. Another example is using an MFNN to model and classify the relationship between a customer's demographic characteristics and their status as a high-value or low-value customer. For both of these applications, the training data must contain numeric information on both the inputs and the outputs in order for the MFNN to generate a model. The MFNN is then repeatedly trained with this data until it learns to represent these relationships correctly.
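The requirement that training data pair numeric inputs with numeric outputs can be pictured as rows of a table. The sketch below uses made-up figures for the advertising example; in the book's models, these rows would simply be rows of an Excel sheet.

```python
# Hypothetical supervised training data for the advertising example:
# each row pairs a numeric input with the numeric outcome to be learned.
training_data = [
    # (last week's ad spend in $000s, this week's sales in $000s)
    (10.0, 120.0),
    (15.0, 150.0),
    ( 8.0, 105.0),
    (20.0, 185.0),
]

# Split into the inputs the network sees and the outputs it must learn
inputs  = [row[0] for row in training_data]
outputs = [row[1] for row in training_data]
print(len(inputs), len(outputs))  # one output target per input pattern
```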
For a given input pattern or data, the network produces an output (or set of outputs), and this response is compared to the known desired response of each neuron. For classification problems, the desired response of each neuron will be either zero or one, while for prediction problems it tends to be continuous-valued. Corrections are made to the weights of the network to reduce the errors before the next pattern is presented. The weights are continually updated in this manner until the total error across all training patterns falls below some pre-defined tolerance level. This learning algorithm is called backpropagation.
The backpropagation process
Forward pass: the outputs are calculated and the error at the output units is calculated.
Backward pass: the output unit error is used to alter the weights on the output units. Then the error at the hidden nodes is calculated (by back-propagating the error at the output units through the weights), and the weights on the hidden nodes are altered using these values.
The main steps of the back propagation learning algorithm are summarized below:
Step 1: Input training data.
Step 2: Hidden nodes calculate their outputs.
Step 3: Output nodes calculate their outputs on the basis of Step 2.
Step 4: Calculate the differences between the results of Step 3 and targets.
Step 5: Apply the first part of the training rule using the results of Step 4.
Step 6: For each hidden node, n, calculate its error derivative d(n).
Step 7: Apply the second part of the training rule using the results of Step 6.
Steps 1 through 3 are often called the forward pass, and Steps 4 through 7 are often called the backward pass; hence the name back-propagation.
For each data pair to be learned, a forward pass and a backward pass are performed. This is repeated over and over again until the error reaches a low enough level (or we give up).
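The seven steps above can be sketched in a few lines of Python for a tiny network with one hidden layer and a single sigmoid output. Everything here (function names, the learning rate, the sample data and target) is illustrative, not the book's spreadsheet formulas; the point is only to show one forward pass, one backward pass, and the error shrinking as the passes repeat.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One forward pass and one backward pass for a single input/target pair.
def train_pair(x, target, w_hidden, w_out, lr=0.5):
    # Steps 1-2: forward pass -- hidden nodes calculate their outputs
    h = [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for w in w_hidden]
    # Step 3: output node calculates its output from the hidden outputs
    y = sigmoid(sum(wo * hj for wo, hj in zip(w_out, h)))
    # Step 4: difference between the network output and the target
    err = target - y
    delta_out = err * y * (1.0 - y)
    # Step 6 (computed before the weights change): back-propagate the
    # output error through the old output weights to get hidden deltas
    delta_h = [h[j] * (1.0 - h[j]) * delta_out * w_out[j]
               for j in range(len(h))]
    # Step 5: apply the training rule to the output weights
    for j in range(len(w_out)):
        w_out[j] += lr * delta_out * h[j]
    # Step 7: apply the training rule to the hidden weights
    for j, w in enumerate(w_hidden):
        for i in range(len(x)):
            w[i] += lr * delta_h[j] * x[i]
    return err

# Weights start as small random values (the network "knows nothing")
random.seed(0)
w_hidden = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(2)]
w_out = [random.uniform(-0.5, 0.5) for _ in range(2)]

# Repeat forward + backward passes until the error is low enough
errors = [abs(train_pair([0.5, 0.8], 0.9, w_hidden, w_out))
          for _ in range(200)]
print(errors[0], errors[-1])  # the error shrinks with repeated passes
```

A real model would, of course, cycle through many training pairs and stop when the total error falls below the chosen tolerance.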
Calculations and Transfer Function
The behaviour of a NN (Neural Network) depends on both the weights and the input-output function (transfer function) that is specified for the units. This function typically falls into one of three categories:
For linear units, the output activity is proportional to the total weighted input.
For threshold units, the output is set at one of two levels, depending on whether the total input is greater than or less than some threshold value.
For sigmoid units, the output varies continuously but not linearly as the input changes. Sigmoid units bear a greater resemblance to real neurons than do linear or threshold units, but all three must be considered rough approximations.
It should be noted that the sigmoid curve is widely used as a transfer function because it has the effect of "squashing" the inputs into the range [0,1]. Other functions with similar features can be used, most commonly tanh, which has an output range of [-1,1]. The sigmoid function has the additional benefit of having an extremely simple derivative, which makes backpropagating errors through a feed-forward neural network straightforward. This is how the transfer functions look:
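The three unit types and the sigmoid's simple derivative can be written out directly. The sketch below is illustrative (the threshold value and function names are assumptions, not the book's); note that the derivative of sigmoid(x) is just s*(1-s), where s is the sigmoid output itself.

```python
import math

# Minimal sketches of the three unit types described above.
def linear(x):
    return x                              # output proportional to total input

def threshold(x, theta=0.0):
    return 1.0 if x > theta else 0.0      # one of two levels

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))     # squashes input into (0, 1)

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)                  # the extremely simple derivative

print(sigmoid(0.0))    # 0.5 -- the midpoint of the squashed range
print(math.tanh(2.0))  # tanh output always lies in (-1, 1)
```

In Excel, the sigmoid of a cell A1 is simply `=1/(1+EXP(-A1))`, and tanh is the built-in `=TANH(A1)`.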
To make a neural network perform some specific task, we must choose how the units are connected to one another (see Figure 1.1), and we must set the weights on the connections appropriately. The connections determine whether one unit can influence another; the weights specify the strength of that influence.
Typically, the weights in a neural network are initially set to small random values; this represents the network knowing nothing. As training proceeds, these weights converge to values that allow the network to perform a useful computation. Thus the neural network starts out knowing nothing and moves on to gain real knowledge.
To summarize, we can teach a three-layer network to perform a particular task by using the following procedure:
We present the network with training examples, which consist of a pattern of activities for the input units together with the desired pattern of activities for the output units.
We determine how closely the actual output of the network matches the desired output.
We change the weight of each connection so that the network produces a better approximation of the desired output.
The advantages of using artificial neural network software are:
- They are extremely powerful computational devices.
- Massive parallelism makes them very efficient.
- They can learn and generalize from training data, so there is no need for enormous feats of programming.
- They are particularly fault tolerant; this is equivalent to the "graceful degradation" found in biological systems.
- They are very noise tolerant, so they can cope with situations where normal symbolic systems would have difficulty.
- In principle, they can do anything a symbolic/logic system can do, and more.
Real life applications
The applications of artificial neural networks fall within the following broad categories:
Manufacturing and industry:
- Beer flavor prediction
- Wine grading prediction
- For highway maintenance programs
- Missile targeting
- Criminal behavior prediction
Banking and finance:
- Loan underwriting
- Credit scoring
- Stock market prediction
- Credit card fraud detection
- Real-estate appraisal
Science and medicine:
- Protein sequencing
- Tumor and tissue diagnosis
- Heart attack diagnosis
- New drug effectiveness
- Prediction of air and sea currents
"The interactive book is really handy for learning the content of the book. The dynamic examples illustrate the theories clearly and give a deep impression to readers."
-- Dr. Jonathan Choo, Kentucky State University
"I thought the content was very good and certainly more complete than most books on the market today. I particularly liked the idea of having practical examples that show how the networks can actually be used."
-- Berk Cyma, Metal Africa LLC, South Africa
"The templates are excellent. I am very happy with the results and will be spending a lot less on consultants."
-Ahmed Satil, Bangalore, India
"Great analysis, templates and resourcefulness. I love the quality of the information and it is delivered in an easy-to-understand format."
-Sue Mayne, General Electric (USA)
"The tools and templates are very thorough and just by themselves provided great detail. A wonderful and timeless value."
-John Warner, System Engineer, AOL
"Good research and excellent documentation well worth the money. I've been working in technical research for the past 15 years, and this is the lowest cost and highest value work I have ever seen."
-Roy Lee, UMIST (Australia)