While other techniques we discussed were inspired by biological systems, none goes as far as Artificial Neural Networks (ANNs). They are directly inspired by how nervous systems, and specifically the brain, work. An ANN works by finding patterns between various inputs and outputs and adjusting itself to model them better.
ANNs are able to learn from experience and use that knowledge to produce better solutions. They can adapt to a changing problem by altering the connections between the artificial neurons of which they are composed. Although they have existed since the 1960s, they remained in a relative lull until the 1980s, when the back-propagation algorithm was popularised. The algorithm calculates a gradient that is used to adjust the weights of the connections between neurons, which greatly improved the learning process. Since then ANNs have spread dramatically and can now be found in image processing, speech recognition and many other applications.
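To make the idea concrete, the following is a minimal sketch of gradient-based learning for a single artificial neuron, trained here on the logical OR function. All names, the learning rate and the epoch count are illustrative choices, not part of any particular library or the original text; a real network would stack many such neurons and propagate the gradient back through every layer.

```python
import math
import random

def sigmoid(x):
    """Squash a pre-activation value into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative setup: random initial weights, a fixed learning rate,
# and the four input/output pairs of the OR function.
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0
lr = 0.5
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

for epoch in range(5000):
    for x, target in data:
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        # Gradient of the squared error with respect to the neuron's
        # pre-activation -- the core back-propagation step.
        grad = (out - target) * out * (1 - out)
        # Nudge each connection weight against the gradient.
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad

predictions = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
print(predictions)
```

After training, the rounded outputs match the OR truth table, showing how repeatedly adjusting connection weights along the gradient lets the neuron fit the input/output pattern.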
ANNs are relatively easy to use, as the implementation does not require a complicated codebase, especially since many of the common elements of the algorithm are available in third-party libraries. An ANN can approximate almost any function, so it is flexible in the kinds of problems it can tackle and is especially suitable for complex problems such as image recognition, where a traditional programming approach would be exorbitantly costly in development time.
However, to produce accurate results ANNs require a lot of training. This means that a developer must spend considerable time obtaining sanitised data, that is, data in a form the algorithm can learn from. Additionally, as most of the solution is formed within the algorithm itself, an ANN can be something of an opaque box: the developer cannot follow how a solution was reached, only observe the input and the output.