3.3. Where and how do neural networks gather and store information?
(translation by Agata Krawcewicz, [email protected])
Let us take a closer look at the process of learning with a teacher. How does it happen that the network gains and stores knowledge? The key notion here is the set of weights on the inputs of each neuron, described in the previous chapter. Recall that every neuron has many inputs, through which it receives signals from other neurons as well as the input signals supplied to the network as data for its calculations. Associated with these inputs are parameters called weights: every input signal is first multiplied by its weight, and only then summed with the other signals. If we change the values of the weights, the neuron begins to function differently within the network, and as a result the whole network begins to work differently. The whole art of training the network comes down to choosing the weights in such a way that every neuron performs exactly the task the network requires of it.
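The description above can be sketched in a few lines of code. This is a minimal illustration of a single neuron with a step threshold; the function name and the threshold value are my own assumptions, not taken from the book.

```python
def neuron_output(inputs, weights, threshold=0.5):
    # Each input signal is first multiplied by its weight, then summed.
    total = sum(x * w for x, w in zip(inputs, weights))
    # Fire (1) if the weighted sum exceeds the threshold, else stay silent (0).
    return 1 if total > threshold else 0

# The same input signals produce different answers when the weights change:
print(neuron_output([1.0, 0.5], [0.2, 0.2]))  # weak weights: sum 0.3 -> 0
print(neuron_output([1.0, 0.5], [0.9, 0.9]))  # strong weights: sum 1.35 -> 1
```

As the example shows, the weights alone decide how the neuron reacts to the very same signals, which is why they are the natural place for the network's knowledge to reside.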
There can be thousands of neurons in a network, and each of them can have hundreds of inputs, so it is impossible to set all the necessary weights at once by hand. One can, however, design and realize a learning process that starts the network with a certain random set of weights and gradually improves them. In every step of learning, the weights of one or several neurons change, and the rules governing these changes are formulated so that every neuron can determine, entirely on its own, which of its weights to change, in which direction (increase or decrease), and by how much. Of course, when determining the necessary changes, the neuron can use information coming from the teacher (as long as we use learning with a teacher), but this does not alter the fact that the weight-changing process itself (the only memory trace in the network) runs in every neuron spontaneously and independently, so it can proceed without constant first-hand supervision by the person overseeing it. What is more, the learning of one neuron is independent of how any other neuron learns, so learning can be conducted simultaneously in all neurons of the network (provided, of course, that the network is built as an electronic system rather than as a simulation program). This makes very high learning speeds possible, along with a surprisingly dynamic growth of the network's "qualifications": it literally grows wiser and wiser before our eyes!
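The starting point just described can be sketched as follows. This is an illustrative toy, assuming small random initial weights and a layer of neurons that each hold a private list of weights (all names and ranges here are my assumptions).

```python
import random

random.seed(0)  # fixed seed so the example is reproducible

def make_neuron(n_inputs):
    # Each neuron starts with its own certain random set of weights.
    return [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]

# A layer of several such neurons: every neuron keeps its weights
# privately, so their corrections can in principle run simultaneously
# and independently, as the text describes.
layer = [make_neuron(3) for _ in range(4)]
print(len(layer), "neurons,", len(layer[0]), "weights each")
```

Because no neuron ever needs to inspect another neuron's weights, nothing in this arrangement forces the updates to happen one after another.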
Let me stress this once more, because it is of key importance: the teacher need not go into the details of the learning process; it is enough to give the network an example of the correct solution. The network compares its own solution, obtained for an example taken from the learning set, with the solution recorded in that set as the model (and therefore most probably correct) answer. Learning algorithms are constructed so that knowledge of the error the network makes is sufficient to correct the values of its weights: every neuron separately (guided by the algorithm mentioned above) corrects the weights on all its inputs by itself, as long as it receives the message about what error was committed. It is a very simple and at the same time efficient mechanism, shown symbolically in Fig. 3.3. Its systematic use causes the network to perfect its activity, until at last it can solve all the tasks from the learning set and, by generalizing this knowledge, also the other tasks that will be presented to it at the "examination" stage.
Fig. 3.3. Typical step of neural network learning
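One common way to realize the correction step just described is the delta rule, sketched below for a single linear neuron. The book does not name a specific algorithm, so treat this as one plausible instance: the only information the neuron receives is the error, and each weight corrects itself using that error and its own input signal.

```python
def learning_step(inputs, weights, desired, rate=0.1):
    # The neuron's own answer for this example (a plain weighted sum).
    actual = sum(x * w for x, w in zip(inputs, weights))
    # The only message needed from the teacher: what error was committed.
    error = desired - actual
    # Every weight corrects itself locally, proportionally to the error
    # and to the input signal that arrived on its own line.
    return [w + rate * error * x for x, w in zip(inputs, weights)]

# Systematic use of this step drives the answer toward the model solution:
weights = [0.0, 0.0]
for _ in range(50):
    weights = learning_step([1.0, 0.5], weights, desired=1.0)
```

After the loop, the neuron's output for the training example is very close to the desired value 1.0, even though the teacher never told it which weights to change or by how much.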
The manner of learning described above is used most often, but in some tasks (for example, the recognition of images) one need not give the network the exact value of the desired output signal; for efficient learning it is sufficient to give it only general information about whether its current behavior is correct or not. At times one speaks directly of "reward" and "punishment" signals, on the basis of which all the neurons of the network find and introduce the proper corrections to their activity by themselves. The analogy to the training of animals is not at all accidental!
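A toy sketch of such reward/punishment learning is given below. It is my own illustrative scheme, not an algorithm from the book: the teacher never supplies the desired output value, only a "good" or "bad" signal, yet that alone is enough to steer the weights.

```python
def output(inputs, weights):
    # A threshold neuron: answer 1 or 0 depending on the weighted sum.
    return 1 if sum(x * w for x, w in zip(inputs, weights)) > 0 else 0

def reinforce(inputs, weights, out, rewarded, rate=0.2):
    # On reward, strengthen the tendency that produced this answer;
    # on punishment, weaken it. Only the reward signal is used --
    # the correct output value itself is never revealed.
    direction = 1 if out == 1 else -1
    sign = 1 if rewarded else -1
    return [w + sign * rate * direction * x
            for x, w in zip(inputs, weights)]

weights = [-0.3, 0.2]
pattern = [1.0, 0.0]
for _ in range(10):
    out = output(pattern, weights)
    rewarded = (out == 1)  # the "environment" happens to reward answer 1
    weights = reinforce(pattern, weights, out, rewarded)
```

Starting from weights that give the wrong answer, the punishments push the weights the other way until the rewarded answer appears, after which the rewards reinforce it, much as in the training of animals.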