The fulcrum of the engine is the Layer object. It is composed of N neurons (whose number can be set through the 'rows' attribute).
Imagine a feed-forward neural net composed of three layers, like the following:
To build this net with joone, we must create three Layer objects and two Synapse objects:
SigmoidLayer layer1 = new SigmoidLayer();
SigmoidLayer layer2 = new SigmoidLayer();
SigmoidLayer layer3 = new SigmoidLayer();
FullSynapse synapse1 = new FullSynapse();
FullSynapse synapse2 = new FullSynapse();
Then we complete the net by connecting the three layers with the synapses:
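A sketch of the connections, assuming the addInputSynapse/addOutputSynapse methods of the core engine's Layer class and the variable names from the snippet above:

```java
layer1.addOutputSynapse(synapse1); // synapse1 carries the output of layer1...
layer2.addInputSynapse(synapse1);  // ...into layer2
layer2.addOutputSynapse(synapse2); // synapse2 carries the output of layer2...
layer3.addInputSynapse(synapse2);  // ...into layer3
```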
As you can see, each synapse is the output synapse of one layer and the input synapse of the next layer in the net.
This simple net is ready, but it can't do any useful work, because it lacks the components needed to read and write the data
that the net must process. Look at the next example to learn how to build a real net, which can be trained and used
on a real problem.
Suppose we must build a net and train it on the classical XOR problem.
In this example, the net must learn the following XOR truth table:
The XOR truth table
Input 1 | Input 2 | Output
   0    |    0    |   0
   0    |    1    |   1
   1    |    0    |   1
   1    |    1    |   0
So, we must create a file containing these values:
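For instance, the file's content would be the four rows of the XOR truth table, one training pattern per line, in the semicolon-separated format described below:

```
0;0;0
0;1;1
1;0;1
1;1;0
```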
Each column must be separated by a semicolon; the decimal point is not mandatory if the numbers are integers.
Write this file with a text editor and save it on the file system (for instance c:\joone\xor.txt in a Windows environment).
Now we'll build the neural net that, as the literature says, must have three layers:
- An input layer with 2 neurons, to map the two inputs of the XOR function.
- A hidden layer with 3 neurons, a good value to speed up the net's convergence.
- An output layer with 1 neuron, to represent the XOR function's output.
As shown by the following figure:
First, we create the three layers (two of them use the sigmoid transfer function):
LinearLayer input = new LinearLayer();
SigmoidLayer hidden = new SigmoidLayer();
SigmoidLayer output = new SigmoidLayer();
Set their dimensions:
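A sketch of the dimensioning, using the Layer.setRows method (the 'rows' attribute mentioned earlier):

```java
input.setRows(2);  // two neurons, one per XOR input
hidden.setRows(3); // three hidden neurons
output.setRows(1); // one neuron for the XOR output
```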
Now we build the neural net by connecting the layers, so we create the two synapses; we use the FullSynapse,
which connects all the neurons on its input side with all the neurons on its output side (see the above figure):
FullSynapse synapse_IH = new FullSynapse(); /* Input -> Hidden conn. */
FullSynapse synapse_HO = new FullSynapse(); /* Hidden -> Output conn. */
First, we connect the input layer with the hidden layer:
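A sketch using the core engine's connection methods: the synapse is attached as the output of the input layer and as the input of the hidden layer.

```java
input.addOutputSynapse(synapse_IH);
hidden.addInputSynapse(synapse_IH);
```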
And then, the hidden layer with the output layer:
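Again as a sketch, with the same add*Synapse calls:

```java
hidden.addOutputSynapse(synapse_HO);
output.addInputSynapse(synapse_HO);
```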
Create the NeuralNet object and add the layers:
NeuralNet nnet = new NeuralNet();
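The layers are then added with NeuralNet.addLayer; a sketch:

```java
nnet.addLayer(input);
nnet.addLayer(hidden);
nnet.addLayer(output);
```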
Now we get a reference to the Monitor object, in order to provide the network with all the parameters needed for its work:
Monitor monitor = nnet.getMonitor();
The application registers itself as a listener of the monitor, so it can receive the notifications of termination from the net.
To do this, the application must implement the org.joone.engine.NeuralNetListener interface.
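The registration itself is a single call on the monitor; a sketch, assuming the surrounding class implements the listener interface:

```java
/* 'this' must implement org.joone.engine.NeuralNetListener */
monitor.addNeuralNetListener(this);
```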
Now we must define an input for the net, so we create an org.joone.io.FileInputSynapse and give it all the needed parameters:
FileInputSynapse inputStream = new FileInputSynapse();
/* The first two columns contain the input values */
inputStream.setAdvancedColumnSelector("1,2");
/* This is the file that contains the input data */
inputStream.setInputFile(new File("c:\\joone\\xor.txt"));
We add the input synapse to the first layer. The input synapse extends the Synapse object, so it can be attached
to a layer like any other synapse; this way, the layer doesn't need to deal with the kind of its input objects.
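Attaching it is one call on the input layer; a sketch:

```java
input.addInputSynapse(inputStream);
```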
A neural net learns from examples, so we must provide it with the right responses.
For each input pattern, in fact, the net must be given the difference between the desired response and the effective
response given by the net; the org.joone.engine.learning.TeachingSynapse is the object that performs this task:
TeachingSynapse trainer = new TeachingSynapse();
/* The file containing the desired responses is provided by another FileInputSynapse */
FileInputSynapse samples = new FileInputSynapse();
samples.setInputFile(new File("c:\\joone\\xor.txt"));
/* The output values are on the third column of the file */
samples.setAdvancedColumnSelector("3");
trainer.setDesired(samples);
/* We add it to the neural network */
nnet.setTeacher(trainer);
The TeachingSynapse object extends the Synapse object, so we can add it as the output of the last layer of the net:
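A sketch of that attachment, again a single call:

```java
output.addOutputSynapse(trainer);
```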
We set all the training parameters of the net:
monitor.setTrainingPatterns(4); /* # of rows contained in the input file */
monitor.setTotCicles(2000); /* How many times the net must be trained on the input patterns */
monitor.setLearning(true); /* The net must be trained */
nnet.go(); /* The network starts the training job */
(You can find the source code in the org.joone.example package of the CVS repository.)
If this example seems too complex (it's only a small net with just 5 neurons!), remember that this is a low-level approach:
here we have used only the core engine, coding directly in Java.
You can build neural nets with joone in the following three ways (ordered by decreasing difficulty):
- Like the above example, by writing java code that uses the Core Engine
- By using the JooneTools helper class in order to simplify the use of a neural network, by hiding the complexity of the core engine
- By using the GUI editor provided with joone (Click here to see the XOR problem built with the editor)