Encog Java Core. Encog is an advanced machine learning framework that supports a variety of advanced algorithms, as well as support classes to normalize and process data. Machine learning algorithms such as Support Vector Machines, Artificial Neural Networks, Genetic Programming, Bayesian Networks, Hidden Markov Models, and Genetic Algorithms are supported. License: Apache 2.

Overview

Encog Machine Learning Framework

Encog is a pure-Java/C# machine learning framework that I created back in 2008 to support genetic programming, NEAT/HyperNEAT, and other neural network technologies. Originally, Encog was created to support research for my master's degree and early books. The neural network aspects of Encog proved popular, and Encog has been used by a number of people and is cited by 952 academic papers in Google Scholar. I created Encog at a time when well-developed frameworks such as TensorFlow, Keras, DeepLearning4J, and many others did not yet exist (these are the frameworks I work with the most these days for neural networks).

Encog continues to be developed (and bugs fixed) for the types of models not covered by the large frameworks, and to provide a pure non-GPU Java/C# implementation of several classic neural networks. Because it is pure Java, Encog's source code can be much simpler to adapt for cases where you want to implement the neural network yourself from scratch. Some of the less mainstream technologies supported by Encog include NEAT, HyperNEAT, and Genetic Programming. Encog has minimal support for computer vision. Computer vision is a fascinating topic, but it has simply never been a research interest of mine.
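
To give a sense of how the NEAT support is used, here is a rough sketch in the style of the published Encog NEAT examples. The class and method names (NEATPopulation, NEATUtil.constructNEATTrainer, TrainingSetScore, NEATNetwork) are assumptions about the 3.x API and should be checked against the release you are using.

// Rough sketch only: evolve a NEAT network for XOR.
// Assumes org.encog.neural.neat.* and org.encog.ml.ea.train.EvolutionaryAlgorithm;
// verify the exact package paths and signatures against your Encog release.
MLDataSet trainingSet = new BasicMLDataSet(XOR_INPUT, XOR_IDEAL);

NEATPopulation pop = new NEATPopulation(2, 1, 1000); // 2 inputs, 1 output, population size 1000
pop.reset();

CalculateScore score = new TrainingSetScore(trainingSet);
EvolutionaryAlgorithm train = NEATUtil.constructNEATTrainer(pop, score);

do {
	train.iteration();
} while (train.getError() > 0.01);

// Decode the best genome into a network that can be used like any other Encog method.
NEATNetwork best = (NEATNetwork) train.getCODEC().decode(train.getBestGenome());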

Encog supports a variety of advanced algorithms, as well as support classes to normalize and process data. Machine learning algorithms such as Support Vector Machines, Neural Networks, Bayesian Networks, Hidden Markov Models, Genetic Programming and Genetic Algorithms are supported. Most Encog training algorithms are multi-threaded and scale well to multicore hardware.
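
As a small illustration of the data-handling side, the sketch below rescales a raw array into a fixed range before training. It assumes the NormalizeArray utility (org.encog.util.arrayutil), so treat the exact class and method names as assumptions rather than a verified reference.

// Minimal sketch, assuming Encog's NormalizeArray utility:
// linearly rescale raw values into [-1, 1] before feeding them to a network.
double[] raw = { 3.0, 7.5, 12.0, 4.25 };

NormalizeArray norm = new NormalizeArray();
norm.setNormalizedLow(-1);
norm.setNormalizedHigh(1);
double[] scaled = norm.process(raw); // same length as raw, values mapped into [-1, 1]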

Encog continues to be developed, and is used in my own research, for areas where I need Java and that are not covered by Keras. However, for larger-scale, cutting-edge work, where I do not need to implement the technology from scratch, I use Keras/TensorFlow.

For more information: Encog Website

Simple Java XOR Example in Encog

import org.encog.Encog;
import org.encog.engine.network.activation.ActivationReLU;
import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.ml.data.MLData;
import org.encog.ml.data.MLDataPair;
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

public class XORHelloWorld {

	/**
	 * The input necessary for XOR.
	 */
	public static double XOR_INPUT[][] = { { 0.0, 0.0 }, { 1.0, 0.0 },
			{ 0.0, 1.0 }, { 1.0, 1.0 } };

	/**
	 * The ideal data necessary for XOR.
	 */
	public static double XOR_IDEAL[][] = { { 0.0 }, { 1.0 }, { 1.0 }, { 0.0 } };

	/**
	 * The main method.
	 * @param args No arguments are used.
	 */
	public static void main(final String args[]) {

		// create a neural network, without using a factory
		BasicNetwork network = new BasicNetwork();
		network.addLayer(new BasicLayer(null,true,2));
		network.addLayer(new BasicLayer(new ActivationReLU(),true,5));
		network.addLayer(new BasicLayer(new ActivationSigmoid(),false,1));
		network.getStructure().finalizeStructure();
		network.reset();

		// create training data
		MLDataSet trainingSet = new BasicMLDataSet(XOR_INPUT, XOR_IDEAL);

		// train the neural network
		final ResilientPropagation train = new ResilientPropagation(network, trainingSet);

		int epoch = 1;

		do {
			train.iteration();
			System.out.println("Epoch #" + epoch + " Error:" + train.getError());
			epoch++;
		} while(train.getError() > 0.01);
		train.finishTraining();

		// test the neural network
		System.out.println("Neural Network Results:");
		for(MLDataPair pair: trainingSet ) {
			final MLData output = network.compute(pair.getInput());
			System.out.println(pair.getInput().getData(0) + "," + pair.getInput().getData(1)
					+ ", actual=" + output.getData(0) + ",ideal=" + pair.getIdeal().getData(0));
		}

		Encog.getInstance().shutdown();
	}
}
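
A trained network such as the one above can be written to an .eg file and reloaded later with EncogDirectoryPersistence (the same calls appear in the issue reports further down); a short sketch:

// Sketch: persist the trained XOR network and reload it for later evaluation.
// Assumes org.encog.persist.EncogDirectoryPersistence and java.io.File.
EncogDirectoryPersistence.saveObject(new File("xor.eg"), network);

// ...later, load the saved network back and use it like the original.
BasicNetwork loaded = (BasicNetwork) EncogDirectoryPersistence.loadObject(new File("xor.eg"));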
Comments
  • Add dropout for neural networks

    Add dropout for neural networks

    Dropout is the concept of training a neural network whose hidden layer neurons have only a .5 probability of being present in any training run. The end result is that you are effectively training an ensemble of neural networks that have massive weight sharing. This speeds training greatly (relative to training the entire ensemble separately), and eliminates some of the overfitting often associated with neural nets. For instance, neural nets with dropout never need to have their training stopped early to avoid overfitting.

    Discussed in this paper: http://arxiv.org/abs/1207.0580 And in this Google tech talk: http://www.youtube.com/watch?v=DleXA5ADG78

    I'd be willing to contribute code if this is something you'd be willing to include. Thanks!

    opened by alexrobbins 25
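
    For readers unfamiliar with the technique being requested, the snippet below is a plain-Java illustration of (inverted) dropout. It is not Encog API; the class and parameter names are only for the example.

    import java.util.Random;

    // Illustration only: "inverted" dropout on a hidden-layer activation vector.
    // During training each unit is kept with probability keepProb and scaled by
    // 1/keepProb; at inference the activations are passed through unchanged.
    public final class DropoutSketch {
        private static final Random RNG = new Random(42);

        static double[] applyDropout(double[] hidden, double keepProb, boolean training) {
            double[] out = new double[hidden.length];
            for (int i = 0; i < hidden.length; i++) {
                if (training && RNG.nextDouble() >= keepProb) {
                    out[i] = 0.0;                  // unit dropped for this pass
                } else if (training) {
                    out[i] = hidden[i] / keepProb; // rescale surviving units
                } else {
                    out[i] = hidden[i];            // inference: no masking needed
                }
            }
            return out;
        }
    }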
  • Added incremental building of ensembles and fixed adaboost

    Added incremental building of ensembles and fixed adaboost

    Adaboost implementation had a bug which resulted in the D vector of dataset weights being incorrect and in some extreme cases leading to non-termination and infinity errors.

    Ensembles can now have "steps" where a member is added on to the existing ensemble. This only works for some ensemble techniques that can be expanded this way.

    opened by nitbix 19
  • Encog Core is not saving large format Neural Networks Correctly

    Encog Core is not saving large format Neural Networks Correctly

    EG files store the weight array slightly differently for large-format networks, so that the weights are not on a single enormous line that cannot be read into memory. Training is failing because these networks are not being loaded or saved correctly, and the end result is an array of zeros for most of the weight matrix. Such a neural network is not trainable.

    --- From original report --- It seems that error reporting is broken in the latest Workbench (built from git sources) - at least for RProp and SVMSearch - "Current Error" just hangs after a couple of iterations (and the chart is also frozen, if displayed). Interestingly, it works with QProp, for example. (I have no problems with Workbench 3.0.1, using the same data.)

    opened by PetrToman 18
  • SVM ArrayIndexOutOfBoundsException

    SVM ArrayIndexOutOfBoundsException

    Hello, running the latest Encog Workbench (built from git sources) on the data below, I get the following exception:

    We are very sorry but an unexpected error has occured. Would you consider sending this information to us? No personal information will be transmitted, just what you see below.

    This information is very useful to us to make Encog a better program.

    Encog Version: 3.1.0 Encog Workbench Version: 3.1 Java Version: 1.6.0_30 Java Vendor: Sun Microsystems Inc. OS Name: Windows Vista OS Arch: x86 OS Version: 6.0 Core Count: 2 ISO3 Country: USA Display Country: United States Radix: .

    Grouping: ,

    Exception: java.lang.ArrayIndexOutOfBoundsException: -1
    org.encog.mathutil.libsvm.Solver_NU.select_working_set(svm.java:1069)
    org.encog.mathutil.libsvm.Solver.Solve(svm.java:540)
    org.encog.mathutil.libsvm.Solver_NU.Solve(svm.java:962)
    org.encog.mathutil.libsvm.svm.solve_nu_svc(svm.java:1437)
    org.encog.mathutil.libsvm.svm.svm_train_one(svm.java:1567)
    org.encog.mathutil.libsvm.svm.svm_train(svm.java:2097)
    org.encog.ml.svm.training.SVMTrain.iteration(SVMTrain.java:235)
    org.encog.ml.svm.training.SVMSearchTrain.iteration(SVMSearchTrain.java:271)
    org.encog.app.analyst.commands.CmdTrain.performTraining(CmdTrain.java:219)
    org.encog.app.analyst.commands.CmdTrain.executeCommand(CmdTrain.java:121)
    org.encog.app.analyst.EncogAnalyst.executeTask(EncogAnalyst.java:487)
    org.encog.app.analyst.EncogAnalyst.executeTask(EncogAnalyst.java:514)
    org.encog.workbench.tabs.analyst.AnalystProgressTab.run(AnalystProgressTab.java:335)
    java.lang.Thread.run(Unknown Source)

    This problem was also reported here:

    http://www.heatonresearch.com/node/2368 (partly fixed?) http://www.heatonresearch.com/node/2398 (no replies)


    data.csv:

    i1,i2,i3,i4,i5,i6,i7,i8,i9,y
    0.0003,1.666666667,1,0.000124023,-0.000225,-0.000704,-0.001492,-0.001547,2.07E-05,1
    0.0003,1,1,0.000348262,0.000155,-0.000344,-0.001144,-0.0012415,5.80E-05,1
    0.0006,2.25,10,0.000234719,-0.000195,-0.003012,-0.003864,-0.004737,3.91E-05,0
    0.0004,2,3,5.80E-05,-2.00E-05,-0.002582,-0.003959,-0.0048975,9.66E-06,1
    0.0004,1.5,1,0.00039255,0.00042,-0.002062,-0.003513,-0.004471,6.54E-05,0
    0.0002,1.5,0.5,0.000156911,0.00041,-0.0018,-0.003533,-0.0045565,2.62E-05,0
    0.0003,1.666666667,1,6.20E-05,-0.000135,-0.000922,-0.0034,-0.0045225,1.03E-05,0
    0.0003,1,1,0.000449639,0.00033,-0.000272,-0.002828,-0.00396,7.49E-05,0
    0.0006,1.2,2,0.000376355,-0.00017,-0.001406,-0.004004,-0.005835,6.27E-05,0
    -1.00E-04,4,0.166666667,0.000236876,-0.00013,-0.001478,-0.004007,-0.005891,3.95E-05,0
    0.0008,1.714285714,1.5,-0.000290168,-0.00089,-0.003022,-0.005279,-0.0079045,-4.84E-05,1
    0.0005,1.2,0.5,0.000288856,-0.000135,-0.002226,-0.004466,-0.0071965,4.81E-05,0
    0.0002,2.666666667,0.666666667,0.000246055,0.000455,0.00085,-0.00101,-0.0044385,4.10E-05,0
    0.0012,1.153846154,0.25,0.000585271,0.00062,0.00124,-0.000548,-0.004005,9.75E-05,0
    -1.00E-04,2.5,6,-0.000184848,-0.000125,0.00069,-0.000868,-0.0042985,-3.08E-05,1
    0.0011,1,1,0.000784416,0.0009,0.001764,0.000259,-0.0031595,0.000130736,1
    0.0012,1.25,0.25,0.000791247,0.00112,0.002248,0.00105,-0.0022235,0.000131875,1
    0.0008,1,1,-2.17E-05,-0.000665,-0.00093,0.00021,-0.002472,-3.61E-06,1
    0.0007,1,1,0.000657545,0.000395,-0.000198,0.001018,-0.001595,0.000109591,1

    opened by PetrToman 18
  • Implement regularization

    Implement regularization

    Hello, please consider implementing regularization, as it is essential to deal with the overfitting problem.

    I recommend watching 12 min. video "Regularization and Bias/Variance" of lesson X. ADVICE FOR APPLYING MACHINE LEARNING at https://class.coursera.org/ml/lecture/preview (Stanford ML course).

    It would also be useful to enhance Encog Analyst - it could split data into 3 sets (training, cross validation, testing) and try to find the optimal regularization parameter automatically.

    opened by PetrToman 15
  • Evaluate adding LSTM network to Encog

    Evaluate adding LSTM network to Encog

    There have been several requests to add a LSTM network to Encog. Some discussion of it here. http://www.heatonresearch.com/comment/1231#comment-1231

    Wikipedia Entry: http://en.wikipedia.org/wiki/Long_short_term_memory More formal description: ftp://ftp.idsia.ch/pub/juergen/lstm.pdf

    At this point I am unfamiliar with this architecture, so I am adding this issue to track it. Any suggestions/comments are welcome.

    Initial thoughts... could this be implemented with the freeform networks, or would it be better to create a new MLMethod?

    Enhancement 
    opened by jeffheaton 12
  • org.encog.ml.ea.opp.CompoundOperator not Serializable

    org.encog.ml.ea.opp.CompoundOperator not Serializable

    I'm trying to serialize my BasicEA object which implements Serializable, but I get this exception: java.io.NotSerializableException: org.encog.ml.ea.opp.CompoundOperator Is it safe to just go into the source and make that class serializable?

    Bug 
    opened by dsmaugy 11
  • Workbench: Cannot rename .egb file

    Workbench: Cannot rename .egb file

    • Once .egb file is open in Workbench, it cannot be renamed via popup menu / "Properties" (nothing happens).
    • This "Properties" option should also be called "Rename" as it only shows a file rename dialog.
    opened by PetrToman 10
  • Elliott Activation Function

    Elliott Activation Function

    Here's the code for the Elliott activation function in case someone is interested. It is not as popular as tanh and sigmoid, but I've seen it used in a few papers.

    The implementation is based on this report:

    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.46.7204&rep=rep1&type=pdf

    Since I discovered that something like 70% of the training time is spent in Math.tanh or Math.exp, I was looking for a cheap alternative. The main advantage of this activation function is that it is very fast to compute. It is bounded between -1 and 1 like tanh, but will reach those values more slowly, so it might be more suitable for classification tasks.

    I've had very mixed results with this implementation so far. Used with Rprop on an XOR problem it seems to perform quite badly in terms of number of iterations, getting stuck in local minima, or not being able to go below high MSE values. This is quite unexpected, so I'm wondering if maybe there's a mistake somewhere with the derivative.

    On the other hand I've also observed excellent results with evolutionary algorithms like GA (and my version of PSO) with often very fast convergence compared to tanh and sigmoid. That's why I put this code here in case it might be useful to someone else.

    /*
     */
    package org.encog.engine.network.activation;
    
    /**
     * Computationally efficient alternative to ActivationTANH.
     * Its output is in the range [-1, 1], and it is derivable.
     * 
     * It will approach the -1 and 1 more slowly than Tanh so it 
     * might be more suitable to classification tasks than predictions tasks.
     * 
     * Elliott, D.L. "A better activation function for artificial neural networks", 1993
     * http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.46.7204&rep=rep1&type=pdf
     */
    public class ActivationElliott implements ActivationFunction {
    
        /**
         * Serial id for this class.
         */
        private static final long serialVersionUID = 1234L;
    
        /**
         * The parameters.
         */
        private final double[] params;
    
        /**
         * Construct a basic HTAN activation function, with a slope of 1.
         */
        public ActivationElliott() {
            this.params = new double[0];
        }
    
        /**
         * {@inheritDoc}
         */
        @Override
        public final void activationFunction(final double[] x, final int start,
                final int size) {
            for (int i = start; i < start + size; i++) {
                x[i] = 1.0 / (1.0 + (Math.abs(x[i])));
            }
        }
    
        /**
         * @return The object cloned;
         */
        @Override
        public final ActivationFunction clone() {
            return new ActivationElliott();
        }
    
        /**
         * {@inheritDoc}
         */
        @Override
        public final double derivativeFunction(final double b, final double a) {
            return (1.0 - a) * (1.0 - a);
        }
    
        /**
         * {@inheritDoc}
         */
        @Override
        public final String[] getParamNames() {
            final String[] result = {};
            return result;
        }
    
        /**
         * {@inheritDoc}
         */
        @Override
        public final double[] getParams() {
            return this.params;
        }
    
        /**
         * @return Return true, Elliott activation has a derivative.
         */
        @Override
        public final boolean hasDerivative() {
            return true;
        }
    
        /**
         * {@inheritDoc}
         */
        @Override
        public final void setParam(final int index, final double value) {
            this.params[index] = value;
        }
    
    }
    
    
    opened by gnitr 9
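
    Editorial note on the snippet above: the posted activationFunction computes 1/(1+|x|) and uses the derivative (1-a)^2, whereas the function in the cited report is x/(1+|x|) with derivative 1/(1+|x|)^2, i.e. (1-|a|)^2 in terms of the output. A small stand-alone sketch of that form follows (a hypothetical correction, not the committed Encog implementation):

    /** Sketch only: the Elliott function as stated in the cited 1993 report. */
    public final class ElliottSketch {

        /** f(x) = x / (1 + |x|), bounded in (-1, 1). */
        static double activate(double x) {
            return x / (1.0 + Math.abs(x));
        }

        /** f'(x) = 1 / (1 + |x|)^2, which equals (1 - |a|)^2 when a = f(x). */
        static double derivative(double x) {
            double d = 1.0 + Math.abs(x);
            return 1.0 / (d * d);
        }
    }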
  • Recurrent freeform networks are broken (NullPointerException)

    Recurrent freeform networks are broken (NullPointerException)

    As I wrote here http://www.heatonresearch.com/comment/6404#comment-6404, recurrent freeform networks are broken.

    There are many ways to reproduce the problem, here I took the "ElmanXOR" example and I tried to convert the Elman network to a freeform Elman network (applying minimal changes).

    public class ElmanXOR {
    
        // *** USE THE FreeformNetwork.createElman() METHOD ***
        /*static BasicNetwork createElmanNetwork() {
            // construct an Elman type network
            ElmanPattern pattern = new ElmanPattern();
            pattern.setActivationFunction(new ActivationSigmoid());
            pattern.setInputNeurons(1);
            pattern.addHiddenLayer(6);
            pattern.setOutputNeurons(1);
            return (BasicNetwork)pattern.generate();
        }*/
    
        static BasicNetwork createFeedforwardNetwork() {
            // construct a feedforward type network
            FeedForwardPattern pattern = new FeedForwardPattern();
            pattern.setActivationFunction(new ActivationSigmoid());
            pattern.setInputNeurons(1);
            pattern.addHiddenLayer(6);
            pattern.setOutputNeurons(1);
            return (BasicNetwork)pattern.generate();
        }
    
        public static void main(final String args[]) {
    
            final TemporalXOR temp = new TemporalXOR();
            final MLDataSet trainingSet = temp.generate(120);
    
            //final BasicNetwork elmanNetwork = ElmanXOR.createElmanNetwork();
            // *** USE THE FreeformNetwork.createElman() METHOD ***
            final FreeformNetwork elmanNetwork = FreeformNetwork.createElman(1, 6, 1, new ActivationSigmoid());
            final BasicNetwork feedforwardNetwork = ElmanXOR
                    .createFeedforwardNetwork();
    
            //final double elmanError = ElmanXOR.trainNetwork("Elman", elmanNetwork,
            //      trainingSet);
            // *** USE THE EncogUtility.trainToError() METHOD ***
            EncogUtility.trainToError(elmanNetwork, trainingSet, 0.000001);
            final double feedforwardError = ElmanXOR.trainNetwork("Feedforward",
                    feedforwardNetwork, trainingSet);       
    
            //System.out.println("Best error rate with Elman Network: " + elmanError);
            System.out.println("Best error rate with Feedforward Network: "
                    + feedforwardError);
            System.out
                    .println("Elman should be able to get into the 10% range,\nfeedforward should not go below 25%.\nThe recurrent Elment net can learn better in this case.");
            System.out
                    .println("If your results are not as good, try rerunning, or perhaps training longer.");
    
            Encog.getInstance().shutdown();
        }
    
        public static double trainNetwork(final String what,
                final BasicNetwork network, final MLDataSet trainingSet) {
            // train the neural network
            CalculateScore score = new TrainingSetScore(trainingSet);
            final MLTrain trainAlt = new NeuralSimulatedAnnealing(
                    network, score, 10, 2, 100);
    
            final MLTrain trainMain = new Backpropagation(network, trainingSet,0.000001, 0.0);
    
            final StopTrainingStrategy stop = new StopTrainingStrategy();
            trainMain.addStrategy(new Greedy());
            trainMain.addStrategy(new HybridStrategy(trainAlt));
            trainMain.addStrategy(stop);
    
            int epoch = 0;
            while (!stop.shouldStop()) {
                trainMain.iteration();
                System.out.println("Training " + what + ", Epoch #" + epoch
                        + " Error:" + trainMain.getError());
                epoch++;
            }
            return trainMain.getError();
        }
    }
    

    What I always get is a NullPointerException when I try to train the network.

    Thanks.

    opened by ekerazha 8
  • Workbench: open .ega tab after running Analyst Wizard

    Workbench: open .ega tab after running Analyst Wizard

    After generating an .ega file using the Analyst Wizard in Workbench, the .ega file should be opened in a new tab. It is a logical action (though a double-click does the job), and I think it would be handy (not only) for newcomers.

    opened by PetrToman 8
  • Test Smell: Assertion with the wrong parameter order

    Test Smell: Assertion with the wrong parameter order

    Hi!

    description: Referring to the API document of ''org.junit.Test'' , the correct API of ''AssertEquals'' is ''assertEquals(Object expected, Object actual)''. However, we detect that some assertions in your test code have the wrong parameter orders. For example, the test case named ''testAddressFunctions()'' in ''TestAddress.java'' writes the assertion into ''Assert.assertEquals( address.getOriginal(), a);'', ''Assert.assertEquals( address.getUrl().getHost(), "www.httprecipes.com");'', ''Assert.assertEquals( address2.getOriginal(), a);'', and ''Assert.assertEquals( address3.getOriginal(), a);''.

    Negative: Once the test case fails, the ''assertEquals()'' assertion with the wrong parameter order will give the wrong log information. The log information will say: "expected [false] but found [true]", where it should have said "expected [true] but found [false]". This is confusing, to say the least, and you shouldn't have to deal with a possible misdirection of that message.

    Solution: Generally, the expected value should be a known value, such as a real number, a string, etc. The actual value should be the result of the method under test. Therefore, ''Assert.assertEquals( address.getOriginal(), a);'' should be changed into ''Assert.assertEquals(a, address.getOriginal());''; ''Assert.assertEquals( address.getUrl().getHost(), "www.httprecipes.com");'' should be changed into ''Assert.assertEquals("www.httprecipes.com", address.getUrl().getHost());''; ....

    We list the test cases with the same problem as follows: ''testAddressFunctions()'' in TestAddress.java ''testClone()'' in TestBiPolarNeuralData.java ''testAStar()'' in TestSearch.java ''testBredthFirstSearch()'' in TestSearch.java ''testDepthFirstSearch()'' in TestSearch.java ''check2D()'' in TestNormArray.java ''check1D()'' in TestNormArray.java ''check()'' in TestMapped.java ...

    opened by TestSmell 0
  • Gradient is zero

    Gradient is zero

    The function Train.getLastGradient() will return an array containing only zeros if it is called after Train.iteration(). This is still true even if Train.calculateGradients() is called right before. Also, inside the function Train.updateWeight(double[] gradients, double[] lastGradients, int index), the lastGradients appear to be only zeros as well. I did a quick check and found that the learn() function seems to always reset the gradient values to 0, so this might be the source of the issue. If someone could confirm that this is a bug, I would be happy to attempt to fix it.

    opened by PLEXATIC 0
  • Freeform Propagation Training - layerDelta Add not Set?

    Freeform Propagation Training - layerDelta Add not Set?

    https://github.com/jeffheaton/encog-java-core/blob/06bed745403a1a670675b606b6ae483fbf7a6b97/src/main/java/org/encog/neural/freeform/training/FreeformPropagationTraining.java#L160

    Should this be: fromNeuron.addTempTraining(0, layerDelta);

    since the existing stored value is not always zero (from previous iterations of the same training)?

    Unit tests still pass after this change.

    opened by automenta 0
  • EncogUtility.convertCSV2Binary bug

    EncogUtility.convertCSV2Binary bug

    EncogUtility.convertCSV2Binary does not use the header parameter; false is hard-coded in the logic.

    	public static void convertCSV2Binary(final File csvFile,
    			final File binFile, final int inputCount, final int outputCount,
    			final boolean headers) {
    		binFile.delete();
    		final CSVNeuralDataSet csv = new CSVNeuralDataSet(csvFile.toString(),
    				inputCount, outputCount, false); // BUG HERE
    		final BufferedMLDataSet buffer = new BufferedMLDataSet(binFile);
    		buffer.beginLoad(inputCount, outputCount);
    		for (final MLDataPair pair : csv) {
    			buffer.add(pair);
    		}
    		buffer.endLoad();
    	}
    
    
    opened by shikhirsingh 0
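
    The fix implied by the report is to forward the headers argument instead of the hard-coded false; a minimal sketch of the corrected call:

    // Sketch of the corrected constructor call inside convertCSV2Binary:
    // pass the caller-supplied 'headers' flag through rather than hard-coding false.
    final CSVNeuralDataSet csv = new CSVNeuralDataSet(csvFile.toString(),
            inputCount, outputCount, headers);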
  • Encog's Sigmoid returns unexplainable values

    Encog's Sigmoid returns unexplainable values

    Hey,

    I have created a Network like this:

    nn = new BasicNetwork();
    nn.addLayer(new BasicLayer(null, true, 21));
    nn.addLayer(new BasicLayer(new ActivationSigmoid(), true, 200));
    nn.addLayer(new BasicLayer(new ActivationSigmoid(), true, 200));
    nn.addLayer(new BasicLayer(new ActivationSigmoid(), true, 200));
    nn.addLayer(new BasicLayer(new ActivationSigmoid(), true, 100));
    nn.addLayer(new BasicLayer(new ActivationSigmoid(), true, 50));
    nn.addLayer(new BasicLayer(new ActivationSigmoid(), false, 4));

    nn.getStructure().finalizeStructure();
    nn.reset();
    

    After this I created an Output method:

    public double[] getOutput(MLData input) {
        double[] output = nn.compute(input).getData();
        for (double w : output) {
            if (w > 1 || w < 0.0) System.out.println(w);
        }
        return output;
    }

    This NN is able to return values smaller than zero and bigger than one. How on earth is this possible? I checked your sigmoids; they work well. Are there weights after the last layer? How can I remove them?

    Support 
    opened by NiclasSchwalbe 0
  • Trained neural network outputs the same results for all evaluation rows

    Trained neural network outputs the same results for all evaluation rows

    There seems to be no problem when training my network, because it converges and falls below 0.01 error. However, when I load my trained network and introduce the evaluation set, it outputs the same results for all the evaluation set rows (the actual prediction, not the training phase). I trained my network with resilient propagation with 9 inputs, 1 hidden layer with 7 hidden neurons, and 1 output neuron. UPDATE: My data is normalized using min-max. I am trying to predict electric load data.

    Here is the sample data; the first 9 columns are the inputs while the 10th is the ideal value:

    0.5386671932975533, 1100000.0, 0.0, 1.0, 40.0, 1.0, 30.0, 9.0, 2014.0 , 0.5260616667545941
    0.5260616667545941, 1100000.0, 0.0, 1.0, 40.0, 2.0, 30.0, 9.0, 2014.0, 0.5196499668339777
    0.5196499668339777, 1100000.0, 0.0, 1.0, 40.0, 3.0, 30.0, 9.0, 2014.0, 0.5083828048375548
    0.5083828048375548, 1100000.0, 0.0, 1.0, 40.0, 4.0, 30.0, 9.0, 2014.0, 0.49985462144799725
    0.49985462144799725, 1100000.0, 0.0, 1.0, 40.0, 5.0, 30.0, 9.0, 2014.0, 0.49085956670499675
    0.49085956670499675, 1100000.0, 0.0, 1.0, 40.0, 6.0, 30.0, 9.0, 2014.0, 0.485008112408512
    

    Here's the full code:

    public class ANN
    {	
    //training
    //public final static String SQL = "SELECT load_input, day_of_week, weekend_day, type_of_day, week_num, time, day_date, month, year, ideal_value FROM sample WHERE (year,month,day_date,time) between (2012,4,1,1) and (2014,9,29, 96) ORDER BY ID";
    //testing
    public final static String SQL = "SELECT load_input, day_of_week, weekend_day, type_of_day, week_num, time, day_date, month, year, ideal_value FROM sample WHERE (year,month,day_date,time) between (2014,9,30,1) and (2014,9,30, 92) ORDER BY ID";
    //validation
    //public final static String SQL = "SELECT load_input, day_of_week, weekend_day, type_of_day, week_num, time, day_date, month, year, ideal_value FROM sample WHERE (year,month,day_date,time) between (2014,9,30,93) and (2014,9,30, 96) ORDER BY ID";
    public final static int INPUT_SIZE = 9;
    public final static int IDEAL_SIZE = 1;
    public final static String SQL_DRIVER = "org.postgresql.Driver";
    public final static String SQL_URL = "jdbc:postgresql://localhost/ANN";
    public final static String SQL_UID = "postgres";
    public final static String SQL_PWD = "";
    
    public static void main(String args[])
    {	
    	Mynetwork();
    	//train network. will add customizable params later.
    	//train(trainingData());
    	//evaluate network
    	evaluate(trainingData());
    	Encog.getInstance().shutdown();
    }
    public static void evaluate(MLDataSet testSet)
    {
    	BasicNetwork network = (BasicNetwork)EncogDirectoryPersistence.loadObject(new File("directory"));
    	
    	// test the neural network
    	System.out.println("Neural Network Results:");
    	for(MLDataPair pair: testSet ) {
    		final MLData output = network.compute(pair.getInput());
    		System.out.println(pair.getInput().getData(0) + "," + pair.getInput().getData(1) + "," + pair.getInput().getData(2) + "," + pair.getInput().getData(3) + "," + pair.getInput().getData(4) + "," + pair.getInput().getData(5) + "," + pair.getInput().getData(6) + "," + pair.getInput().getData(7) + "," + pair.getInput().getData(8) + "," + "Predicted=" + output.getData(0) + ", Actual=" + pair.getIdeal().getData(0));
    	}
    }
    public static BasicNetwork Mynetwork()
    {
    	//basic neural network template. Inputs should'nt have activation functions
    	//because it affects data coming from the previous layer and there is no previous layer before the input.
    	BasicNetwork network = new BasicNetwork();
    	//input layer with 2 neurons.
    	//The 'true' parameter means that it should have a bias neuron. Bias neuron affects the next layer.
    	network.addLayer(new BasicLayer(null , true, 9));
    	//hidden layer with 3 neurons
    	network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 5));
    	//output layer with 1 neuron
    	network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
    	network.getStructure().finalizeStructure() ;
    	network.reset();
    	
    	return network;
    }
    public static void train(MLDataSet trainingSet)
    {
    	//Backpropagation(network, dataset, learning rate, momentum)
    	//final Backpropagation train = new Backpropagation(Mynetwork(), trainingSet, 0.1, 0.9);
    	final ResilientPropagation train = new ResilientPropagation(Mynetwork(), trainingSet);
    	//final QuickPropagation train = new QuickPropagation(Mynetwork(), trainingSet, 0.9);
    	
    	int epoch = 1;
    	 
    	do {
    		train.iteration();
    		System.out.println("Epoch #" + epoch + " Error:" + train.getError());
    		epoch++;
    	} while((train.getError() > 0.01)); 
    	System.out.println("Saving network");
    	System.out.println("Saving Done");
    	EncogDirectoryPersistence.saveObject(new File("directory"), Mynetwork());
    }
    public static MLDataSet trainingData()
    {
    	MLDataSet trainingSet = new SQLNeuralDataSet(
    			ANN.SQL,
    			ANN.INPUT_SIZE,
    			ANN.IDEAL_SIZE,
    			ANN.SQL_DRIVER,
    			ANN.SQL_URL,
    			ANN.SQL_UID,
    			ANN.SQL_PWD);
    	
    	return trainingSet;
    }
    

    }

    Here is my result:

    Predicted=0.4451817588640455, Actual=0.5260616667545941
    Predicted=0.4451817588640455, Actual=0.5196499668339777
    Predicted=0.4451817588640455, Actual=0.5083828048375548
    Predicted=0.4451817588640455, Actual=0.49985462144799725
    Predicted=0.4451817588640455, Actual=0.49085956670499675
    Predicted=0.4451817588640455, Actual=0.485008112408512
    Predicted=0.4451817588640455, Actual=0.47800504210686795
    Predicted=0.4451817588640455, Actual=0.4693212349328293
    (...and so on with the same "predicted")
    

    Results I'm expecting (I changed the "predicted" values to something random for demonstration purposes, indicating that the network is actually predicting):

    Predicted=0.4451817588640455, Actual=0.5260616667545941
    Predicted=0.5123312331212122, Actual=0.5196499668339777
    Predicted=0.435234234234254365, Actual=0.5083828048375548
    Predicted=0.673424556563455, Actual=0.49985462144799725
    Predicted=0.2344673345345544235, Actual=0.49085956670499675
    Predicted=0.123346457544324, Actual=0.485008112408512
    Predicted=0.5673452342342342, Actual=0.47800504210686795
    Predicted=0.678435234423423423, Actual=0.4693212349328293
    
    Support 
    opened by karlarnejo 1
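
    One observation on the posted code (a guess at the cause, not an official fix): each call to Mynetwork() builds a brand-new network, so train() trains one instance and then saves a different, untrained instance, which would explain the constant predictions. A sketch of training and saving the same instance, assuming the rest of the posted class is unchanged:

    public static void train(MLDataSet trainingSet)
    {
        // build the network once and keep a reference to it
        BasicNetwork network = Mynetwork();
        final ResilientPropagation train = new ResilientPropagation(network, trainingSet);

        int epoch = 1;
        do {
            train.iteration();
            System.out.println("Epoch #" + epoch + " Error:" + train.getError());
            epoch++;
        } while (train.getError() > 0.01);
        train.finishTraining();

        System.out.println("Saving network");
        // save the instance that was actually trained
        EncogDirectoryPersistence.saveObject(new File("directory"), network);
        System.out.println("Saving Done");
    }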
Releases(v3.4)
  • v3.4(Aug 30, 2017)

  • v3.3(Oct 12, 2014)

  • v3.2(Jan 12, 2014)

    Encog 3.2 added genetic programming. This prompted a rewrite of the genetic machine learning algorithms and NEAT, and the addition of HyperNEAT. Additionally, code generation was added for Java, NinjaTrader, and MT4. Specific issues included in this release are listed below.

    #163: testTemporal fails (and so does mvn package)
    #162: Train Type for RBF "svd" should be "rbf-svd"
    #160: Recurrent freeform networks are broken (NullPointerException)
    #159: Predicting values with TemporalMLDataSet?
    #158: PNN/GRNN regression in Encog Analyst does not finish training
    #157: Please have a look, if this is a bug in PersistBasicPNN.java
    #156: Question for AnalystWizard.java (Bug?)
    #155: Bug in EngineConcurrency.java
    #154: Issue with Image downsampling
    #149: Workbench: Tab remains "dirty" after training
    #148: MutatePerturb.performOperation calculation is not correct
    #146: Workbench: NEAT training panel does not repaint (exception)
    #145: Workbench: Analyst generates C# code that doesn't compile
    #143: Workbench: Missing "task-balance" option in combo
    #142: Workbench: Empty countPer in .ega
    #141: Add online training to Encog
    #136: Copy-paste error in ActivationGaussian
    #125: Encog Analyst Query window with limited entries
    #124: Performance improvements
    #123: FoldedDataSet sizes
    #121: Persist fix - fixes activation function and limited network persistence
    #114: Realtime Analyist Wizard Source Fields
    #111: RandomTrainingFactory.generate() produces values out of given range
    #106: Problem in the Training Algorithm ScaledConjugateGradient
    #101: Workbench: missing combo in "Evaluate method" dialog
    #98: Workbench: forgotten old dependency in pom.xml
    #84: Possible parsing problem
    #67: Allow trainers to be serializable
    #55: Workbench: Wrong training set preselected
    #43: Workbench: add "Select all" button to Scatter Plot dialog
    #40: Workbench: save log level

    Source code(tar.gz)
    Source code(zip)
    encog-core-3.2.0-release.zip(10.62 MB)
    encog-examples-3.2.0-release.zip(3.88 MB)
    encog-workbench-3.2.0-release.zip(26.20 MB)
Owner
Jeff Heaton
Computer scientist who specializes in data science and artificial intelligence. Adjunct faculty at WUSTL.