Java deep learning algorithms and deep neural networks with GPU acceleration

Overview

Deep Neural Networks with GPU support

Update: This is a newer version of the framework that I developed while working at ExB Research. Currently you can build the project, but some of the tests are not working. If you want to access the previous version, it is available in the old branch.

This is a Java implementation of some of the algorithms for training deep neural networks. GPU support is provided via OpenCL and Aparapi. The architecture is designed with modularity, extensibility and pluggability in mind.

Git structure

I'm using the git-flow model. The most stable (but older) sources are available in the master branch, while the latest ones are in the develop branch.

If you want to use the previous Java 7 compatible version you can check out this release.

Neural network types

  • Multilayer perceptron
  • Convolutional networks with max pooling, average pooling and stochastic pooling.
  • Restricted Boltzmann Machine
  • Autoencoder
  • Deep belief network
  • Stacked autoencoder

Training algorithms

  • Backpropagation - supports multilayer perceptrons, convolutional networks and dropout.
  • Contrastive divergence and persistent contrastive divergence, implemented following these and these guidelines (the standard update rule they approximate is sketched after this list).
  • Greedy layer-wise training for deep networks - works for stacked autoencoders and DBNs, but supports any kind of training.
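
Both CD and PCD approximate the gradient of the RBM log-likelihood. For reference, the textbook per-weight update they approximate (a standard formulation, not copied from this library's code) is:

    \Delta w_{ij} = \epsilon \left( \langle v_i h_j \rangle_{\mathrm{data}} - \langle v_i h_j \rangle_{\mathrm{model}} \right)

where the model expectation is estimated from k Gibbs sampling steps in CD-k, or from a persistent Markov chain in PCD.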

All the algorithms support GPU execution.

Out of the box supported datasets are MNIST, CIFAR-10/CIFAR-100, IRIS and XOR, but you can easily implement your own.

Experimental support for RGB image preprocessing operations - affine transformations, cropping, and color scaling (see Generaltest.java -> testImageInputProvider).

Activation functions

  • Sigmoid
  • Tanh
  • ReLU
  • LRN
  • Softplus
  • Softmax

All the functions support GPU execution. They can be applied to all types of networks and all training algorithms. You can also implement new activations.
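
The library's activations are implemented as GPU kernels, but the underlying math is straightforward. Below is a minimal plain-Java sketch of sigmoid, ReLU and softmax applied element-wise to a flattened activation array; it is illustrative only and does not use the library's ConnectionCalculator API.

    // Illustrative element-wise activations over a flattened (one-dimensional) activation array.
    // These mirror the math behind the library's kernels but are not its classes.
    public final class ActivationSketch {

        static void sigmoid(float[] a) {
            for (int i = 0; i < a.length; i++) {
                a[i] = 1f / (1f + (float) Math.exp(-a[i]));
            }
        }

        static void relu(float[] a) {
            for (int i = 0; i < a.length; i++) {
                a[i] = Math.max(0f, a[i]);
            }
        }

        static void softmax(float[] a) {
            float max = Float.NEGATIVE_INFINITY;
            for (float v : a) {
                max = Math.max(max, v);
            }
            float sum = 0f;
            for (int i = 0; i < a.length; i++) {
                a[i] = (float) Math.exp(a[i] - max);
                sum += a[i];
            }
            for (int i = 0; i < a.length; i++) {
                a[i] /= sum;
            }
        }
    }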

How to build the library

  • Java 8.
  • To build the project you need Maven.
  • Depending on your environment you might need to download the relevant Aparapi .dll or .so file (located in the root of each archive) from here and add its location to the system PATH variable. [This](https://code.google.com/p/aparapi/wiki/DevelopersGuideLinux) is a guide on how to set up OpenCL in a Linux environment.

How to run the samples

The samples are organized as unit tests. If you want to see examples on various popular datasets you can go to nn-samples/src/test/java/com/github/neuralnetworks/samples/.

Library structure

There are four projects:

  • nn-core - contains the full implementation.
  • nn-samples - contains implementations of popular datasets and sample networks (organized as unit tests).
  • nn-performance - some performance metrics.
  • nn-userinterface - unfinished work on visual network representation.

The software design is tiered, each tier depending on the previous ones.

Network architecture

This is the first "tier". Each network is defined by a list of layers. Each layer has a set of connections that link it to the other layers of the network, making the network a directed acyclic graph. This structure can accommodate simple feedforward nets, but also more complex architectures like the one described in http://www.cs.toronto.edu/~hinton/absps/imagenet.pdf. You can build your own specific network.
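
For example, a small LeNet-style convolutional network can be assembled through NNFactory. The call below is adapted from the testLenetSmall example quoted in the comments section further down; the exact meaning of each layer descriptor and of the boolean flag is taken from that example, so treat it as indicative rather than as documentation.

    // Adapted from the convNN call that appears in the comments section below.
    NeuralNetworkImpl nn = NNFactory.convNN(new int[][] {
            { 28, 28, 1 },    // 28x28 single-channel input image
            { 5, 5, 20, 1 },  // convolutional layer, 20 5x5 filters
            { 2, 2 },         // 2x2 subsampling
            { 5, 5, 50, 1 },  // convolutional layer, 50 5x5 filters
            { 2, 2 },         // 2x2 subsampling
            { 512 },          // fully connected layer
            { 10 }            // output layer
    }, true);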

Data propagation

This tier propagates data through the network, taking advantage of its graph structure. There are two main base components:

  • LayerCalculator - propagates data through the graph. It receives a target layer and input data clamped to a given layer (considered an input layer). It ensures that the data is propagated through the layers in the correct order and that all the connections in the graph are calculated. For example, during the feedforward phase of backpropagation the training data is clamped to the input layer and propagated to the target layer (the output layer of the network). In the backpropagation phase the output error derivative is clamped as "input" to the output layer and the weights are updated using a breadth-first graph traversal starting from the output layer. Essentially, the role of the LayerCalculator is to provide the order in which the network layers are calculated.
  • ConnectionCalculator - base class for all neuron types (sigmoid, rectifiers, convolutional, etc.). After the LayerCalculator determines the order in which the layers are calculated, the ConnectionCalculator computes each layer's list of input connections (a usage sketch follows this list).
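
As a usage sketch, a single forward pass through an already-built network can be driven through the LayerCalculator directly. The snippet below is adapted from a user-supplied propagateForward-style method quoted in the comments section; mlp, activations and input are assumed to be an existing network, its values provider and a Matrix of samples.

    // Adapted from the forward-pass code quoted in the comments section below.
    // mlp, activations and input are assumed to exist already.
    Set<Layer> calculatedLayers = new UniqueList<Layer>();
    calculatedLayers.add(mlp.getInputLayer());          // the input layer is clamped, i.e. treated as already calculated
    activations.addValues(mlp.getInputLayer(), input);  // clamp the input data to the input layer
    mlp.getLayerCalculator().calculate(mlp, mlp.getOutputLayer(), calculatedLayers, activations);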

GPU

Most of the ConnectionCalculator implementations are optimized for GPU execution. There are two implementations - Native OpenCL and Aparapi. Aparapi imposes some important restrictions on the code that can be executed on the GPU. The most significant are:

  • only one-dimensional arrays (and variables) of primitive data types are allowed. It is not possible to use complex objects.
  • only member-methods of the Aparapi Kernel class itself are allowed to be called from the GPU executable code.

Therefore, before each GPU calculation all the data is converted to one-dimensional arrays and primitive-type variables. Because of this, all Aparapi neuron types use either AparapiWeightedSum (for fully connected layers and weighted sum input functions), AparapiSubsampling2D (for subsampling layers) or AparapiConv2D (for convolutional layers). Most of the data is represented as a one-dimensional array by default (for example Matrix).
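
To illustrate the restriction, here is a simplified Aparapi-style kernel computing a weighted sum over a row-major weight matrix flattened to one dimension. It is a stand-in for the idea behind AparapiWeightedSum, not the library's actual class (newer Aparapi releases use the com.aparapi package instead of com.amd.aparapi).

    import com.amd.aparapi.Kernel;

    // Simplified sketch of a weighted-sum kernel over flattened, one-dimensional arrays.
    // It illustrates the Aparapi restrictions; it is not the library's AparapiWeightedSum.
    public class WeightedSumKernel extends Kernel {
        final float[] weights; // row-major [neuronCount * inputCount]
        final float[] input;   // [inputCount]
        final float[] output;  // [neuronCount]
        final int inputCount;

        public WeightedSumKernel(float[] weights, float[] input, int neuronCount) {
            this.weights = weights;
            this.input = input;
            this.inputCount = input.length;
            this.output = new float[neuronCount];
        }

        @Override
        public void run() {
            int neuron = getGlobalId(); // one work item per output neuron
            float sum = 0f;
            for (int i = 0; i < inputCount; i++) {
                sum += weights[neuron * inputCount + i] * input[i];
            }
            output[neuron] = sum;
        }
    }

    // Usage: new WeightedSumKernel(weights, input, neuronCount).execute(neuronCount);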

The native OpenCL implementation does not have these restrictions.

Training

All the trainers use the Trainer base class. They are optimized to run on the GPU, but you can plug in other implementations and new training algorithms. The training procedure has training and testing phases. Each Trainer receives parameters (for example learning rate, momentum, etc.) via Properties (a HashMap). For the properties supported by each trainer please check the TrainerFactory class.
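
The configuration pattern looks roughly like the sketch below. The property key names and the factory method are hypothetical placeholders (the real keys and signatures are defined in TrainerFactory); only the HashMap-style Properties container and the train/test phases are taken from the description above, and nn and the two input providers are assumed to exist already.

    // Sketch only: the property keys and the factory method name are hypothetical placeholders.
    // Check TrainerFactory for the actual keys and signatures.
    Properties props = new Properties();   // the library's HashMap-based parameter container
    props.put("learningRate", 0.01f);      // hypothetical key name
    props.put("momentum", 0.5f);           // hypothetical key name

    Trainer trainer = TrainerFactory.backPropagation(nn, trainingInput, testingInput, props); // hypothetical factory method
    trainer.train();   // training phase
    trainer.test();    // testing phase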

Input data

Input is provided to the neural network by the trainers via the TrainingInputProvider interface. Each TrainingInputProvider provides training samples in the form of TrainingInputData (the default implementation is TrainingInputDataImpl). The input can be modified by a list of modifiers - for example MeanInputFunction (for subtracting the mean value) and ScalingInputFunction (for scaling within a range). Currently MnistInputProvider and IrisInputProvider are implemented.
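
The two bundled modifiers boil down to simple element-wise transforms. Below is a plain-Java sketch of their math and of how they would be chained over a single sample; these are illustrative helpers, not the library's MeanInputFunction/ScalingInputFunction classes.

    // Illustrative math behind the bundled input modifiers (not the library classes).
    public final class InputModifierSketch {

        // MeanInputFunction idea: subtract a precomputed dataset mean from every value.
        static void subtractMean(float[] sample, float mean) {
            for (int i = 0; i < sample.length; i++) {
                sample[i] -= mean;
            }
        }

        // ScalingInputFunction idea: scale values into a target range, e.g. divide MNIST pixels by 255.
        static void scale(float[] sample, float scale) {
            for (int i = 0; i < sample.length; i++) {
                sample[i] /= scale;
            }
        }

        public static void main(String[] args) {
            float[] sample = { 0f, 127f, 255f };
            scale(sample, 255f);        // values now in [0, 1]
            subtractMean(sample, 0.5f); // roughly zero-centered
            System.out.println(java.util.Arrays.toString(sample));
        }
    }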

Author

Ivan Vasilev (ivanvasilev [at] gmail (dot) com)

License

MIT License

Comments
  • My networks have a 53% error on the training set

    When I test my trained networks on the training sets themselves, I end up with 53% error.

    Does that seem somewhat strange? Perhaps the features are just not good features.

    opened by yeison 7
  • failed test, FFNNtest.testParallelNetworks during build

    com.github.neuralnetworks.test.FFNNTest > testParallelNetworks FAILED. When doing a Gradle build, I got the following message (on Ubuntu 12.10, 32-bit):

    java.lang.AssertionError: expected:<1.32> but was:<1.2000000476837158>
    	at org.junit.Assert.fail(Assert.java:88)
    	at org.junit.Assert.failNotEquals(Assert.java:743)
    	at org.junit.Assert.assertEquals(Assert.java:494)
    	at org.junit.Assert.assertEquals(Assert.java:592)
    	at com.github.neuralnetworks.test.FFNNTest.testParallelNetworks(FFNNTest.java:365)

    opened by artemisep 5
  • Failed to load aparapi

    I downloaded the latest code and imported the project as a Maven project into Eclipse. When I ran the JUnit test for AETest on my MacBook Pro (with OS X 10.9.2, i5 and Intel Iris 1024 MB), I got the following error:

    TRAINING testAEBackpropagation...
    Check your environment. Failed to load aparapi native library aparapi_x86_64 or possibly failed to locate opencl native library (opencl.dll/opencl.so). Ensure that both are in your PATH (windows) or in LD_LIBRARY_PATH (linux).
    Apr 29, 2014 9:08:31 PM com.amd.aparapi.KernelRunner warnFallBackAndExecute
    WARNING: Reverting to Java Thread Pool (JTP) for class com.github.neuralnetworks.calculation.neuronfunctions.AparapiSigmoid$AparapiSigmoidFunction: CPU request can't be honored not CPU device
    Apr 29, 2014 9:08:31 PM com.amd.aparapi.KernelRunner warnFallBackAndExecute
    WARNING: Reverting to Java Thread Pool (JTP) for class com.github.neuralnetworks.calculation.neuronfunctions.AparapiSigmoid$AparapiSigmoidFunction: CPU request can't be honored not CPU device
    Apr 29, 2014 9:08:31 PM com.amd.aparapi.KernelRunner warnFallBackAndExecute
    WARNING: Reverting to Java Thread Pool (JTP) for class com.github.neuralnetworks.training.backpropagation.BackPropagationSigmoid$AparapiBackpropSigmoid: CPU request can't be honored not CPU device
    Apr 29, 2014 9:08:31 PM com.amd.aparapi.KernelRunner warnFallBackAndExecute
    WARNING: Reverting to Java Thread Pool (JTP) for class com.github.neuralnetworks.training.backpropagation.BackPropagationSigmoid$AparapiBackpropSigmoid: CPU request can't be honored not CPU device
    Apr 29, 2014 9:08:31 PM com.amd.aparapi.KernelRunner warnFallBackAndExecute
    WARNING: Reverting to Java Thread Pool (JTP) for class com.github.neuralnetworks.training.backpropagation.BackPropagationSigmoid$AparapiBackpropSigmoid: CPU request can't be honored not CPU device
    Apr 29, 2014 9:08:31 PM com.amd.aparapi.KernelRunner warnFallBackAndExecute
    WARNING: Reverting to Java Thread Pool (JTP) for class com.github.neuralnetworks.training.backpropagation.BackPropagationSigmoid$AparapiBackpropSigmoid: CPU request can't be honored not CPU device
    

    I cannot figure out this problem... Has anyone experienced the same problem? I really need some help here. Thanks!

    opened by diPDew 4
  • RBM test error is 100% in MnistTest

    Why do I get a 100% error using the RBM test in MnistTest.java?

    TESTING testRBM... 2.275 s total time 0.2275 s per minibatch of 10 mini batches 10000/10000 samples (1.0, 100.0%) error

    opened by McDucky 3
  • Strange behavior when calculating Layers (probably Aparapi related)

    Hi! First of all, I have no experience with GPU processing and my personal computer doesn't even have a GPU. I'm using your package inside a Cognitive Architecture project, and at some point it involves calculating some inputs in a loop. There I call a self-made method that uses some lines from "propagateForward".

    The method is:

        public void calculate(Matrix input) {
            Set<Layer> calculatedLayers = new UniqueList<Layer>();
            calculatedLayers.add(mlp.getInputLayer());
            activations.addValues(mlp.getInputLayer(), input);
            mlp.getLayerCalculator().calculate(mlp, mlp.getOutputLayer(), calculatedLayers, activations);
        }

    And I call it with "TrainingInputProvider.getNextInput().getInput()" as the parameter.

    The problem is that after the first iteration (which seems to run without any issue) this "calculate" method throws an error:

    Exception in thread "Thread-7" java.lang.UnsatisfiedLinkError: com.amd.aparapi.KernelRunner.runKernelJNI(JLcom/amd/aparapi/Range;ZI)I

    and then the thread is gone.

    I feel that I'm doing something wrong, as I think it should either work through the whole loop or not work at all.

    Can you help me with this ?

    opened by wandgibaut 2
  • Should all tests pass well?

    I ran all tests from within Eclipse, which found all classes with the @Test annotation and ran them. Most of them passed, but two failed:

    1. com.github.neuralnetworks.samples.test.IrisTest.testAE()

       The last assertion failed: java.lang.AssertionError: expected: 0.0 but was: 0.9866667

    2. com.github.neuralnetworks.test.AETest.testAEBackpropagation()

       Also the last assertion failed: java.lang.AssertionError: expected: 0.0 but was: 0.1

    Is this normal? Are these tests expected to run well?

    opened by dims12 2
  • UniqueList purpose?

    Ivan, what is the purpose of the UniqueList class? If you needed a Set that also preserves the order of insertion, you could use LinkedHashSet; otherwise, just use HashSet?

    opened by hrstoyanov 2
  • .gitignore files, aparapi.jar removed and fixed path for MNIST datasets.

    I've made a few changes that enabled me to run the MNIST JUnit tests successfully. I'm not sure if the fixed RESOURCES_PATH is a "best practice" (it is installation specific), but it makes downloading files into a specific directory easy.

    Also, I've removed "aparapi.jar" from the lib dir; it seems to me that it is redundant given the Maven version ("aparapi-2013_01_23.jar").

    Please review.

    opened by vojkog 2
  • nn-core not building because BackpropagationConnectionCalculator is in a file called BackPropagationConnectionCalculator.java (with a capital P instead of a small p)

    Eclipse complains that com.github.neuralnetworks.training.backpropagation.BackpropagationConnectionCalculator should be "in its own file". I suppose the issue might be because I am using

    gradlew eclipse
    

    and then imported nn-core into Eclipse as an Eclipse project instead of using Gradle end-to-end?

    opened by vijay-v 2
  • Update Aparapi package name

    Changed com.amd.aparapi to com.aparapi as per version 1.3.4 of the dependency (downloaded from Maven Central and designated as the current release version on the Aparapi webpage). Corrected some code analysis warnings: malformed JavaDocs, more Java 8 usage, simplification of conditionals, unnecessary boxing, and an inefficient array copy.

    opened by InonS 1
  • neuralnetworks is 2000 times slower using GPU than Theano using CPU

    At first I couldn't get GPU runtimes to be any faster than CPU runtimes with neuralnetworks. Eventually I got the GPU to run faster, but only by making huge networks that would take forever to complete. For example, I modified the testLenetSmall function to use this network:

    NeuralNetworkImpl nn = NNFactory.convNN(new int[][] { { 28, 28, 1 }, { 5, 5, 120, 1 }, { 2, 2 }, { 5, 5, 120, 1 }, { 2, 2 },  { 3, 3, 120, 1 }, { 2, 2 }, {2048}, {2048}, {10} }, true);
    

    Basically I added a third convolutional layer, bumped up the number of filters in all convolutional layers to 120 (from 20 and 50), quadrupled the neurons in the final hidden layer and added another hidden layer with 2048 neurons. The GPU-enabled version runs about 2.4 times faster, but it's still dog slow, taking something like 12-14 seconds per batch (the batch size is 1), so training the entire dataset of 60000 images would take 8.3 to 9.7 days - roughly 10 days per epoch on the GPU. Meanwhile I built a comparable network in Lasagne/Theano and it takes around 420 seconds per epoch on the CPU (in a VM at that), which is about 2000 times faster.

    opened by joelself 1
  • OpenCl problem

    Hi, what does one do with this? I'm trying to run some tests. I started with /neuralnetworks/nn-samples/src/test/java/com/github/neuralnetworks/samples/test/CifarTest.java test1 and this is what I get:

    Caused by: java.lang.IllegalArgumentException: Could not found resource cl/exboclkernels.cl in resource path
    	at com.github.neuralnetworks.calculation.operations.opencl.OCL.CopyLibrary(OCL.java:155)
    	at com.github.neuralnetworks.calculation.operations.opencl.OCL.loadNativeCodeFromJar(OCL.java:107)
    	at com.github.neuralnetworks.calculation.operations.opencl.OCL.loadNativeCodeFromJar(OCL.java:59)
    	at com.github.neuralnetworks.calculation.operations.opencl.OCL.<init>(OCL.java:38)
    	at com.github.neuralnetworks.calculation.operations.opencl.OpenCLCore.<init>(OpenCLCore.java:35)
    	at com.github.neuralnetworks.calculation.operations.opencl.OpenCLCore.<clinit>(OpenCLCore.java:15)
    
    

    I ran the Aparapi examples on my computer and they work, so OpenCL is working. Has anybody run into this?

    With regards, Logi

    opened by logip 0
  • Execution mode GPU failed: OpenCL execution seems to have failed (runKernelJNI returned -51) com.aparapi.internal.exception.AparapiException: OpenCL execution seems to have failed (runKernelJNI returned -51)

    Hi @ivan-vasilev !

    I've been trying to compare your package to my CPU-backend TensorFlow. It seems that my puny GPUs can't handle the MNIST example (I have both an on-board Intel one as well as an AMD Radeon one). Running the MNIST example in your package with a CPU backend works without a problem, but when I require that Aparapi use the GPU backend I get the following warning (fallBackToNextDevice):

    WARNING: Execution mode GPU failed for AparapiBackpropReLU, modes=[AUTO], current = GPU: OpenCL execution seems to have failed (runKernelJNI returned -51)
    com.aparapi.internal.exception.AparapiException: OpenCL execution seems to have failed (runKernelJNI returned -51)
    	at com.aparapi.internal.kernel.KernelRunner.executeOpenCL(KernelRunner.java:1058)
    	at com.aparapi.internal.kernel.KernelRunner.executeInternalInner(KernelRunner.java:1519)
    	at com.aparapi.internal.kernel.KernelRunner.executeInternalOuter(KernelRunner.java:1180)
    	at com.aparapi.internal.kernel.KernelRunner.execute(KernelRunner.java:1170)
    	at com.aparapi.Kernel.execute(Kernel.java:2439)
    	at com.aparapi.Kernel.execute(Kernel.java:2396)
    	at com.aparapi.Kernel.execute(Kernel.java:2371)
    	at com.github.neuralnetworks.util.KernelExecutionStrategy$GPUKernelExecution.execute(KernelExecutionStrategy.java:42)
    	at com.github.neuralnetworks.calculation.neuronfunctions.AparapiFullyConnected.calculate(AparapiFullyConnected.java:151)
    	at com.github.neuralnetworks.training.backpropagation.BackPropagationConnectionCalculatorImpl.calculate(BackPropagationConnectionCalculatorImpl.java:73)
    	at com.github.neuralnetworks.calculation.LayerCalculatorBase.calculate(LayerCalculatorBase.java:44)
    	at com.github.neuralnetworks.training.backpropagation.BackPropagationLayerCalculatorImpl.backpropagate(BackPropagationLayerCalculatorImpl.java:33)
    	at com.github.neuralnetworks.training.backpropagation.BackPropagationTrainer.learnInput(BackPropagationTrainer.java:78)
    	at com.github.neuralnetworks.training.OneStepTrainer.train(OneStepTrainer.java:44)
    	at ml.sharony.ann.tf.examples.sentiment.BenchmarkTFCPU.test(BenchmarkTFCPU.java:134)
    

    What does runKernelJNI returned -51 mean?

    You can find the source code I'm running on my fork of your repo, under the benchmark-tf-cpu branch.

    opened by InonS 0
  • OS Differences

    I coded on my MacBook and the code was working well, but not very fast. So I switched to my Windows desktop PC with a GPU, but the code just wouldn't run. I'm getting

    "Jan 31, 2017 4:32:05 PM com.amd.aparapi.internal.kernel.KernelRunner executeOpenCL WARNUNG: ### CL exec seems to have failed. Trying to revert to Java ###"

    every time I run the code, but minor changes will make the code work again. Code:

        int closest = -1;
        // some loops and raytracing later...
        if (closest > -1) {
            this.image[id] = 23;
        }

    will produce an error, but just this.image[id] = 23; without the conditional statement works great. Please help me, I'm confused!

    Regards Julius

    opened by juliusmh 0
  • How can I train my net with deep learning ?

    I read your amazing explanation at https://www.toptal.com/machine-learning/an-introduction-to-deep-learning-from-perceptrons-to-deep-networks. I need your help. I have an ARFF file, which I can change to any other format like Excel or CSV; it contains binary features (0, 1) and two label classes (0, 1). I want to use deep learning, especially MLP and DBN. Can you tell me in detail how I can do that? Is there a jar file to add to my project, and sample code if available? Thanks

    opened by dinasaif 0
  • Can the examples run without opencl.so ?

    Okay, I have downloaded the examples, but I keep getting the following message in Eclipse:

    Check your environment. Failed to load aparapi native library aparapi_x86_64 or possibly failed to locate opencl native library (opencl.dll/opencl.so). Ensure that both are in your PATH (windows) or in LD_LIBRARY_PATH (linux).

    Now I have set aparapi_x86_64.so in my library path, but I have not downloaded OpenCL. Do I need OpenCL to run the examples? Is it absolutely necessary?

    opened by alexde989 0
  • How to run the project in Eclipse ?

    I have downloaded the project into Eclipse, but how do you run it? Do we have to convert it to a Maven project? Can someone give me the detailed steps?

    opened by alexde989 0
Releases: v0.2.0-alpha