Java version of LIBLINEAR

Overview

This is the Java version of LIBLINEAR.

The project site of the original C++ version is located at http://www.csie.ntu.edu.tw/~cjlin/liblinear/

The upstream changelog can be found at http://www.csie.ntu.edu.tw/~cjlin/liblinear/log

The upstream GitHub project can be found at https://github.com/cjlin1/liblinear

Dependencies

The only requirement is Java 8 or later.

Usage

<dependency>
    <groupId>de.bwaldvogel</groupId>
    <artifactId>liblinear</artifactId>
    <version>2.43</version>
</dependency>
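
For Gradle builds, the same coordinates apply (a hedged equivalent of the Maven snippet above):

    implementation 'de.bwaldvogel:liblinear:2.43'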

Please be aware that the code would be written differently in various places, e.g.

  • Java coding style,
  • fewer static functions and less state,
  • smaller classes and methods,

if it were a pure Java project.

However, I tried to stick as closely as possible to the original C++ source code for the following reasons:

  • Maintainability: Patches for the original C++ version can often be applied easily.

  • Lower probability of translation errors: Sticking to the original source code makes it less likely to introduce new bugs caused by porting to Java.

  • Code Reviews: It should be easier to conduct code reviews since the sources can be compared to the original version.

Below follows a slightly modified version of the original README file. Please note that the README refers to the C++ version. As mentioned above, usage of the Java version is almost identical.

The three most important methods for programmatic usage that you might be interested in are (see the sketch below):

  • Linear.train(…)
  • Linear.predict(…)
  • Linear.predictProbability(…)
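
A minimal, hedged sketch of programmatic training and prediction (the file name and parameter values are placeholders; see also the examples in the comments section below):

    import java.io.File;

    import de.bwaldvogel.liblinear.Feature;
    import de.bwaldvogel.liblinear.FeatureNode;
    import de.bwaldvogel.liblinear.Linear;
    import de.bwaldvogel.liblinear.Model;
    import de.bwaldvogel.liblinear.Parameter;
    import de.bwaldvogel.liblinear.Problem;
    import de.bwaldvogel.liblinear.SolverType;
    import de.bwaldvogel.liblinear.Train;

    public class QuickStart {
        public static void main(String[] args) throws Exception {
            // Read LIBSVM-formatted training data; the second argument is the bias
            // (-1 disables the bias term, matching the command-line default).
            Problem problem = Train.readProblem(new File("heart_scale"), -1);

            SolverType solver = SolverType.L2R_L2LOSS_SVC_DUAL; // -s 1, the default solver
            Parameter parameter = new Parameter(solver, 1.0 /* C */, 0.1 /* eps */);
            Model model = Linear.train(problem, parameter);

            // Predict the label of a single sparse instance (feature indices start at 1).
            Feature[] instance = {new FeatureNode(2, 0.1), new FeatureNode(3, 0.2)};
            double prediction = Linear.predict(model, instance);
            System.out.println("predicted label: " + prediction);
        }
    }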

Contributing

Please read the contributing guidelines if you want to contribute code to the project.

If you want to thank the author for this library or want to support the maintenance work, we are happy to receive a donation.


LIBLINEAR is a simple package for solving large-scale regularized linear classification, regression and outlier detection. It currently supports

  • L2-regularized logistic regression/L2-loss support vector classification/L1-loss support vector classification
  • L1-regularized L2-loss support vector classification/L1-regularized logistic regression
  • L2-regularized L2-loss support vector regression/L1-loss support vector regression
  • one-class support vector machine

This document explains the usage of LIBLINEAR.

To get started, please read the Quick Start section first. For developers, please check the Library Usage section to learn how to integrate LIBLINEAR in your software.

Table of Contents

  • When to use LIBLINEAR but not LIBSVM
  • Quick Start
  • train Usage
  • predict Usage
  • Examples
  • Library Usage
  • Additional Information

When to use LIBLINEAR but not LIBSVM

For some large datasets, training with and without nonlinear mappings gives similar performance. Without using kernels, one can efficiently train a much larger set via linear classification/regression. These data usually have a large number of features. Document classification is an example.

Warning: While generally liblinear is very fast, its default solver may be slow under certain situations (e.g., data not scaled or C is large). See Appendix B of our SVM guide about how to handle such cases.

http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf

Warning: If you are a beginner and your data sets are not large, you should consider LIBSVM first.

LIBSVM page: http://www.csie.ntu.edu.tw/~cjlin/libsvm

Quick Start

See the Usage section above for installing LIBLINEAR.

After installation, there are programs train and predict for training and testing, respectively.

For the data format, please check the README file of LIBSVM. Note that feature indices must start from 1 (not 0).

A sample classification data included in this package is heart_scale.

Type train heart_scale, and the program will read the training data and output the model file heart_scale.model. If you have a test set called heart_scale.t, then type predict heart_scale.t heart_scale.model output to see the prediction accuracy. The output file contains the predicted class labels.

For more information about train and predict, see the sections train Usage and predict Usage.

To obtain good performance, sometimes one needs to scale the data. Please check the program svm-scale of LIBSVM. For large and sparse data, use -l 0 to keep the sparsity.

train Usage

Usage: train [options] training_set_file [model_file]
options:
-s type : set type of solver (default 1)
  for multi-class classification
     0 -- L2-regularized logistic regression (primal)
     1 -- L2-regularized L2-loss support vector classification (dual)
     2 -- L2-regularized L2-loss support vector classification (primal)
     3 -- L2-regularized L1-loss support vector classification (dual)
     4 -- support vector classification by Crammer and Singer
     5 -- L1-regularized L2-loss support vector classification
     6 -- L1-regularized logistic regression
     7 -- L2-regularized logistic regression (dual)
  for regression
    11 -- L2-regularized L2-loss support vector regression (primal)
    12 -- L2-regularized L2-loss support vector regression (dual)
    13 -- L2-regularized L1-loss support vector regression (dual)
  for outlier detection
    21 -- one-class support vector machine (dual)
-c cost : set the parameter C (default 1)
-p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)
-n nu : set the parameter nu of one-class SVM (default 0.5)
-e epsilon : set tolerance of termination criterion
    -s 0 and 2
        |f'(w)|_2 <= eps*min(pos,neg)/l*|f'(w0)|_2,
        where f is the primal function and pos/neg are # of
        positive/negative data (default 0.01)
    -s 11
        |f'(w)|_2 <= eps*|f'(w0)|_2 (default 0.0001)
    -s 1, 3, 4, 7, and 21
        Dual maximal violation <= eps; similar to libsvm (default 0.1 except 0.01 for -s 21)
    -s 5 and 6
        |f'(w)|_1 <= eps*min(pos,neg)/l*|f'(w0)|_1,
        where f is the primal function (default 0.01)
    -s 12 and 13
        |f'(alpha)|_1 <= eps |f'(alpha0)|,
        where f is the dual function (default 0.1)
-B bias : if bias >= 0, instance x becomes [x; bias]; if < 0, no bias term added (default -1)
-R : not regularize the bias; must with -B 1 to have the bias; DON'T use this unless you know what it is
	(for -s 0, 2, 5, 6, 11)
-wi weight: weights adjust the parameter C of different classes (see README for details)
-v n: n-fold cross validation mode
-C : find parameters (C for -s 0, 2 and C, p for -s 11)
-q : quiet mode (no outputs)

Option -v randomly splits the data into n parts and calculates cross validation accuracy on them.

Option -C conducts cross validation under different parameters and finds the best one. This option is supported only by -s 0, -s 2 (for finding C) and -s 11 (for finding C, p). If the solver is not specified, -s 2 is used.

Formulations:

For L2-regularized logistic regression (-s 0), we solve

min_w w^Tw/2 + C \sum log(1 + exp(-y_i w^Tx_i))

For L2-regularized L2-loss SVC dual (-s 1), we solve

min_alpha  0.5(alpha^T (Q + I/2/C) alpha) - e^T alpha
    s.t.   0 <= alpha_i,

For L2-regularized L2-loss SVC (-s 2), we solve

min_w w^Tw/2 + C \sum max(0, 1- y_i w^Tx_i)^2

For L2-regularized L1-loss SVC dual (-s 3), we solve

min_alpha  0.5(alpha^T Q alpha) - e^T alpha
    s.t.   0 <= alpha_i <= C,

For L1-regularized L2-loss SVC (-s 5), we solve

min_w \sum |w_j| + C \sum max(0, 1- y_i w^Tx_i)^2

For L1-regularized logistic regression (-s 6), we solve

min_w \sum |w_j| + C \sum log(1 + exp(-y_i w^Tx_i))

For L2-regularized logistic regression (-s 7), we solve

min_alpha  0.5(alpha^T Q alpha) + \sum alpha_i*log(alpha_i) + \sum (C-alpha_i)*log(C-alpha_i) - a constant
    s.t.   0 <= alpha_i <= C,

where

Q is a matrix with Q_ij = y_i y_j x_i^T x_j.

For L2-regularized L2-loss SVR (-s 11), we solve

min_w w^Tw/2 + C \sum max(0, |y_i-w^Tx_i|-epsilon)^2

For L2-regularized L2-loss SVR dual (-s 12), we solve

min_beta  0.5(beta^T (Q + lambda I/2/C) beta) - y^T beta + \sum |beta_i|

For L2-regularized L1-loss SVR dual (-s 13), we solve

min_beta  0.5(beta^T Q beta) - y^T beta + \sum |beta_i|
    s.t.   -C <= beta_i <= C,

where

Q is a matrix with Q_ij = x_i^T x_j.

For one-class SVM dual (-s 21), we solve

min_alpha 0.5(alpha^T Q alpha)
    s.t.   0 <= alpha_i <= 1 and \sum alpha_i = nu*l,

where

Q is a matrix with Q_ij = x_i^T x_j.

If bias >= 0, w becomes [w; w_{n+1}] and x becomes [x; bias]. For example, L2-regularized logistic regression (-s 0) becomes

min_w w^Tw/2 + (w_{n+1})^2/2 + C \sum log(1 + exp(-y_i [w; w_{n+1}]^T[x_i; bias]))

Some may prefer not having (w_{n+1})^2/2 (i.e., bias variable not regularized). For primal solvers (-s 0, 2, 5, 6, 11), we provide an option -R to remove (w_{n+1})^2/2. However, -R is generally not needed as for most data with/without (w_{n+1})^2/2 give similar performances.

The primal-dual relationship implies that -s 1 and -s 2 give the same model, -s 0 and -s 7 give the same, and -s 11 and -s 12 give the same.

We implement the one-vs-the-rest multi-class strategy for classification. In training class i vs. non-i, their C parameters are (weight from -wi)*C and C, respectively. If there are only two classes, we train only one model. Thus weight1*C vs. weight2*C is used. See the examples below.

We also implement multi-class SVM by Crammer and Singer (-s 4):

min_{w_m, \xi_i}  0.5 \sum_m ||w_m||^2 + C \sum_i \xi_i
    s.t.  w^T_{y_i} x_i - w^T_m x_i >= e^m_i - \xi_i \forall m,i

where e^m_i = 0 if y_i  = m,
      e^m_i = 1 if y_i != m,

Here we solve the dual problem:

min_{\alpha}  0.5 \sum_m ||w_m(\alpha)||^2 + \sum_i \sum_m e^m_i alpha^m_i
    s.t.  \alpha^m_i <= C^m_i \forall m,i , \sum_m \alpha^m_i=0 \forall i

where w_m(\alpha) = \sum_i \alpha^m_i x_i,
and C^m_i = C if m  = y_i,
    C^m_i = 0 if m != y_i.

predict Usage

Usage: predict [options] test_file model_file output_file
options:
-b probability_estimates: whether to output probability estimates, 0 or 1 (default 0); currently for logistic regression only
-q : quiet mode (no outputs)

Note that -b is only needed in the prediction phase. This is different from the setting of LIBSVM.

Examples

> train data_file

Train linear SVM with L2-loss function.

> train -s 0 data_file

Train a logistic regression model.

> train -s 21 -n 0.1 data_file

Train a linear one-class SVM which selects roughly 10% data as outliers.

> train -v 5 -e 0.001 data_file

Do five-fold cross-validation using L2-loss SVM. Use a smaller stopping tolerance 0.001 than the default 0.1 if you want more accurate solutions.

> train -C data_file

Conduct cross validation many times by L2-loss SVM and find the parameter C which achieves the best cross validation accuracy.

> train -C -s 0 -v 3 -c 0.5 -e 0.0001 data_file

For parameter selection by -C, users can specify other solvers (currently -s 0, -s 2 and -s 11 are supported) and different number of CV folds. Further, users can use the -c option to specify the smallest C value of the search range. This option is useful when users want to rerun the parameter selection procedure from a specified C under a different setting, such as a stricter stopping tolerance -e 0.0001 in the above example. Similarly, for -s 11, users can use the -p option to specify the maximal p value of the search range.

> train -c 10 -w1 2 -w2 5 -w3 2 four_class_data_file

Train four classifiers:

positive        negative        Cp    Cn
class 1         class 2,3,4     20    10
class 2         class 1,3,4     50    10
class 3         class 1,2,4     20    10
class 4         class 1,2,3     10    10

> train -c 10 -w3 1 -w2 5 two_class_data_file

If there are only two classes, we train ONE model. The C values for the two classes are 10 and 50.
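
Programmatically, the same weighting can be set on the Parameter object; a hedged sketch assuming the setWeights method of the Java port:

    // Equivalent of "train -c 10 -w3 1 -w2 5":
    // class 3 gets C = 1 * 10, class 2 gets C = 5 * 10.
    Parameter parameter = new Parameter(SolverType.L2R_L2LOSS_SVC_DUAL, 10, 0.1);
    parameter.setWeights(new double[] {1, 5}, new int[] {3, 2});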

> predict -b 1 test_file data_file.model output_file

Output probability estimates (for logistic regression only).

Library Usage

These functions and structures are declared in the header file linear.h. You can see train.c and predict.c for examples showing how to use them. We define LIBLINEAR_VERSION and declare extern int liblinear_version; in linear.h, so you can check the version number.

  • Function: model* train(const struct problem *prob, const struct parameter *param);

    This function constructs and returns a linear classification or regression model according to the given training data and parameters.

    struct problem describes the problem:

      struct problem
      {
          int l, n;
          double *y;
          struct feature_node **x;
          double bias;
      };
    

    where l is the number of training data. If bias >= 0, we assume that one additional feature is added to the end of each data instance. n is the number of features (including the bias feature if bias >= 0). y is an array containing the target values (integers in classification, real numbers in regression), and x is an array of pointers, each of which points to a sparse representation (array of feature_node) of one training vector.

    For example, if we have the following training data:

      LABEL       ATTR1   ATTR2   ATTR3   ATTR4   ATTR5
      -----       -----   -----   -----   -----   -----
      1           0       0.1     0.2     0       0
      2           0       0.1     0.3    -1.2     0
      1           0.4     0       0       0       0
      2           0       0.1     0       1.4     0.5
      3          -0.1    -0.2     0.1     1.1     0.1
    

    and bias = 1, then the components of problem are:

      l = 5
      n = 6
    
      y -> 1 2 1 2 3
    
      x -> [ ] -> (2,0.1) (3,0.2) (6,1) (-1,?)
           [ ] -> (2,0.1) (3,0.3) (4,-1.2) (6,1) (-1,?)
           [ ] -> (1,0.4) (6,1) (-1,?)
           [ ] -> (2,0.1) (4,1.4) (5,0.5) (6,1) (-1,?)
           [ ] -> (1,-0.1) (2,-0.2) (3,0.1) (4,1.1) (5,0.1) (6,1) (-1,?)
    
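    In the Java version the same problem can be constructed as follows; a hedged sketch (the Java API has no (-1,?) terminator, and the bias feature (6,1) is included explicitly, as in the table above):

      Problem problem = new Problem();
      problem.l = 5;
      problem.n = 6;
      problem.bias = 1;
      problem.y = new double[] {1, 2, 1, 2, 3};
      problem.x = new Feature[][] {
          {new FeatureNode(2, 0.1), new FeatureNode(3, 0.2), new FeatureNode(6, 1)},
          {new FeatureNode(2, 0.1), new FeatureNode(3, 0.3), new FeatureNode(4, -1.2), new FeatureNode(6, 1)},
          {new FeatureNode(1, 0.4), new FeatureNode(6, 1)},
          {new FeatureNode(2, 0.1), new FeatureNode(4, 1.4), new FeatureNode(5, 0.5), new FeatureNode(6, 1)},
          {new FeatureNode(1, -0.1), new FeatureNode(2, -0.2), new FeatureNode(3, 0.1), new FeatureNode(4, 1.1), new FeatureNode(5, 0.1), new FeatureNode(6, 1)},
      };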

    struct parameter describes the parameters of a linear classification or regression model:

      struct parameter
      {
              int solver_type;
    
              /* these are for training only */
              double eps;             /* stopping tolerance */
              double C;
              double nu;              /* one-class SVM only */
              int nr_weight;
              int *weight_label;
              double* weight;
              double p;
              double *init_sol;
      };
    

    solver_type can be one of L2R_LR, L2R_L2LOSS_SVC_DUAL, L2R_L2LOSS_SVC, L2R_L1LOSS_SVC_DUAL, MCSVM_CS, L1R_L2LOSS_SVC, L1R_LR, L2R_LR_DUAL, L2R_L2LOSS_SVR, L2R_L2LOSS_SVR_DUAL, L2R_L1LOSS_SVR_DUAL, ONECLASS_SVM.

    for classification
    • L2R_LR L2-regularized logistic regression (primal)
    • L2R_L2LOSS_SVC_DUAL L2-regularized L2-loss support vector classification (dual)
    • L2R_L2LOSS_SVC L2-regularized L2-loss support vector classification (primal)
    • L2R_L1LOSS_SVC_DUAL L2-regularized L1-loss support vector classification (dual)
    • MCSVM_CS support vector classification by Crammer and Singer
    • L1R_L2LOSS_SVC L1-regularized L2-loss support vector classification
    • L1R_LR L1-regularized logistic regression
    • L2R_LR_DUAL L2-regularized logistic regression (dual)

    for regression
    • L2R_L2LOSS_SVR L2-regularized L2-loss support vector regression (primal)
    • L2R_L2LOSS_SVR_DUAL L2-regularized L2-loss support vector regression (dual)
    • L2R_L1LOSS_SVR_DUAL L2-regularized L1-loss support vector regression (dual)

    for outlier detection
    • ONECLASS_SVM one-class support vector machine (dual)

    C is the cost of constraint violation. p is the sensitivity of the loss in support vector regression (the epsilon in the epsilon-insensitive loss). nu in ONECLASS_SVM approximates the fraction of data as outliers. eps is the stopping criterion.

    nr_weight, weight_label, and weight are used to change the penalty for some classes (if the weight for a class is not changed, it is set to 1). This is useful for training a classifier using unbalanced input data or with asymmetric misclassification cost.

    nr_weight is the number of elements in the array weight_label and weight. Each weight[i] corresponds to weight_label[i], meaning that the penalty of class weight_label[i] is scaled by a factor of weight[i].

    If you do not want to change penalty for any of the classes, just set nr_weight to 0.

    init_sol includes the initial weight vectors (supported for only some solvers). See the explanation of the vector w in the model structure.

    NOTE To avoid wrong parameters, check_parameter() should be called before train().

    struct model stores the model obtained from the training procedure:

      struct model
      {
              struct parameter param;
              int nr_class;           /* number of classes */
              int nr_feature;
              double *w;
              int *label;             /* label of each class */
              double bias;
              double rho;             /* one-class SVM only */
      };
    

    param describes the parameters used to obtain the model.

    nr_class and nr_feature are the number of classes and features, respectively. nr_class = 2 for regression.

    The array w gives feature weights; its size is nr_feature*nr_class but is nr_feature if nr_class = 2. We use one against the rest for multi-class classification, so each feature index corresponds to nr_class weight values. Weights are organized in the following way

        +------------------+------------------+------------+
        | nr_class weights | nr_class weights |  ...
        | for 1st feature  | for 2nd feature  |
        +------------------+------------------+------------+
    

    The array label stores class labels.

    If bias >= 0, x becomes [x; bias]. The number of features is increased by one, so w is a (nr_feature+1)*nr_class array. The value of bias is stored in the variable bias.

    rho is the bias term used in one-class SVM only.

  • Function: void cross_validation(const problem *prob, const parameter *param, int nr_fold, double *target);

    This function conducts cross validation. Data are separated into nr_fold folds. Under the given parameters, each fold is sequentially validated using the model trained on the remaining folds. Predicted labels in the validation process are stored in the array called target.

    The format of prob is same as that for train().
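
    In the Java port this is exposed as Linear.crossValidation; a minimal hedged sketch:

      double[] target = new double[problem.l];
      // 5-fold cross validation; target[i] receives the label predicted for
      // instance i while its fold was held out.
      Linear.crossValidation(problem, parameter, 5, target);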

  • Function: void find_parameters(const struct problem *prob, const struct parameter *param, int nr_fold, double start_C, double start_p, double *best_C, double *best_p, double *best_score);

    This function is similar to cross_validation. However, instead of conducting cross validation under specified parameters, it searches for the best ones. For -s 0 and 2, it conducts cross validation many times under parameters C = start_C, 2*start_C, 4*start_C, 8*start_C, ..., and finds the best one with the highest cross validation accuracy. For -s 11, it conducts cross validation many times with a two-fold loop. The outer loop considers a default sequence of p = 19/20*max_p, ..., 1/20*max_p, 0 and under each p value the inner loop considers a sequence of parameters C = start_C, 2*start_C, 4*start_C, ..., and finds the best one with the lowest mean squared error.

    If start_C <= 0, then this procedure calculates a small enough C for prob as the start_C. The procedure stops when the models of all folds become stable or C reaches max_C.

    If start_p <= 0, then this procedure calculates a maximal p for prob as the start_p. Otherwise, the procedure starts with the first i/20*max_p <= start_p so the outer sequence is i/20*max_p, (i-1)/20*max_p, ..., 0.

    The best C, the best p, and the corresponding accuracy (or MSE) are assigned to *best_C, *best_p and *best_score, respectively. For classification, *best_p is not used, and the returned value is -1.

  • Function: double predict(const model *model_, const feature_node *x);

    For a classification model, the predicted class for x is returned. For a regression model, the function value of x calculated using the model is returned.

  • Function: double predict_values(const struct model *model_, const struct feature_node *x, double* dec_values);

    This function gives nr_w decision values in the array dec_values. nr_w=1 if regression is applied or the number of classes is two. An exception is multi-class SVM by Crammer and Singer (-s 4), where nr_w = 2 if there are two classes. For all other situations, nr_w is the number of classes.

    We implement one-vs-the rest multi-class strategy (-s 0,1,2,3,5,6,7) and multi-class SVM by Crammer and Singer (-s 4) for multi-class SVM. The class with the highest decision value is returned.

  • Function: double predict_probability(const struct model *model_, const struct feature_node *x, double* prob_estimates);

    This function gives nr_class probability estimates in the array prob_estimates. nr_class can be obtained from the function get_nr_class. The class with the highest probability is returned. Currently, we support only the probability outputs of logistic regression.
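
    A hedged sketch of the Java equivalent, Linear.predictProbability, which fills the probability array and returns the predicted label:

      double[] probabilities = new double[model.getNrClass()];
      double predictedLabel = Linear.predictProbability(model, instance, probabilities);
      int[] labels = model.getLabels(); // probabilities[i] corresponds to labels[i]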

  • Function: int get_nr_feature(const model *model_);

    The function gives the number of attributes of the model.

  • Function: int get_nr_class(const model *model_);

    The function gives the number of classes of the model. For a regression model, 2 is returned.

  • Function: void get_labels(const model *model_, int* label);

    This function outputs the name of labels into an array called label. For a regression model, label is unchanged.

  • Function: double get_decfun_coef(const struct model *model_, int feat_idx, int label_idx);

    This function gives the coefficient for the feature with feature index = feat_idx and the class with label index = label_idx. Note that feat_idx starts from 1, while label_idx starts from 0. If feat_idx is not in the valid range (1 to nr_feature), then a zero value will be returned. For classification models, if label_idx is not in the valid range (0 to nr_class-1), then a zero value will be returned; for regression models and one-class SVM models, label_idx is ignored.

  • Function: double get_decfun_bias(const struct model *model_, int label_idx);

    This function gives the bias term corresponding to the class with the label_idx. For classification models, if label_idx is not in a valid range (0 to nr_class-1), then a zero value will be returned; for regression models, label_idx is ignored. This function cannot be called for a one-class SVM model.

  • Function: double get_decfun_rho(const struct model *model_);

    This function gives rho, the bias term used in one-class SVM only. This function can only be called for a one-class SVM model.

  • Function: const char *check_parameter(const struct problem *prob, const struct parameter *param);

    This function checks whether the parameters are within the feasible range of the problem. This function should be called before calling train() and cross_validation(). It returns NULL if the parameters are feasible, otherwise an error message is returned.

  • Function: int check_probability_model(const struct model *model);

    This function returns 1 if the model supports probability output; otherwise, it returns 0.

  • Function: int check_regression_model(const struct model *model);

    This function returns 1 if the model is a regression model; otherwise it returns 0.

  • Function: int check_oneclass_model(const struct model *model);

    This function returns 1 if the model is a one-class SVM model; otherwise it returns 0.

  • Function: int save_model(const char *model_file_name, const struct model *model_);

    This function saves a model to a file; returns 0 on success, or -1 if an error occurs.

  • Function: struct model *load_model(const char *model_file_name);

    This function returns a pointer to the model read from the file, or a null pointer if the model could not be loaded.
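
    In the Java port, saving and loading are available on the Model class; a hedged sketch (model.save(File) appears in the comments below, and Model.load(File) is assumed as its counterpart):

      File modelFile = new File("heart_scale.model");
      model.save(modelFile);                  // persist the trained model
      Model restored = Model.load(modelFile); // read it back later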

  • Function: void free_model_content(struct model *model_ptr);

    This function frees the memory used by the entries in a model structure.

  • Function: void free_and_destroy_model(struct model **model_ptr_ptr);

    This function frees the memory used by a model and destroys the model structure.

  • Function: void destroy_param(struct parameter *param);

    This function frees the memory used by a parameter set.

  • Function: void set_print_string_function(void (*print_func)(const char *));

    Users can specify their output format by a function. Use set_print_string_function(NULL); for default printing to stdout.
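
    In the Java port, the equivalent mechanism is assumed to be Linear.setDebugOutput / Linear.disableDebugOutput; a hedged sketch:

      Linear.disableDebugOutput();       // quiet mode, like the -q option
      Linear.setDebugOutput(System.err); // or redirect the library's log output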

Additional Information

If you find LIBLINEAR helpful, please cite it as

R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin.
LIBLINEAR: A Library for Large Linear Classification, Journal of
Machine Learning Research 9(2008), 1871-1874. Software available at
http://www.csie.ntu.edu.tw/~cjlin/liblinear

For any questions and comments, please send your email to [email protected]

Comments
  • Changes to run v1.5 directly from Maven Central

    Dear @bwaldvogel , DKPro (https://github.com/dkpro/dkpro-core) has a dependency on liblinear 1.5, and we would like to have that dependency downloaded directly from Maven Central (currently, only versions >=1.8 of liblinear are kept there). We need to keep the old group id/artifact id though (liblinear.liblinear).

    I provided the changes needed for packaging a 1.51 version without the need of other dependencies apart from the ones already in Maven Central.

    Please, would it be possible for you to incorporate those changes into your repository and upload the new jar to Maven Central?

    Thank you, Beto


    These changes are needed to upload v.1.5 to Maven Central:

    • Use netlib-java 0.9.3 dependency from Maven Central, instead of using BLAS 0.8 with manual dependency resolution.
    • Updates to Tron.java due to newer BLAS version in netlib-java 0.9.3
    • Updates in the pom.xml (maybe some extra configuration is needed)
    opened by betoboullosa 12
  • Linear's global RNG makes it difficult to reproduce models or track concurrent executions

    We use liblinear-java in Tribuo, and it’s working very well. We’re adding a reproducibility package to Tribuo to rebuild Tribuo models from the provenance metadata they carry, and as part of the tests for that package I noticed that liblinear-java has a global RNG that causes some of the algorithms to not produce bit-wise exact reproductions when executed on the same inputs. In general Tribuo tracks all RNG state and manages it to ensure that concurrent training runs use independently tracked streams of random numbers for provenance purposes, and the global shared state in liblinear-java means we can’t effectively track it and so we’ll have to enforce sequential use of liblinear-java via synchronization and consistently reset the RNG to a known state.

    Is it possible to move the static random instance in Linear into Problem as an instance field? To preserve the original behaviour it could initialize itself to a Random instance using DEFAULT_RANDOM_SEED, or the code could be modified so it defaults to the global RNG if no instance RNG is present in the Problem. The first option would basically just be a find/replace on random with prob.random, along with adding the extra field to Problem (I think it touches approximately 9 lines). The second option would be a little more involved as it requires guards on the 8 uses of random and thus would slightly increase divergence from the C++ liblinear, so might not be as desirable from a maintainability perspective. However it would preserve the existing behaviour exactly for users who don’t set the random field on Problem. We’d be happy to contribute either patch if you’d accept it.

    opened by Craigacp 6
  • fixed the code in Linear.java to take the max_iter from input when using L2R_L2LOSS_SVC or L2R_LR as solver

    Previously, when one used the constructor for Parameter to set max_iter, the code wouldn't use it when using either L2R_L2LOSS_SVC or L2R_LR. I changed the code to allow it to receive the max_iter from the parameter when set.

    opened by salimm 6
  • Add Multithreading for L2_LR Solver

    1. Adds a threadCount param and associated command-line argument. This argument is a no-op unless the L2_LR solver is used.
    2. Adds the LLThreadPool class, a wrapper around a ThreadPoolExecutor with accompanying helper classes. The general multithreading strategy is: (a) break up an operation over a list of examples into "chunks", small subsets of the examples, (b) process each chunk independently via the thread pool, accumulating results (e.g. the partial gradients) into a thread-local array where applicable, (c) acquire a lock and then add the thread-local array into the "real", global array of values.
    3. The actual multithreaded operations are in L2R_LrFunction. If multithreading is not used, the code paths are exactly the same as before, except that inner loops are factored into separate functions to avoid code duplication.
    4. Added unit tests.

    I've tried to keep to the style of the original code. Wall clock speedup of a problem with 100K examples in 5 dimensions on a MacBook with 4 threads was roughly 2.4x (not rigorously tested).

    feature 
    opened by jeffpasternack 5
  • Bias parameter not used in Linear.predictValues()

    I am using a logistic regression model with 2 features and a bias.

    I would expect the score to be calculated as

    w1*x1 + w2*x2 + bias
    

    but looking at https://github.com/bwaldvogel/liblinear-java/blob/99518885860ad5f88f7582c3fb491607352b6dc7/src/main/java/de/bwaldvogel/liblinear/Linear.java#L503 it seems like the bias parameter is never added to dec_values.

    Am I right to think the bias parameter should contribute to the score or is my understanding incorrect?

    opened by tommilata 4
  • Fixing max_iters for all learners

    Fixed Linear.train_one to receive the max_iter from the Parameter object. Changes are listed below:

    1. For solvers that were using the Tron class (L2R_LR, L2R_L2LOSS_SVC, L2R_L2LOSS_SVR): the class already had a constructor that would receive max_iter as an argument, but it wasn't used. Therefore the learner would automatically just use 1000.

    2. For the rest of the solvers (L2R_LR_DUAL, L1R_LR, L1R_L2LOSS_SVC, L2R_L1LOSS_SVC_DUAL, L2R_L2LOSS_SVC_DUAL): the function was changed to receive max_iter as an argument. All of these functions were defining a constant at the beginning of the function.

    3. L2R_L2LOSS_SVR_DUAL and L2R_L1LOSS_SVR_DUAL were already handling it, so no changes were applied to these solver types.

    opened by salimm 4
  • Different Results With the Same Experiment

    I get slightly different results by running the same experiment (LogReg L1, reg=0.3) each time. Is that possible or must there be a bug either with the library or with my code?

    This is not the case if I use LogReg L2: there I get exactly the same results. I am testing out the other LogReg L1 implementations (StanfordNLP and Smile) as well. Both produce deterministic results.

    opened by hrzafer 4
  • Bias term is added by default

    The following code will result in a Model object where model.nr_feature = n-1 even if the number of features in the dataset is n, excluding the bias term.

    problem.l = l
    problem.n = n
    problem.x = x
    problem.y = y
    ...
    Model model = Linear.train(problem, parameter);
    

    This is because in the above code the bias term is added implicitly (the default value for problem.bias is 0), and in Linear.train() there is a line if (prob.bias >= 0) model.nr_feature = n - 1;. To avoid this we can change the line problem.n = n to problem.n = n + 1. But the Java API documentation is misleading, and the CLI documentation says the default value for bias is -1, which makes one think that it is also the case for programmatic access.

    opened by hrzafer 2
  • Added reasonable error message to loadModel parsing

    ...when feature vector weights don't end with a whitespace character.

    Spent a bit too much time this morning debugging this - would've been really helpful to me to just get a straightforward error message suggesting that my model was semantically incorrect from the point of view of the parser, and describing how and where.

    The fix is basically to catch the ArrayIndexOutOfBoundsException, and if b > 127, then throw a different RuntimeException with a message actually spelling out what went wrong. Otherwise, just throw the original exception.

    opened by petergaultney 2
  • Documentation For Java API

    Hi,

    This is a great library. I've been looking for a lightweight, reliable and commercial-friendly Java implementation of LogReg, MaxEnt and SVM for a while, and surprisingly it wasn't as easy as I thought. Could you provide some Java code examples for the basic usage of the API? I could figure out the following but I wonder if there is more to know.

            Problem problem = Train.readProblem(new File("train.libsvm"), 1);
            Problem testProb = Train.readProblem(new File("test.libsvm"), 1);
    
            SolverType solver = SolverType.L2R_LR; // -s 0
            double C = 1.5;    // cost of constraints violation
            double eps = 0.01; // stopping criteria
    
            Parameter parameter = new Parameter(solver, C, eps);
            final Model model = Linear.train(problem, parameter);
            File modelFile = new File("model");
            model.save(modelFile);
            for (int i = 0; i < testProb.x.length; i++) {
                Feature[] instance = testProb.x[i];
                double prediction = Linear.predict(model, instance);
            }
    
    opened by hrzafer 2
  • InvalidInputDataException: indices must be sorted in ascending order (line 479)

    liblinear has problems reading libsvm formatted files which use index 0

    Is there a reason for this or is it just a bug? If it is a bug could you change line 307 in the Train class to int indexBefore = -1;

    opened by sygel 1
  • Is a bias term added to the features automatically?

    I am maintaining code that I mostly didn't write which uses liblinear for logistic regression. My understanding from the documentation was that setting bias to a value greater than 0 will result in a synthetic feature being added. But I cannot see anywhere in the code where this feature is added either during training or prediction. Is it required to both set the bias parameter to a value greater than 1 and also manually add the synthetic feature node during training and prediction?

    opened by bmccord2 2
  • NullPointerException when using sparse data to train Model

    Hello everyone,

    I am currently working on a project that uses your library to train classification models. I build a Problem object with sparse feature vectors (following some of your examples), and when I start training with such a Problem object I get an NPE from the train method, occurring when it iterates through the feature vectors. Looking at your code I can see you iterate through features in a "for each" way, but when I follow the exception using debugging tools, I see "for i=0 to n" logic, which will obviously point toward a null feature in a sparse feature vector.

    I am on macOS 11.2.3 and I use Oracle JDK 11.0.6.

    opened by AlexandreLabadie 1
  • Thread safety problem in predict: flag_predict_probability shouldn't be static

    The flag_predict_probability boolean in predict is declared static. This causes a problem when running two different predict jobs in the same Java process with different probability options: the setting from the second call will overwrite the one from the first call. This may cause a prediction using a non-probability-capable solver type to fail, despite being called with correct parameters, if a simultaneous job runs with probabilities enabled.

    opened by Googulator 3
  • How to obtain the support vectors

    Hi --

    I am using Weka's LibLINEAR class and want to obtain the support vectors after the classifier has been trained.

    Is there an example showing how this can be done?

    Thanks, Haimonti

    opened by Haimonti 0