Java 8 annotation processor and framework for deriving algebraic data types constructors, pattern-matching, folds, optics and typeclasses.

Overview

Derive4J: Java 8 annotation processor for deriving algebraic data types constructors, pattern matching and more!


tl;dr Show me how to write, say, the Either sum type with Derive4J!


Caution: if you are not familiar with Algebraic Data Types or the "visitor pattern" then you may want to learn a bit about them.

So, what can this project do for us, poor functional programmers stuck with a legacy language called Java? A good deal of what is commonly available in better languages like Haskell, including the features described in the sections below.

Algebraic data types come in two flavours: product types and sum types. This readme focuses on sum types, as they are the more interesting case; product types are the well-known common case in Java. Derive4J nonetheless handles product types in exactly the same fashion (ie. through a visitor interface with a single abstract method), as sketched below.
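For instance, a product type in this style is just a single-method visitor. A minimal sketch (a hypothetical Point type, mirroring the Address example further down this readme):

import java.util.function.BiFunction;
import org.derive4j.Data;
import org.derive4j.FieldNames;

@Data
public abstract class Point {
  // a single constructor = a product type; Derive4J derives the constructor,
  // getters, setters/modifiers, etc. just as it does for sum types:
  public abstract <R> R match(@FieldNames({"x", "y"}) BiFunction<Integer, Integer, R> Point);
}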

Example: a 'Visitor' for HTTP Request

Let's say we want to model an HTTP request. For the sake of the example let's say that an HTTP request can either be

  • a GET on a given path
  • a DELETE on a given path
  • a POST of a content body on a given path
  • a PUT of a content body on a given path

and nothing else!

You could then use the corrected visitor pattern and write the following class in Java:

package org.derive4j.example;

import org.derive4j.Data;

/** A data type to model an http request. */
@Data
public abstract class Request {

  /** the Request 'visitor' interface, R being the return type
   *  used by the 'accept' method : */
  interface Cases<R> {
    // A request can either be a 'GET' (of a path):
    R GET(String path);
    // or a 'DELETE' (of a path):
    R DELETE(String path);
    // or a 'PUT' (on a path, with a body):
    R PUT(String path, String body);
    // or a 'POST' (on a path, with a body):
    R POST(String path, String body);
    // and nothing else!
  }

  // the 'accept' method of the visitor pattern:
  public abstract <R> R match(Cases<R> cases);

  /**
   * Alternatively and equivalently to the visitor pattern above, if you prefer a more FP style,
   * you can define a catamorphism instead. (see examples)
   * (most useful for standard data type like Option, Either, List...)
   */
}

Constructors

Without Derive4J, you would have to create subclasses of Request for all four cases. That is, write at the minimum something like:

  public static Request GET(String path) {
    return new Request() {
      @Override
      public <R> R match(Cases<R> cases) {
        return cases.GET(path);
      }
    };
  }

for each case. But thanks to the @Data annotation, Derive4J will do that for you! That is, it will generate a Requests class (the name is configurable; when using Maven the class is generated by default in target/generated-sources/annotations) with four static factory methods (what we call 'constructors' in FP):

  public static Request GET(String path) {...}
  public static Request DELETE(String path) {...}
  public static Request PUT(String path, String body) {...}
  public static Request POST(String path, String body) {...}

You can also ask Derive4J to generate null checks with:

@Data(arguments = ArgOption.checkedNotNull)
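With that option the generated constructors are expected to reject null arguments at construction time; a minimal usage sketch:

  // with ArgOption.checkedNotNull, passing null to a generated constructor
  // is expected to fail fast with a NullPointerException instead of
  // creating a Request with a null path:
  Request invalid = Requests.GET(null);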

equals, hashCode, toString?

Derive4J's philosophy is to be as safe and consistent as possible. That is why Object.{equals, hashCode, toString} are not implemented by generated classes by default (they are best left unused, as they break parametricity). Nonetheless, as a concession to legacy, it is possible to force Derive4J to implement them by declaring them abstract, e.g. by adding the following to your annotated class:

  @Override
  public abstract int hashCode();
  @Override
  public abstract boolean equals(Object obj);
  @Override
  public abstract String toString();

The safer solution is to never use those methods and to use 'type classes' instead, e.g. Equal, Hash and Show. The project Derive4J for Functional Java aims to generate them automatically.

Pattern matching syntaxes

Now let's say that you want a function that returns the body size of a Request. Without Derive4J you would write something like:

  static final Function<Request, Integer> getBodySize = request -> 
      request.match(new Cases<Integer>() {
        public Integer GET(String path) {
          return 0;
        }
        public Integer DELETE(String path) {
          return 0;
        }
        public Integer PUT(String path, String body) {
          return body.length();
        }
        public Integer POST(String path, String body) {
          return body.length();
        }
      });

With Derive4J you can do the same thing far less verbosely, thanks to the generated fluent structural pattern-matching syntax. And it is exhaustivity-checked: you must handle all cases. The above can be rewritten as:

static final Function<Request, Integer> getBodySize = Requests.cases()
      .GET_(0) // shortcut for .GET(path -> 0)
      .DELETE_(0)
      .PUT((path, body)  -> body.length())
      .POST((path, body) -> body.length());

or even (because you don't care about the GET and DELETE cases):

static final Function<Request, Integer> getBodySize = Requests.cases()
      .PUT((path, body)  -> body.length())
      .POST((path, body) -> body.length())
      .otherwise_(0);

Derive4J also allows matching directly against a value:

static int getBodyLength(Request request) {
  return Requests.caseOf(request)
      .PUT((path, body)  -> body.length())
      .POST((path, body) -> body.length())
      .otherwise_(0);
}

Accessors (getters)

Now, pattern matching every time you want to inspect an instance of Request is a bit tedious. For this reason Derive4J generates 'getter' static methods for all fields. For the path and body fields, Derive4J will generate the following methods in the Requests class:

  public static String getPath(Request request){
    return Requests.cases()
        .GET(path          -> path)
        .DELETE(path       -> path)
        .PUT((path, body)  -> path)
        .POST((path, body) -> path)
        .apply(request);
  }
  // return an Optional because the body is not present in the GET and DELETE cases:
  static Optional<String> getBody(Request request){
    return Requests.cases()
        .PUT((path, body)  -> body)
        .POST((path, body) -> body)
        .otherwiseEmpty()
        .apply(request);
  }

(Actually, the generated code is equivalent but more efficient.)

Using the generated getBody method, we can rewrite our getBodySize function as:

static final Function<Request, Integer> getBodySize = request ->
      Requests.getBody(request)
              .map(String::length)
              .orElse(0);

Functional setters ('withers')

The most painful part of immutable data structures (like the ones generated by Derive4J) is updating them. Scala case classes have copy methods for that. Derive4J generates similar modifier and setter methods in the Requests class:

  public static Function<Request, Request> setPath(String newPath){
    return Requests.cases()
            .GET(path          -> Requests.GET(newPath))
            .DELETE(path       -> Requests.DELETE(newPath))
            .PUT((path, body)  -> Requests.PUT(newPath, body))
            .POST((path, body) -> Requests.POST(newPath, body));
  }
  public static Function<Request, Request> modPath(Function<String, String> pathMapper){
    return Requests.cases()
            .GET(path          -> Requests.GET(pathMapper.apply(path)))
            .DELETE(path       -> Requests.DELETE(pathMapper.apply(path)))
            .PUT((path, body)  -> Requests.PUT(pathMapper.apply(path), body))
            .POST((path, body) -> Requests.POST(pathMapper.apply(path), body));
  }
  public static Function<Request, Request> setBody(String newBody){
    return Requests.cases()
            .GET(path          -> Requests.GET(path))    // identity function for GET
            .DELETE(path       -> Requests.DELETE(path)) // and DELETE cases.
            .PUT((path, body)  -> Requests.PUT(path, newBody))
            .POST((path, body) -> Requests.POST(path, newBody));
  }
  ...

By returning a function, modifiers and setters allow for a lightweight syntax when updating deeply nested immutable data structures.
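For example (a usage sketch with the generated methods above), since they are plain java.util.function.Functions they compose with andThen before being applied:

  Request request = Requests.POST("/items", "{}");
  // setters/modifiers return functions, so updates compose:
  Request updated = Requests.modPath(path -> path + "/v2")
                            .andThen(Requests.setBody("{\"id\": 1}"))
                            .apply(request);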

First class laziness

Languages like Haskell provide laziness by default, which simplifies a lot of algorithms. In traditional Java you would have to declare a method argument as Supplier<Request> (and do memoization) to emulate laziness. With Derive4J that is no longer necessary, as it generates a lazy constructor that gives you transparent lazy evaluation for all consumers of your data type:

  // the requestExpression will be lazy-evaluated on the first call
  // to the 'match' method of the returned Request instance:
  public static Request lazy(Supplier<Request> requestExpression) {
    ...
  }
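A usage sketch (assuming, as the comment above describes, evaluation on the first match and memoization thereafter):

  Request deferred = Requests.lazy(() -> {
    System.out.println("building request..."); // side effect just to observe evaluation
    return Requests.GET("/slow");
  });
  // nothing has been evaluated yet; the first inspection forces the value:
  String path = Requests.getPath(deferred);    // prints "building request..."
  String same = Requests.getPath(deferred);    // no second evaluation (assuming memoization)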

Have a look at List for how to implement a lazy cons list in Java using Derive4J (you may also want to see the associated generated code).

Flavours

In the example above, we have used the default JDK flavour. Also available are FJ (Functional Java), Fugue, Javaslang/Vavr, HighJ, Guava and Cyclops (Cyclops-react) flavours. When using one of those alternative flavours, Derive4J will use, e.g., the specific Option implementation from that project instead of the JDK Optional class.

Optics (functional lenses)

If you are not familiar with optics, have a look at Monocle (for Scala, but Functional Java provides similar abstractions).

Using Derive4J-generated code, defining optics is a breeze (you need to use the FJ flavour by specifying @Data(flavour = Flavour.FJ)):

  /**
   * Lenses: optics focused on a field present for all data type constructors
   * (the getter cannot 'fail'):
   */
  public static final Lens<Request, String> _path = lens(
      Requests::getPath,
      Requests::setPath);
  /**
   * Optional: optics focused on a field that may not be present for all constructors
   * (the getter returns an 'Option'):
   */
  public static final Optional<Request, String> _body = optional(
      Requests::getBody,
      Requests::setBody);
  /**
   * Prism: optics focused on a specific constructor:
   */
  public static final Prism<Request, String> _GET = prism(
      // Getter function
      Requests.cases()
          .GET(fj.data.Option::some)
          .otherwise(Option::none),
      // Reverse Get function (aka constructor)
      Requests::GET);

  // If there is more than one field, we use a tuple as the prism target:
  public static final Prism<Request, P2<String, String>> _POST = prism(
      // Getter:
      Requests.cases()
          .POST((path, body) -> p(path, body))
          .otherwiseNone(),
      // reverse get (construct a POST request given a P2<String, String>):
      p2 -> Requests.POST(p2._1(), p2._2()));
}

Smart constructors

Sometimes you want to validate the constructor parameters before returning an instance of a type. When using Smart visibility (@Data(@Derive(withVisibility = Visibility.Smart))), Derive4J will not expose the "raw" constructors and setters as public, but will use package-private visibility for those methods instead (getters will still be public).

Then you expose a public static factory method that does the necessary validation of the arguments before returning an instance (typically wrapped in an Option/Either/Validation), and that public factory will be the only way to get an instance of that type.
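A sketch using the Request type from above (not the actual PersonName example): with Smart visibility the raw Requests.GET constructor is package-private, so the only public entry point is a validating factory such as this hypothetical validatedGet:

  // must live in the same package as the generated Requests class:
  public static Optional<Request> validatedGet(String path) {
    return path.startsWith("/")
        ? Optional.of(Requests.GET(path)) // raw constructor, package-private under Smart visibility
        : Optional.empty();
  }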

See usage of this feature in PersonName.

Static methods export

It is generally considered good style to keep static methods and instance methods separate, especially because overloads can cause ambiguities when used as method references.

The Java file generated by Derive4J contains only static methods, so it makes sense to use the generated class as the main entry point for the static part of the data type's API.

To this end, Derive4J supports re-exporting your own manually written static methods as part of the generated class API. It can do so in two ways (which can be combined):

  1. by specifying that the main generated class must extend a given class, e.g. MyStaticMethods.class, thus exposing all of its static methods through inheritance;
  2. by annotating your package-private static methods with @ExportAsPublic: Derive4J will generate public forwarding methods in the generated class and, as a bonus, will memoize the results of nullary methods. For example:
@Data(@Derive(extend = MyStaticMethods.class))
public abstract class List<A> {
  // package-private static class with public static methods:
  static abstract class MyStaticMethods {
    public static <A> List<A> singleton(A a) {
      return Lists.cons(a, Lists.nil());
    }
  }
  // Or use the annotation, either in the above MyStaticMethods class
  // or directly in the data type class:
  @ExportAsPublic
  static <A> List<A> singleton(A a) {
    return Lists.cons(a, Lists.nil());
  }
}

public static void main(final String[] args) {
  // enjoy a single access point for all static methods:
  List<String> a = Lists.singleton("a");
}

See usage of this feature in PersonName.

Updating deeply nested immutable data structure

Let's say you want to model a CRM. Each client is a Person who can be contacted by email, by telephone or by postal mail. With Derive4J you could write the following:

import org.derive4j.*;
import java.util.function.BiFunction;

@Data
public abstract class Address {
  public abstract <R> R match(@FieldNames({"number", "street"}) 
  			      BiFunction<Integer, String, R> Address);
}
import org.derive4j.Data;

@Data
public abstract class Contact {
    interface Cases<R> {
      R byEmail(String email);
      R byPhone(String phoneNumber);
      R byMail(Address postalAddress);
    }
    public abstract <R> R match(Cases<R> cases);
}
import org.derive4j.*;
import java.util.function.BiFunction;

@Data
public abstract class Person {
  public abstract <R> R match(@FieldNames({"name", "contact"})
                              BiFunction<String, Contact, R> Person);
}

But now we have a problem: All the clients have been imported from a legacy database with an off-by-one error for the street number! We must create a function that increments each Person's street number (if it exists) by one. And we have to do this without modifying the original data structure (because it is immutable). With Derive4J, writing such a function is trivial:

import java.util.Optional;
import java.util.function.Function;

import static org.derive4j.example.Addresss.Address;
import static org.derive4j.example.Addresss.getNumber;
import static org.derive4j.example.Addresss.modNumber;
import static org.derive4j.example.Contacts.getPostalAddress;
import static org.derive4j.example.Contacts.modPostalAddress;
import static org.derive4j.example.Persons.Person;
import static org.derive4j.example.Persons.getContact;
import static org.derive4j.example.Persons.modContact;

  public static void main(String[] args) {

    Person joe = Person("Joe", Contacts.byMail(Address(10, "Main St")));

    Function<Person, Person> incrementStreetNumber = modContact(
        modPostalAddress(
            modNumber(number -> number + 1)));
    
    // correctedJoe is a copy of joe with the street number incremented:
    Person correctedJoe = incrementStreetNumber.apply(joe);

    Optional<Integer> newStreetNumber = getPostalAddress(getContact(correctedJoe))
        .map(postalAddress -> getNumber(postalAddress));

    System.out.println(newStreetNumber); // print "Optional[11]" !!
  }

Popular use-case: domain specific languages

Algebraic data types are particularly well suited for creating DSLs. A calculator for arithmetic expressions could be built like this:

import java.util.function.Function;
import org.derive4j.Data;
import static org.derive4j.example.Expressions.*;

@Data
public abstract class Expression {

	interface Cases<R> {
		R Const(Integer value);
		R Add(Expression left, Expression right);
		R Mult(Expression left, Expression right);
		R Neg(Expression expr);
	}
	
	public abstract <R> R match(Cases<R> cases);

	private static Function<Expression, Integer> eval = Expressions
		.cases()
			.Const(value        -> value)
			.Add((left, right)  -> eval(left) + eval(right))
			.Mult((left, right) -> eval(left) * eval(right))
			.Neg(expr           -> -eval(expr));

	public static Integer eval(Expression expression) {
		return eval.apply(expression);
	}

	public static void main(String[] args) {
		Expression expr = Add(Const(1), Mult(Const(2), Mult(Const(3), Const(3))));
		System.out.println(eval(expr)); // (1+(2*(3*3))) = 19
	}
}

Catamorphisms

are generated for recursively defined data types, so you can rewrite the above eval method as:

	public static Integer eval(Expression expression) {
		return Expressions
		     .cata(
		        value -> value,
		        (left, right) -> left + right,
		        (left, right) -> left * right,
		        expr -> -expr,
		        Supplier::get
		     )
		     .apply(expression);
	}

The last parameter (Supplier::get above) specifies how recursive calls are suspended. Using Supplier::get means that the computation is not suspended: for deep structures it may blow the stack!

To be safe, use the lazy (or delay, suspend, defer...) constructor of your result type, such as the lazy constructor generated by Derive4J.
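Here is a sketch of what that looks like inside the Expression class from above, assuming the generated Expressions.lazy constructor is used as the suspension: recursive results are suspended lazily, so the traversal does not consume stack proportional to the depth of the expression:

  // rewrite an expression by negating every constant, with recursion suspended lazily:
  static final Function<Expression, Expression> negateConstants = Expressions.cata(
      value         -> Const(-value),
      (left, right) -> Add(left, right),
      (left, right) -> Mult(left, right),
      expr          -> Neg(expr),
      Expressions::lazy); // recursive calls are suspended via the generated lazy constructor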

If no such constructor is available then your safe option is to use a Trampoline, such as the one provided by FunctionalJava:

public static Integer stackSafeEval(Expression expression) {
    return Expressions.cata(
        value -> Trampoline.pure(value),
        (left, right) -> left.zipWith(right, (l, r) -> l + r),
        (left, right) -> left.zipWith(right, (l, r) -> l * r),
        expr -> expr.map(i -> -i),
        Trampoline::suspend
    ).f(expression).run();
}

Extensible algebraic data types

Algebraic data types defined as the fixed point (aka initial algebra) of an object algebra enjoy its extensibility properties.

When the data type is not inductive, the extensibility property comes directly from covariance.

E.g. an event type for an inventory service:

  @Data
  interface EventV1 {

    interface Cases<R> {
      R newItem(Long ref, String itemName);
      R itemRemoved(Long ref);
    }

    <R> R match(Cases<R> cases);
  }

Then comes a new version of the service, with enriched events and new cases. If the visitor for the new event type extends the old visitor interface, then old events can easily be converted to new events, without changes to the old classes:

  @Data
  interface EventV2 {

    interface Cases<R> extends EventV1.Cases<R> { // extends V1 with:

      // new `initialStock` field in `newItem` event:
      R newItem(Long ref, String itemName, int initialStock);
      // default to 0 for old events:
      @Override
      default R newItem(Long ref, String itemName) {
        return newItem(ref, itemName, 0);
      }
      // new event:
      R itemRenamed(Long ref, String newName);
    }

    <R> R match(Cases<R> cases);

    static EventV2 fromV1(EventV1 v1Event) {
      // Events are (polymorphic) functions!
      // And functions are contra-variant in type argument,
      // thus we can use method reference to convert from V1 to V2:
      return v1Event::match;
    }
  }

Extensible inductive data types via hylomorphisms

Aka solving the expression problem via object algebras used as visitors. For this, we need to slightly change the visitor of the above Expression so that a type variable (E) is used instead of the self-reference:

@Data
interface Exp {

  interface ExpAlg<E, R> {
    R Lit(int lit);
    R Add(E e1, E e2);
  }

  <R> R accept(ExpAlg<Exp, R> alg);
}

When data types are defined in such a way (as a fixed point of the algebra), Derive4J generates (by default) an instance of the visitor/algebra that can serve as a factory (aka anamorphism). Using this factory as an argument to a compatible catamorphism (thus creating a hylomorphism), we obtain a conversion function from one ADT to another.

E.g. we can create a new data type that adds a multiplication case to the above data type, and still maximally reuse the existing code without modification:

@Data
interface ExpMul {

  interface ExpMulAlg<E, R> extends Exp.ExpAlg<E, R> {
    R Mul(E e1, E e2);
  }

  <R> R accept(ExpMulAlg<ExpMul, R> alg);

  static Function<Exp, ExpMul> fromExp() {
    ExpMulAlg<ExpMul, ExpMul> factory = ExpMuls.factory();
    return Exps.cata(factory, ExpMuls::lazy);
  }
}

To ensure smooth extensibility across compilation units (or even during incremental compilation), it is best to use the -parameters option of javac.

But what exactly is generated?

This is a very legitimate question. Here is the ExpMuls.java file that is generated for the above @Data ExpMul type.

Parametric polymorphism

... works as expected. For example, you can write the following:

import java.util.function.Function;
import java.util.function.Supplier;
import org.derive4j.Data;

@Data
public abstract class Option<A> {

    public abstract <R> R cata(Supplier<R> none, Function<A, R> some);

    public final <B> Option<B> map(final Function<A, B> mapper) {
        return Options.modSome(mapper).apply(this);
    }
}

The generated modifier method modSome allows polymorphic update and is incidentally the functor for our Option!
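For instance (a usage sketch, assuming the generated constructor Options.some), modSome applied to a Function<A, B> yields a Function<Option<A>, Option<B>>:

  // map via the hand-written map method:
  Option<Integer> viaMap = Options.some("hello").map(String::length);
  // or directly via the generated polymorphic modifier:
  Function<Option<String>, Option<Integer>> mapLength = Options.modSome(String::length);
  Option<Integer> viaMod = mapLength.apply(Options.some("hello"));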

Generalized Algebraic Data Types

GADTs are also supported out of the box by Derive4J (within the limitations of the Java type system). Here is how you can translate the example from Fun with phantom types:

import org.derive4j.Data;
import org.derive4j.hkt.TypeEq;

import static java.lang.System.out;
import static org.derive4j.example.Terms.*;

@Data
public abstract class Term<T> {
  interface Cases<A, R> {
    R Zero(TypeEq<Integer, A> id);
    R Succ(Term<Integer> pred, TypeEq<Integer, A> id);
    R Pred(Term<Integer> succ, TypeEq<Integer, A> id);
    R IsZero(Term<Integer> a, TypeEq<Boolean, A> id);
    R If(Term<Boolean> cond, Term<A> then, Term<A> otherwise);
  }

  public abstract <X> X match(Cases<T, X> cases);

  public static <T> T eval(final Term<T> term) {

    return Terms.caseOf(term).
        Zero(id -> id.coerce(0)).
        Succ((t, id) -> id.coerce(eval(t) + 1)).
        Pred((t, id) -> id.coerce(eval(t) - 1)).
        IsZero((t, id) -> id.coerce(eval(t) == 0)).
        If((cond, then, otherwise) -> eval(cond)
            ? eval(then)
            : eval(otherwise));
  }

  public static void main(final String[] args) {

    Term<Integer> one = Succ(Zero());
    out.println(eval(one)); // "1"
    out.println(eval(IsZero(one))); // "false"
    // IsZero(IsZero(one)); // does not compile:
    // "The method IsZero(Term<Integer>) in the type Term<T> is not
    // applicable for the arguments (Term<Boolean>)"
    out.println(eval(If(IsZero(one), Zero(), one))); // "1"
    Term<Boolean> True = IsZero(Zero());
    Term<Boolean> False = IsZero(one);
    out.println(eval(If(True, True, False))); // "true"
    // out.println(prettyPrint(If(True, True, False), 0)); // "if IsZero(0)
    //  then IsZero(0)
    //  else IsZero(Succ(0))"
  }
}

For GADT you will need to add a dependency on derive4j/hkt which provides TypeEq<A, B>: a witness of the equality of two types, A and B.

DRY annotation configuration

By default the @Data annotation triggers the generation of everything that is available, in a class whose name is the English plural of the annotated class name. But you may want to restrict the scope of what is generated, or change the name of the generated class, and you usually want all your ADTs to use the same flavour. You may even dislike the name of the annotation because it clashes with another framework...

For example, let's say that you want to always use the FJ flavour (FunctionalJava), make the generated code package-private in a class suffixed by Impl, and only generate the pattern matching syntax and the constructors. Then all you have to do is create the following annotation:

@Data(flavour = Flavour.FJ, value = @Derive(
    inClass = "{ClassName}Impl",
    withVisibility = Visibility.Package,
    make = { Make.constructors, Make.caseOfMatching }
))
public @interface myADT {}

And you annotate your classes with @myADT instead of @Data, saving you that configuration every time.

But now, for some of your ADTs, you may also want to generate getters and functional setters. In order not to lose the benefits of your @myADT configuration, Derive4J allows you to do this:

@myADT
@Derive(make = { Make.getters, Make.modifiers }) // adds to the @myADT configuration
public abstract class Adt {...}

Use it in your project

Derive4J should be declared as a compile-time only dependency (not needed at runtime). So while derive4j is (L)GPL-licensed, the generated code is not linked to derive4j, and thus derive4j can be used in any project (proprietary or not).

Maven:

<dependency>
  <groupId>org.derive4j</groupId>
  <artifactId>derive4j</artifactId>
  <version>1.1.1</version>
  <optional>true</optional>
</dependency>

Gradle

compile(group: 'org.derive4j', name: 'derive4j', version: '1.1.1', ext: 'jar')

or better using the gradle-apt-plugin:

compileOnly "org.derive4j:derive4j-annotation:1.1.1"
apt "org.derive4j:derive4j:1.1.1"

Contributing

Bug reports and feature requests are welcome, as well as contributions to improve documentation.

Right now the codebase is not ready for external contribution (many blocks of code are more complicated than they should be). So you might be better off waiting for the resolution of #2 before trying to dig into the codebase.

Contact

[email protected], @jb9i or use the project GitHub issues.

Further reading

Thanks

This project has a special dedication to Tony Morris' blog post Debut with a catamorphism. I'm also very thankful to @sviperll and his adt4j project which was the initial inspiration for Derive4J.

Comments
  • Alternative lazy value implementations

    Additionally from the current synchronized-based implementation, two alternative implementations should be available and configurable via annotation:

    • one based on AtomicReference<Adt>
    • one based on AtomicReference<WeakReference<Adt>>

    which allow better throughput than the synchronized based implementation at the price of possible concurrent evaluation.

    wontfix 
    opened by jbgi 10
  • Adds io.vavr support, fixing #71

    Adds new Vavr flavour and adds the related mappings.

    I'm not overly familiar with Gradle so I haven't tested this against my Maven projects yet. Can one easily deploy the equivalent of a -SNAPSHOT into the local repo?

    opened by talios 8
  • Add support for JavaSlang 2.0 as a Flavour

    After playing with Derive4J for even just an evening, I think having support for JavaSlang from @danieldietrich would be a great addition once the 2.0 release is made.

    This would be a natural fit alongside the other flavours, reusing JavaSlang's Option and Function2. Adding generators for tryEmailAddress alongside getEmailAddress when using the JavaSlang flavour may also be a nice addition.

    enhancement 
    opened by talios 8
  • More descriptive name than `@Data`

    What a nice processor!

    I would ask you use a more descriptive name than @Data for annotation:

    1. "Data" is such a generic word, difficult to scan code with @Data and grok something interesting is happening.
    2. "Data" may clash with user code or other packages which also have a Data class, especially after imports.
    3. In particular, Lombok is probably the most popular 3rd-party annotation processor, and it, too, has a Data class, which does something rather different.
    4. This package is version 0.9.1 (as of when I wrote this), so you should have some freedom in refactoring before hitting the magic 1.0.0 and your public API is frozen (assuming semantic versioning).

    You have many fine alternative choices for naming the annotation. First to my mind is @ADT. A web search on "ADT" turns up near the top the right Wikipedia page to understand some background on this library's goals.

    Cheers, --binkley

    question 
    opened by binkley 7
  • Additional facilities for partial matching

    today there is only

    Function<ADT, R> otherwise(Supplier<R> expression)
    

    I think it should be renamed into

    Function<ADT, R> otherwiseEval(Supplier<R> expression)
    

    And then we can add:

    Function<ADT, R> otherwise(R value)
    Function<ADT, Option<R>> otherwiseNone()
    Function<ADT, Either<L, R>> <L> otherwiseLeft(L leftValue)
    Function<ADT, Validation<E, R>> <E> otherwiseFail(E error)
    

    What do others think about that feature? (Are the names good?)

    enhancement question 
    opened by jbgi 7
  • Export static package-private methods annotated with @Export as public in generated class

    This allows using exclusively the generated class for the whole static API. This should also address #41.

    Also, as bonus, all static no-args methods will be exported with an additional caching of the returned value. /cc @talios: WDYT?

    enhancement 
    opened by jbgi 6
  • Provide a means for supplying a validator

    I was thinking it would be handy to have some means of providing a validator over the ADT values, I often use this feature of Immutables and keep thinking it would be useful here.

    Currently, derive4j checks for nullness in its created instances, but it would be nice if we could define something like:

    protected|public void validate() {
      this.match(....)
    }
    

    or

    protected|public static Predicate<MyAdtType> validator = it -> ....;
    

    which, if it existed, would get called before returning the constructed value.

    Thoughts?

    opened by talios 6
  • DSL example passes compilation but can't run

    Hi, I tried the DSL Expression example; it compiles and packages fine with Maven.

    But when I try to run the example, it fails with:

    java -cp target/spl2es-0.0.1-SNAPSHOT.jar spl2es.Expression
    Exception in thread "main" java.lang.Error: Unresolved compilation problem:
    
    	at spl2es.Expression.main(Expression.java:39)
    
    

    Below is my Expression.java. I checked the output and can only find Expression.class and Expression$Cases.class. Shouldn't there also be a class file such as Expressions.class?

    import static java.util.Arrays.asList;
    import org.derive4j.*;
    import static spl2es.Expressions.*;
    import java.util.function.Function;
    import java.util.List;
    import javax.json.*;
    
    @Data
    public abstract class Expression {
    	interface Cases<R> {
    		R Leaf(String key,String value);
    		R EList(List<Expression> blist);
    		R ELeaf(String key, Expression value);
        }
        public abstract <R> R match(Cases<R> cases);
    	private static Function<Expression, JsonValue> eval = Expressions
        .cases()
            .Leaf((key,value)        -> Json.createObjectBuilder()
                .add(key,value).build()
            )
            .ELeaf((key, value)  -> Json.createObjectBuilder()
                .add(key,eval(value)).build()
            )
            .EList(blist -> {
                    JsonArrayBuilder job =Json.createArrayBuilder();
                    for (Expression e : blist) {
                        job.add(e);
                    }
                    return job.build();
                }
            );
        public static JsonValue eval(Expression expression) {
            return eval.apply(expression);
        }
        public static JsonValue test (){
            Expression expr = Leaf("2","3");
            eval(expr);
        }
        public static void main(String[] args) {
            System.out.print(test().toString());
        }            
    }
    
    opened by JulianZhang 5
  • Fixes #62

    Of course, there is still http://dilbert.com/strip/2001-10-25 , but anyway com.sun.tools.javac is definitely not thread safe so using parallel streams in annotation processors doesn't seem like a good idea...

    opened by gneuvill 5
  • Add support for a la carte derivation

    Last week I was experimenting with reworking some existing $work code (using jADT) to use derive4j and, whilst it worked well and the code translated fairly seamlessly, I started noticing weird parse/completion errors in IntelliJ.

    It seems IntelliJ by default stops parsing files if they are over 2500 lines long, and the derive4j-generated source file in question was something like 60,000 lines.

    In this particular instance, once an object has been created we have no desire to mutate it, so we don't really need any of the set/mod support functions to be generated (also saving a lot of lambdas being compiled).

    A simple solution (other than increasing the rather hidden IntelliJ completion setting) would be a new Flavour.None for derive4j that prevents the generation of get/set/mod methods, leaving you only with the constructors.

    Would something like this be useful to anyone else? Or would a more generic (disable set, disable mod, disable get) generation control across flavours be more useful?

    enhancement 
    opened by talios 5
  • Add support for varargs in datatype definitions

    Would it be possible to add support for varargs:

    interface Cases<R> {
      R develop(Job job, Job... dependencies);
      R review(Job job, Job... dependencies);
    }
    

    generates constructors like:

    public static JobType develop(Job job, Job[] dependencies) {
    ...
    

    It would be handy if the constructor method matched the vararg definition.

    enhancement bug 
    opened by talios 5
  • workaround ecj javapoet issue

    In ecj, $L combined with an ExecutableElement representing Supplier#get will insert public abstract T get() instead of the intended get().

    This could also be a bug with javapoet. I will raise an issue with them.

    opened by rzpt 1
  • StackOverflow in derivingConfig when @Data annotation is used

    Issue

    Code that overflows here

      private Stream<Function<DeriveConfig, DeriveConfig>> deriveConfigs(TypeElement typeElement, Element element,
          HashSet<AnnotationMirror> seenAnnotations) {
        return element.getAnnotationMirrors().stream().sequential().filter(a -> !seenAnnotations.contains(a)).flatMap(a -> {
          seenAnnotations.add(a);
          return concat(deriveConfigs(typeElement, a.getAnnotationType().asElement(), seenAnnotations),
              annotationConfig(typeElement, a));
        });
      }
    

    In the code above you can see a HashSet is used; however, AnnotationMirror does not declare a hashCode method, so implementations are not guaranteed to provide stable, value-based hash codes.

    Proof: (screenshot)

    • You can see that the seenAnnotations set contains many duplicate values (highlighted in yellow)
    • and; that .hashCode() call on the Retention policy returns a random hashCode every time.

    How I found it

    Looking at language server logs. (screenshot)

    Impact

    I'm not too familiar with derive4j, take below with a grain of salt:

    • This issue does not affect code gen.
    • I believe the issue leads to misconfigured classpaths
    opened by etherandrius 1
  • NoSuchElementException after upgrading from 0.10.2 to 1.1.1

    After upgrading my project https://github.com/highj/ from 0.10.2 to 1.1.1, I got several errors like this:

    [ERROR] Derive4J: unable to process org.highj.data.transformer.FreeArrow due to No value present java.util.NoSuchElementException: No value present at java.util.Optional.get(Optional.java:135) at org.derive4j.processor.MapperDerivator.lambda$mapperTypeName$8(MapperDerivator.java:141) at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267) at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382) at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499) at org.derive4j.processor.MapperDerivator.mapperTypeName(MapperDerivator.java:142) at org.derive4j.processor.MapperDerivator.mapperTypeName(MapperDerivator.java:124) at org.derive4j.processor.MapperDerivator.mapperTypeName(MapperDerivator.java:95) at org.derive4j.processor.MapperDerivator.lambda$createVisitorFactoryAndMappers$18(MapperDerivator.java:247) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382) at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499) at org.derive4j.processor.MapperDerivator.createVisitorFactoryAndMappers(MapperDerivator.java:248) at org.derive4j.processor.MapperDerivator.lambda$derive$0(MapperDerivator.java:77) at org.derive4j.processor.api.model.MultipleConstructorsSupport$LambdaCases.visitorDispatch(MultipleConstructorsSupport.java:118) at org.derive4j.processor.api.model.MultipleConstructorsSupport$VisitorDispatch.match(MultipleConstructorsSupport.java:143) at org.derive4j.processor.api.model.MultipleConstructorsSupport$CasesMatchers$PartialMatcher.lambda$otherwise$2(MultipleConstructorsSupport.java:257) at org.derive4j.processor.api.model.DataConstructions$LambdaCases.multipleConstructors(DataConstructions.java:107) at org.derive4j.processor.api.model.DataConstructions$MultipleConstructors_.match(DataConstructions.java:130) at org.derive4j.processor.api.model.DataConstructions$CaseOfMatchers$PartialMatcher.otherwise(DataConstructions.java:424) at org.derive4j.processor.MapperDerivator.derive(MapperDerivator.java:80) at org.derive4j.processor.BuiltinDerivator.lambda$derivator$0(BuiltinDerivator.java:54) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.Iterator.forEachRemaining(Iterator.java:116) at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801) at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) at java.util.stream.StreamSpliterators$WrappingSpliterator.forEachRemaining(StreamSpliterators.java:312) at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:743) at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499) at org.derive4j.processor.BuiltinDerivator.lambda$derivator$2(BuiltinDerivator.java:55) at org.derive4j.processor.DerivingProcessor.lambda$derivation$7(DerivingProcessor.java:160) at org.derive4j.processor.api.DeriveResults$Result.match(DeriveResults.java:84) at org.derive4j.processor.api.DeriveResult.bind(DeriveResult.java:50) at org.derive4j.processor.DerivingProcessor.derivation(DerivingProcessor.java:160) at org.derive4j.processor.DerivingProcessor.lambda$process$4(DerivingProcessor.java:139) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.stream.Streams$StreamBuilderImpl.forEachRemaining(Streams.java:419) at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580) at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:270) at java.util.stream.Streams$StreamBuilderImpl.forEachRemaining(Streams.java:419) at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742) at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580) at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:270) at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175) at java.util.Iterator.forEachRemaining(Iterator.java:116) at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801) at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) at java.util.stream.StreamSpliterators$WrappingSpliterator.forEachRemaining(StreamSpliterators.java:312) at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:743) at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) at org.derive4j.processor.DerivingProcessor.process(DerivingProcessor.java:143) at com.sun.tools.javac.processing.JavacProcessingEnvironment.callProcessor(JavacProcessingEnvironment.java:794) at com.sun.tools.javac.processing.JavacProcessingEnvironment.discoverAndRunProcs(JavacProcessingEnvironment.java:705) at com.sun.tools.javac.processing.JavacProcessingEnvironment.access$1800(JavacProcessingEnvironment.java:91) at com.sun.tools.javac.processing.JavacProcessingEnvironment$Round.run(JavacProcessingEnvironment.java:1035) at com.sun.tools.javac.processing.JavacProcessingEnvironment.doProcessing(JavacProcessingEnvironment.java:1176) at com.sun.tools.javac.main.JavaCompiler.processAnnotations(JavaCompiler.java:1170) at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:856) at com.sun.tools.javac.main.Main.compile(Main.java:523) at com.sun.tools.javac.api.JavacTaskImpl.doCall(JavacTaskImpl.java:129) at com.sun.tools.javac.api.JavacTaskImpl.call(JavacTaskImpl.java:138) at 
org.codehaus.plexus.compiler.javac.JavaxToolsCompiler.compileInProcess(JavaxToolsCompiler.java:126) at org.codehaus.plexus.compiler.javac.JavacCompiler.performCompile(JavacCompiler.java:174) at org.apache.maven.plugin.compiler.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:1134) at org.apache.maven.plugin.compiler.CompilerMojo.execute(CompilerMojo.java:187) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80) at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:347) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:154) at org.apache.maven.cli.MavenCli.execute(MavenCli.java:582) at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214) at org.apache.maven.cli.MavenCli.main(MavenCli.java:158) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289) at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415) at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)

    opened by DanielGronau 0
  • More compact code for equals methods

    Consider a class with a few cases, like this:

    @Data
     public abstract class C1
    {
      public interface Cases <R>
      {
        R m1 (Integer p1)
        ;
        R m2 (Integer p1, String p2)
        ;
        R m3 (Integer p1, String p2, Object p3)
        ;
      }
      public abstract <R> R match (Cases<R> cases)
      ;
      @Override
      public abstract String toString ()
      ;
      @Override
      public abstract boolean equals (Object other)
      ;
      @Override
      public abstract int hashCode ()
      ;
    }
    

    This causes derive4j to generate an equals method in every subclass M1, M2, M3. Each one looks like this:

    @Override
        public boolean equals(Object other) {
          return (other instanceof C1) && ((C1) other).match(C1s.cases((p1) -> this.p1.equals(p1),
              (p1, p2) -> false,
              (p1, p2, p3) -> false));
        }
    

    More generally, for N cases, there will be N equals methods, each with N lines most of which are just mapping to false. When N gets moderately large, this generates a lot of code. For example, with N = 100, we get 10k lines of code just for these simple equals methods.

    How about generating more compact code, something like this for each equals:

    @Override
        public boolean equals(Object other) {
          return (other instanceof M1) && p1.equals(((M1) other).p1);
        }
    

    That would cause the number of lines of code to scale linearly, rather than quadratically, since each equals just implements equals on its attributes.

    opened by AndreasVoellmy 3
  • Usage in mixed scala/java codebase

    Hi! I would like to use derive4j in a mixed Scala/Java codebase, but I get a compile error when I use Foos (which is meant to be generated by derive4j) in a Scala file. Is there any other way to generate the classes other than calling the compile task using Maven?

    opened by theqp 0