jsoup: the Java HTML parser, built for HTML editing, cleaning, scraping, and XSS safety.

Overview

jsoup: Java HTML Parser

jsoup is a Java library for working with real-world HTML. It provides a very convenient API for fetching URLs and extracting and manipulating data, using the best of HTML5 DOM methods and CSS selectors.

jsoup implements the WHATWG HTML5 specification, and parses HTML to the same DOM as modern browsers do.

  • scrape and parse HTML from a URL, file, or string
  • find and extract data, using DOM traversal or CSS selectors
  • manipulate the HTML elements, attributes, and text
  • clean user-submitted content against a safe-list, to prevent XSS attacks
  • output tidy HTML

jsoup is designed to deal with all varieties of HTML found in the wild; from pristine and validating, to invalid tag-soup; jsoup will create a sensible parse tree.
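For example, cleaning untrusted input is a one-liner. A minimal sketch, using the Safelist class from current releases (named Whitelist before 1.14.1):

String unsafe = "<p><a href='http://example.com/' onclick='stealCookies()'>Link</a></p>";
String safe = Jsoup.clean(unsafe, Safelist.basic());
// safe: <p><a href="http://example.com/" rel="nofollow">Link</a></p>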

See jsoup.org for downloads and the full API documentation.

Example

Fetch the Wikipedia homepage, parse it to a DOM, and select the headlines from the In the News section into a list of Elements:

Document doc = Jsoup.connect("https://en.wikipedia.org/").get();
log(doc.title());
Elements newsHeadlines = doc.select("#mp-itn b a");
for (Element headline : newsHeadlines) {
  log("%s\n\t%s", 
    headline.attr("title"), headline.absUrl("href"));
}

Online sample, full source.

Open source

jsoup is an open source project distributed under the liberal MIT license. The source code is available at GitHub.

Getting started

  1. Download the latest jsoup jar (or add it to your Maven/Gradle build)
  2. Read the cookbook
  3. Enjoy!

Development and support

If you have any questions on how to use jsoup, or have ideas for future development, please get in touch via the mailing list.

If you find any issues, please file a bug after checking for duplicates.

The colophon talks about the history of and tools used to build jsoup.

Status

jsoup is in general, stable release.

Comments
  • OSGi import of javax.annotation and javax.annotation.meta is broken in 1.14.2

    In jsoup version 1.14.2, the OSGi import of the package javax.annotation is imported with a version >= 3.0 and < 4.0.

    This makes the jsoup 1.14.2 bundle fail to load on apache karaf which provides version 1.3.0 of the package (from the apache felix runtime).

    Possible fixes:

    1. Check if the import is actually used at runtime, and remove the import of javax.annotation if it isn't actually needed (earlier versions of jsoup do not have this import)
    2. Remove the versioning of the import (the actual content of the javax.annotation package has AFAIK not changed since, like, forever)
    3. Expand the version range on the javax.annotation import, from [3.0, 4) to [1.0, 4)

    Not sure where the 3.0 version of the import comes from? I have googled, and think it maybe comes from this 2011-vintage org.glassfish rebundling of javax.annotation: https://mvnrepository.com/artifact/org.glassfish/javax.annotation

    The javax.annotation.meta package will probably also have to be handled in the same way? From the MANIFEST.MF of jsoup 1.14.2:

    Import-Package: javax.annotation;version="[3.0,4)",javax.annotation.meta
     ;version="[3.0,4)",javax.net.ssl,javax.xml.parsers,javax.xml.transform,
     javax.xml.transform.dom,javax.xml.transform.stream,org.jsoup;version="[
     1.14,2)",org.jsoup.helper;version="[1.14,2)",org.jsoup.internal;version
     ="[1.14,2)",org.jsoup.nodes;version="[1.14,2)",org.jsoup.parser;version
     ="[1.14,2)",org.jsoup.safety;version="[1.14,2)",org.jsoup.select;versio
     n="[1.14,2)",org.w3c.dom
    
    fixed 
    opened by steinarb 22
  • XML Attribute Names are converted to lower case, where they should stay as in the original input

    Hello,

    In my current project I stumbled on this curious behaviour:

    We use the following version of JSoup:

    <dependency>
        <groupId>org.jsoup</groupId>
        <artifactId>jsoup</artifactId>
        <version>1.7.2</version>
    </dependency>

    @Test
    public void testJsoupAttributeNameCamelCase() {
        String xml = "<?xml version='1.0' encoding='UTF-8' standalone='no'?>"
                + "<rootNode attributeName='someValue' />";
        Document doc = Jsoup.parse(xml, "", Parser.xmlParser());
        Element root = doc.getElementsByTag("rootNode").get(0);
        Attribute first = root.attributes().asList().get(0);

        assertTrue("someValue".equals(first.getValue())); // pass
        assertTrue("attributeName".equals(first.getKey())); // fail
    }
    

    Please have a look at this issue.

    Thank you and kind regards Edgar
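    A possible workaround, as a minimal sketch against later jsoup releases where org.jsoup.parser.ParseSettings exists (the 1.7.2 API differs):

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.parser.ParseSettings;
    import org.jsoup.parser.Parser;

    // Configure the XML parser to preserve tag and attribute case:
    Parser parser = Parser.xmlParser().settings(ParseSettings.preserveCase);
    Document doc = Jsoup.parse("<rootNode attributeName='someValue' />", "", parser);
    System.out.println(doc.getElementsByTag("rootNode").get(0).attributes()); // attributeName="someValue"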

    opened by edgar-philipp 21
  • How to allow character '&' without converting to '&amp;'

    I have data from an HTTP POST like this:

    "billingCode=820131114000031&custName=Josh & Mark&numAccount=25800&parameterId=2&traceId=820131114000031"
    

    If I use jsoup:

    String postDataori = Jsoup.clean("billingCode=820131114000031&custName=Josh & Mark&numAccount=25800&parameterId=2&traceId=820131114000031", Whitelist.basic());
    System.out.println("PostData  : " + postDataori);
    

    and the output looks like this:

    PostData  : billingCode=820131114000031&amp;custName=Josh &amp; Mark&amp;numAccount=25800&amp;parameterId=2&amp;traceId=820131114000031
    

    My question is: how can I allow the character '&' without it being converted to &amp;?
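    One possible answer: Jsoup.clean emits HTML, and in HTML output a bare '&' must be escaped as &amp;. If plain text is wanted afterwards, the entities can be unescaped again. A minimal sketch, assuming a release that provides Parser.unescapeEntities:

    import org.jsoup.Jsoup;
    import org.jsoup.parser.Parser;
    import org.jsoup.safety.Whitelist;

    String clean = Jsoup.clean("custName=Josh & Mark", Whitelist.basic()); // "custName=Josh &amp; Mark"
    String plain = Parser.unescapeEntities(clean, false);                  // "custName=Josh & Mark"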

    opened by rifkiharahap 20
  • Jsoup.parse() doesn't seem to load whole HTML content

    I saved stackoverflow.com into a file, input.html, and load it like this:

    File input = new File("input.html");
    Document doc = Jsoup.parse(input, "UTF-8");
    // first query
    Elements resultsA = doc.select("h3 > a");
    // second query
    Elements resultsB = doc.select("div.nav li a");

    "resultsA" has no elements while "resultsB" contains 6 found elements. Puzzled by this, I extracted the HTML content from the "doc" variable; it contains just part of the HTML: the part where the "resultsB" content can be found, but not the content for "resultsA".

    I've tried parsing several URLs (even google.com), and they all behave the same way: "Jsoup.parse()" doesn't return the whole HTML content.

    opened by xjaphx 20
  • oss-fuzz integration of jsoup

    Hi all,

    I prepared the integration (https://github.com/CodeIntelligenceTesting/oss-fuzz/commit/e03329f4b8fde5b361cc68c087fc8290c4631f03) of jsoup into Google oss-fuzz. This will enable continuous fuzzing of this project, conducted by Google; bugs found by fuzzing will be reported to you.

    The integration only makes sense if someone will deal with the bug reports submitted by oss-fuzz. Are you interested in the integration? If so, I would submit a pull request and provide your contact information for bug reports. I would use [email protected] or any other email that you would prefer.

    Jazzer is used for fuzzing Java applications. Jazzer is a coverage-guided, in-process fuzzer for the JVM platform developed by Code Intelligence. It is based on libFuzzer and brings many of its instrumentation-powered mutation features to the JVM. Jazzer has already found a lot of critical bugs in JVM applications.
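    For context, a Jazzer fuzz target is just a static entry point. A minimal illustrative sketch (the class name is hypothetical):

    import com.code_intelligence.jazzer.api.FuzzedDataProvider;
    import org.jsoup.Jsoup;

    public class JsoupHtmlFuzzer {
        // Jazzer calls this repeatedly with mutated inputs;
        // uncaught exceptions and crashes are reported as findings.
        public static void fuzzerTestOneInput(FuzzedDataProvider data) {
            Jsoup.parse(data.consumeRemainingAsString());
        }
    }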

    If there are any questions regarding fuzzing or the oss-fuzz integration, I would be glad to help.

    opened by 0roman 18
  • Websites with large amounts of data fail to parse.

    I'm currently using jsoup on some large websites, and it throws a "Mark invalid" exception, which means the bufref is negative?

    I tried using both Jsoup.connect(url).get() and Jsoup.connect(url).execute().parse(). Both cause the same exception.

    	at org.jsoup.parser.CharacterReader.rewindToMark(CharacterReader.java:132)
    	at org.jsoup.parser.Tokeniser.consumeCharacterReference(Tokeniser.java:182)
    	at org.jsoup.parser.TokeniserState.readCharRef(TokeniserState.java:1698)
    	at org.jsoup.parser.TokeniserState.access$100(TokeniserState.java:8)
    	at org.jsoup.parser.TokeniserState$2.read(TokeniserState.java:36)
    	at org.jsoup.parser.Tokeniser.read(Tokeniser.java:57)
    	at org.jsoup.parser.TreeBuilder.runParser(TreeBuilder.java:55)
    	at org.jsoup.parser.TreeBuilder.parse(TreeBuilder.java:47)
    	at org.jsoup.parser.Parser.parseInput(Parser.java:35)
    	at org.jsoup.helper.DataUtil.parseInputStream(DataUtil.java:169)
    	at org.jsoup.helper.HttpConnection$Response.parse(HttpConnection.java:835)
    	at org.jsoup.helper.HttpConnection.get(HttpConnection.java:285)
    
    If anybody would like to reproduce, here are some URLs which it fails to parse:
    https://www.spec.org/cpu2006/results/res2014q4/
    https://www.spec.org/cpu2006/results/res2012q3/
    https://www.spec.org/cpu2006/results/res2014q1/
    https://www.spec.org/cpu2006/results/res2014q3/
    https://www.spec.org/cpu2006/results/res2011q2/
    https://www.spec.org/cpu2006/results/res2010q3/
    https://www.spec.org/cpu2006/results/res2017q2/
    https://www.spec.org/cpu2006/results/res2016q3/
    https://www.spec.org/cpu2006/results/res2015q4/
    https://www.spec.org/cpu2006/results/res2007q4/
    https://www.spec.org/cpu2006/results/res2009q4/
    https://www.spec.org/cpu2006/results/res2012q2/
    https://www.spec.org/cpu2006/results/res2014q2/
    https://www.spec.org/cpu2006/results/res2012q4/
    https://www.spec.org/cpu2006/results/res2011q1/
    
    
    Thanks
    bug 
    opened by Derek-Baum 18
  • Update meta charset

    This pull request addresses the TODO comment regarding updating the document's meta charset tag.

    If the charset is changed, the charset meta tag of the document is selected and updated (if it exists).

    An improvement could be an overloaded charset() method, where the caller can choose whether the meta tag is updated too.

    A test is included.
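    A short usage sketch, assuming a release where Document#charset(Charset) is available:

    import java.nio.charset.StandardCharsets;
    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;

    Document doc = Jsoup.parse("<html><head><meta charset=\"ISO-8859-1\"></head><body>Hi</body></html>");
    doc.charset(StandardCharsets.UTF_8); // updates the OutputSettings and the <meta charset> element
    System.out.println(doc.outerHtml()); // now declares charset=UTF-8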

    opened by offa 18
  • CDATA fields are lost after calling Jsoup.parse

    First, congratulations on your great library; it's awesome! However, we're having an issue that got us into serious trouble. I'll explain our scenario:

    We're running a Search/Replace mechanism on many pages of a CMS. The content of the pages is XHTML. The basic scheme we're doing for each page is

    String xhtml = page.getBody();
    Document document = Jsoup.parse(xhtml, "", Parser.xmlParser());
    // remove some content from document ...
    page.setBody(document.text());
    

    The big problem is that there is a lot of content that looks like

    <some-node><![CDATA[some.string.content=content]]></some-node>
    

    As soon as we call Jsoup.parse, the CDATA tag is gone and text() will produce this

    <some-node>some.string.content=content></some-node>
    

    What we have afterwards is a page with corrupt content.

    We'd be very glad about some help, since we really enjoy using Jsoup otherwise!
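    One possible cause: document.text() returns only the text of the document and discards all markup, so it is probably not the right serializer here; document.outerHtml() keeps the structure, and recent jsoup releases preserve CDATA sections when parsing with the XML parser. A minimal sketch:

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.parser.Parser;

    String xhtml = "<some-node><![CDATA[some.string.content=content]]></some-node>";
    Document doc = Jsoup.parse(xhtml, "", Parser.xmlParser());
    System.out.println(doc.outerHtml()); // recent releases retain the CDATA section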

    opened by ataraxie 16
  • Mark invalid in 1.12.2, not in 1.12.1

    Document docTime = Jsoup.parse(s)

     W/System.err: org.jsoup.UncheckedIOException: java.io.IOException: Mark invalid
     W/System.err:     at org.jsoup.parser.CharacterReader.rewindToMark(CharacterReader.java:148)
     W/System.err:     at org.jsoup.parser.Tokeniser.consumeCharacterReference(Tokeniser.java:192)
     W/System.err:     at org.jsoup.parser.TokeniserState$38.read(TokeniserState.java:759)
     W/System.err:     at org.jsoup.parser.Tokeniser.read(Tokeniser.java:59)
     W/System.err:     at org.jsoup.parser.TreeBuilder.runParser(TreeBuilder.java:55)
     W/System.err:     at org.jsoup.parser.TreeBuilder.parse(TreeBuilder.java:47)
     W/System.err:     at org.jsoup.parser.Parser.parse(Parser.java:107)
     W/System.err:     at org.jsoup.Jsoup.parse(Jsoup.java:58)
    
    duplicate fixed 
    opened by yuhldr 14
  • Bug in Evaluator.AttributeWithValue

    final String turkishCapital_I = "İ";
    final String html = "<a title=" + turkishCapital_I + " />";
    final String selector = "a[title=" + turkishCapital_I + "]";
    final Elements elements = Jsoup.parse(html).select(selector);
    System.out.println("elements=" + elements.size());
    

    It should print "elements=1" but prints "elements=0".

    The bug is in Evaluator.AttributeWithValue (https://github.com/jhy/jsoup/blob/master/src/main/java/org/jsoup/select/Evaluator.java). The constructor stores the "value" in lower case, and AttributeWithValue.matches compares it using equalsIgnoreCase. This fails for some characters unless the proper locale is used in toLowerCase. For example:

    final String i = "İ";
    System.out.println(i.equalsIgnoreCase(i.toLowerCase())); // false
    System.out.println(i.equalsIgnoreCase(i.toLowerCase(Locale.forLanguageTag("tr-TR")))); // true
    

    Since the intent is a case-insensitive comparison, a fix might be to not use equalsIgnoreCase in AttributeWithValue.matches, but equals together with toLowerCase. That is:

    value.equals(element.attr(key).toLowerCase()); // value is lower case already
    

    AttributeWithValueNot may suffer from the same issue.
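    In any case, applying the identical lower-casing, with one fixed locale, to both sides keeps the comparison consistent regardless of the default locale. A minimal sketch of the idea:

    import java.util.Locale;

    String selectorValue = "İ".toLowerCase(Locale.ENGLISH); // stored once, lower-cased
    String attrValue = "İ";                                 // value seen at match time
    // Lower-case the attribute with the same fixed locale before comparing:
    System.out.println(selectorValue.equals(attrValue.toLowerCase(Locale.ENGLISH))); // true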

    For some reason, it works on http://try.jsoup.org. Why it works there might be a separate issue.

    no-repro needs-more-info 
    opened by savatgithub 14
  • [alignment] Fix alignment issues so repeated processing works properly

    Context: formatter-maven-plugin formats HTML source code using jsoup, with mixed results.

    During the first pass of formatting a file with pretty printing, if tags such as <div> with content on the second line show up, jsoup currently forces a +1 in the padding, which is incorrect. If the div is inline, jsoup will correctly align it with line breaks on that first pass. A second pass over the now-formatted file results in the same behaviour, with another +1. This seems to affect many elements, based on the file we have here.

    I tried various ways to address this, and don't know jsoup well enough to know the right way. What seemed to work best, without messing anything else up, was to use the normalization on the string already present in jsoup when pretty printing, and, if that ends up with a space before the content as it does in this case (one at the start and one at the end), to pad with one less character. Under testing this worked on the first round, and every subsequent pretty print of the same file came out the same way.

    I even dropped the file above into your website. It behaves the same way there, so this seems right, but it may not be the most appropriate way to resolve it. Please take a look and let me know what you think.

    One test seemed inherently wrong here too: it had a non-breaking space but was being prefixed with an extra space. To me that means it got two when it didn't need the extra one. While the look and feel is better with that space, I wasn't quite sure how to improve that, and technically one space is all that is needed, so I adapted that test to match the change.

    opened by hazendaz 13
  • Trying to parse data created by js

    Hey, I need to get the URLs of images created by JavaScript on a page, but the document's HTML contains only "&quot;". Is there a way to get them with jsoup?

    opened by Kartofanych 0
  • jsoup allows all tags to self-close, while browsers do not

    JSoup appears to generate an incorrect DOM in response to the (erroneous) input

    <body>
      <video autoplay="" id="remote" width="240"/>
      <script>
        ...
      </script>
    </body>
    

    (The "/" in the video tag should be ignored; it should be treated as a start tag to be closed later.)

    Chrome and Firefox, validator.nu and AngleSharp all put the script element as a child of the video element. JSoup leaves video and script as siblings.
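    A minimal repro sketch (the script body is illustrative):

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;

    Document doc = Jsoup.parse(
        "<body><video autoplay=\"\" id=\"remote\" width=\"240\"/><script>var x;</script></body>");
    // Per the browsers above, <script> should be a child of <video>;
    // printing the body shows jsoup keeps them as siblings:
    System.out.println(doc.body().html());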

    opened by michaelhkay 3
  • jsoup 1.15 performance problems with :has(), :matches(), ...

    Hi.

    I am using jsoup intensively in my app. I tried to update from jsoup 1.12.2 to jsoup 1.15.3 and saw a huge performance degradation. After doing some tests, I see that there is a problem with pseudo selectors like :has(), :matches(), and maybe others.

    Document doc = Jsoup.connect("https://www.amazon.com/").get();
    String title = doc.title();
    System.out.println("Title: " + title);
    
    // warm up
    for(int i=0; i<1000; ++i) doc.select("*");
    
    long s = System.currentTimeMillis();
    for(int i=0; i<1000; ++i) doc.select("NoSuchElement");
    System.out.println("Test #1: " + (System.currentTimeMillis() - s) + "ms");
    
    s = System.currentTimeMillis();
    for(int i=0; i<1000; ++i) doc.select("NoSuchElement:has(> div)");
    System.out.println("Test #2: " + (System.currentTimeMillis() - s) + "ms");
    
    s = System.currentTimeMillis();
    for(int i=0; i<1000; ++i) doc.select("NoSuchElement:has(> div:has(> a))");
    System.out.println("Test #3: " + (System.currentTimeMillis() - s) + "ms");
    
    s = System.currentTimeMillis();
    for(int i=0; i<1000; ++i) doc.select("NoSuchElement:has(> div > a)");
    System.out.println("Test #4: " + (System.currentTimeMillis() - s) + "ms");
    
    s = System.currentTimeMillis(); // reset the timer before Test #5
    for(int i=0; i<1000; ++i) doc.select("NoSuchElement:matches(^abc$)");
    System.out.println("Test #5: " + (System.currentTimeMillis() - s) + "ms");
    

    Jsoup 1.12.2 result:

    Title: Amazon.com. Spend less. Smile more.
    Test #1: 32ms
    Test #2: 37ms
    Test #3: 36ms
    Test #4: 39ms
    Test #5: 74ms

    Here you can see, that:

    1. :has() pseudo selector works as expected = no extra time consumed
    2. :matches() pseudo selector consumes some extra time = unexpected behaviour

    Jsoup 1.15.3 result:

    Title: Amazon.com. Spend less. Smile more.
    Test #1: 14ms
    Test #2: 30ms
    Test #3: 85ms
    Test #4: 20ms
    Test #5: 174ms

    Here you can see, that

    1. :has() pseudo selector consumes extra time, especially when it's nested
    2. :has(> div > a) works much faster than :has(> div:has(> a)) = unexpected behaviour
    3. :matches() pseudo selector consumes extra time, and that time is significant (2.5x slower than in jsoup 1.12.2)

    Summary:

    1. [jsoup 1.15.3] there are performance problems when using the :has() and :matches() pseudo selectors when the context element is absent
    2. [jsoup 1.12.2] there are no problems with the :has() pseudo selector, but there are problems with the :matches() pseudo selector
    3. [jsoup 1.15.3] the :has(> div > a) and :has(div a) pseudo selectors work much faster than the :has(> div:has(> a)) pseudo selector in both situations - when the context element is absent and when it's not
    4. [jsoup 1.12.2] same problem, but only when the context element is present
    5. [jsoup 1.15.3] in general, the :has() and :matches() pseudo selectors work slower than in jsoup 1.12.2 when the context element is present
    6. I showed some simple selectors, but my app uses hundreds of them, and a lot of them are complex, so the real performance degradation is 3x
    needs-more-info 
    opened by ogolovanov 1
  • IndexOutOfBoundsException for embedded PHP in HTML parsed using XML parser

    Hi there,

    We've been using an XML parser to parse HTML, since we don't want the HTML to be fixed-up during the process. This has been working great, except when there's embedded PHP in the HTML document. It appears to be treated as an XML declaration and parsed as an Element, so as to extract attributes.

    This throws an "Index 0 out of bounds for length 0" exception.

    Here is some HTML that would reproduce the issue:

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <title>PHP Test</title>
    </head>
    <body>
        <p>
            <a href="mailto:[email protected]">
                <? Html::encode($email); ?>
            </a>
        </p>
    </body>
    </html>
    
    

    If you try and parse it using the Parser.xmlParser(), it will throw an IndexOutOfBoundsException.

    The issue is in org.jsoup.parser.XmlTreeBuilder#insert(org.jsoup.parser.Token.Comment):

        void insert(Token.Comment commentToken) {
            Comment comment = new Comment(commentToken.getData());
            Node insert = comment;
            if (commentToken.bogus) { // xml declarations are emitted as bogus comments (which is right for html, but not xml)
                // so we do a bit of a hack and parse the data as an element to pull the attributes out
                String data = comment.getData();
                if (data.length() > 1 && (data.startsWith("!") || data.startsWith("?"))) {
                    Document doc = Jsoup.parse("<" + data.substring(1, data.length() -1) + ">", baseUri, Parser.xmlParser());
                    if (doc.childNodeSize() > 0) {
                        Element el = doc.child(0);
                        insert = new XmlDeclaration(settings.normalizeTag(el.tagName()), data.startsWith("!"));
                        insert.attributes().addAll(el.attributes());
                    } // else, we couldn't parse it as a decl, so leave as a comment
                }
            }
            insertNode(insert);
        }
    

    Parsing the PHP as a "doc" and trying to call doc.child(0) is what throws the exception.
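    A minimal repro sketch of the report:

    import org.jsoup.Jsoup;
    import org.jsoup.parser.Parser;

    String html = "<p><? Html::encode($email); ?></p>";
    // Throws IndexOutOfBoundsException in affected versions:
    Jsoup.parse(html, "", Parser.xmlParser());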

    needs-more-info 
    opened by richardmorleysmith 4
  • Best way to minimize a html document with given condition

    Hey, I just tried to accomplish the following task: I have an HTML document, and I want to cut it so that it is split after a given amount of visible text. E.g. I only want to have the HTML containing the first 1024 chars. At first I tried using the filter method on the Elements, but it's not filtering as I expected. I thought it would, in the end, keep only the elements in the traversal that are not removed by FilterResult.REMOVE.

    My current kotlin code looks like this:

    val allowedChars = 1024
    var currentChars = 0
    val allElements = htmlPage.body().allElements
    
    val filteredElems = allElements.filter { node, i ->
        if (currentChars > allowedChars) {
            FilterResult.REMOVE
        } else if (node is TextNode) {
            currentChars += node.text().length
            FilterResult.CONTINUE
        } else {
            FilterResult.CONTINUE
        }
    }
    

    Is my task easily possible with your library and the given options? Or does the filtering not work the way I hoped in this case? I'm using the current version, 1.15.3.

    EDIT: I found out that if I keep the last element which returned CONTINUE and then walk all parents upwards, I get the minimized version. But somehow it's not returned in the right way from the filter method.
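    For comparison, a minimal Java sketch of the same truncation using NodeTraversor with a NodeFilter (the limit and sample HTML are illustrative; note that returning REMOVE from head() drops the node together with its children):

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Node;
    import org.jsoup.nodes.TextNode;
    import org.jsoup.select.NodeFilter;
    import org.jsoup.select.NodeTraversor;

    public class TruncateHtml {
        public static void main(String[] args) {
            Document doc = Jsoup.parse("<div><p>Hello</p><p>world, and more text follows</p></div>");
            final int allowedChars = 10; // illustrative limit
            final int[] seenChars = {0};
            NodeTraversor.filter(new NodeFilter() {
                @Override public FilterResult head(Node node, int depth) {
                    if (seenChars[0] > allowedChars)
                        return FilterResult.REMOVE; // drop this node and its children
                    if (node instanceof TextNode)
                        seenChars[0] += ((TextNode) node).text().length();
                    return FilterResult.CONTINUE;
                }
                @Override public FilterResult tail(Node node, int depth) {
                    return FilterResult.CONTINUE;
                }
            }, doc.body());
            System.out.println(doc.body().html());
        }
    }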

    needs-more-info 
    opened by Richie94 3