A High Performance Network (TCP/IP) Library

Overview

Chronicle-Network


About

A High Performance Network library

Purpose

This library is designed for lower latency and higher throughput, employing techniques used in low-latency trading systems.

Transports

Chronicle Network currently supports TCP only.

Planned support for

  • Shared Memory

UDP support can be found in Chronicle Network Enterprise (commercial product - contact [email protected])

Example

TCP Client/Server : Echo Example

The client sends a message to the server, the server immediately responds with the same message back to the client.

The full source code of this example can be found at:

net.openhft.performance.tests.network.SimpleServerAndClientTest.test

Below, some of the key parts of this code are explained in more detail.

TCPRegistry

The TCPRegistry is most useful for unit tests. It lets you either provide a real host and port, say "localhost:8080", or, if you would rather let the application allocate a free port at random, provide a text reference to the port, such as "host.port". You can provide any text you want; it is always treated as a reference unless it is correctly formed as "hostname:port", in which case the exact host and port you provide are used. The reason we offer this functionality is that in unit tests you quite often start one test over loopback, followed by another; if the first test does not shut down correctly, it can affect the second. Giving each test a unique port is one solution, but managing those ports can become a problem in itself, so we created the TCPRegistry to manage them for you. When you come to clean up at the end of each test, all you have to do is call TCPRegistry.reset(), which ensures that any open ports are closed.

// this is the name of a reference to the host name and port,
// allocated automatically to a free port on localhost
final String desc = "host.port";
TCPRegistry.createServerSocketChannelFor(desc);

// we use an event loop rather than lots of threads
EventLoop eg = new EventGroup(true);
eg.start();
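
For contrast, here is a minimal sketch of the fixed host:port form and the per-test cleanup described above. It reuses the same TCPRegistry API; the address "localhost:8080" is purely illustrative.

// a correctly formed "hostname:port" is used verbatim rather than treated as a reference
final String fixedDesc = "localhost:8080";
TCPRegistry.createServerSocketChannelFor(fixedDesc);

// ... run the test against localhost:8080 ...

// at the end of each test, close any ports the registry has opened
TCPRegistry.reset();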

Create and Start the Server

The server is configured with TextWire, so the client must also be configured with TextWire. The port used in this example is determined by the TCPRegistry; in a real production environment you may decide not to use the TCPRegistry, or, if you do use it, you can supply a fixed host:port.

final String expectedMessage = "<my message>";
AcceptorEventHandler eah = new AcceptorEventHandler(desc,
    () -> new WireEchoRequestHandler(WireType.TEXT), VanillaSessionDetails::new, 0, 0);
eg.addHandler(eah);
final SocketChannel sc = TCPRegistry.createSocketChannel(desc);
sc.configureBlocking(false);

Server Message Processing

The server code that processes a message: in this simple example we receive a message and immediately send back a response. However, other patterns can be implemented with Chronicle-Network, such as the server responding later to a client subscription.

/**
 * This code is used to read the tid and payload from a wire message,
 * and send the same tid and message back to the client
 */
public class WireEchoRequestHandler extends WireTcpHandler {

    public WireEchoRequestHandler(@NotNull Function<Bytes, Wire> bytesToWire) {
        super(bytesToWire);
    }

    /**
     * simply reads the csp,tid and payload and sends back the tid and payload
     *
     * @param inWire  the wire from the client
     * @param outWire the wire to be sent back to the client
     * @param sd      details about this session
     */
    @Override
    protected void process(@NotNull WireIn inWire,
                           @NotNull WireOut outWire,
                           @NotNull SessionDetailsProvider sd) {

        inWire.readDocument(m -> {
            outWire.writeDocument(true, meta -> meta.write("tid")
                    .int64(inWire.read("tid").int64()));
        }, d -> {
            outWire.writeDocument(false, data -> data.write("payloadResponse")
                    .text(inWire.read("payload").text()));
        });
    }
}

Create and Start the Client

The client code creates the TcpChannelHub.

The TcpChannelHub is used to send your messages to the server and then read the server's response.

The TcpChannelHub ensures that each response is marshalled back onto the appropriate client thread. It does this through the use of a unique transaction ID (we call this transaction ID the "tid"). When the server responds to the client, it is expected to send back the tid as the very first field in the message. The TcpChannelHub reads the tid from each message and then marshals the message onto the appropriate client thread.

TcpChannelHub tcpChannelHub = new TcpChannelHub(null, eg, WireType.TEXT, "",
    SocketAddressSupplier.uri(desc), false);

In this example we are not implementing fail-over support, so the simple SocketAddressSupplier.uri(desc) is used.

Client Message

Create the message that the client sends to the server:

// the tid must be unique; it is reflected back by the server and must be at the start
// of each message sent from the server to the client. It is used by the client to identify
// which thread will handle this message
final long tid = tcpChannelHub.nextUniqueTransaction(System.currentTimeMillis());

// we will use a text wire backed by an elasticByteBuffer
final Wire wire = new TextWire(Bytes.elasticByteBuffer());

wire.writeDocument(true, w -> w.write("tid").int64(tid));
wire.writeDocument(false, w -> w.write("payload").text(expectedMessage));

Write the Data to the Socket

When you have multiple client threads it's important to lock before writing the data to the socket.

tcpChannelHub.lock(() -> tcpChannelHub.writeSocket(wire));

Read the Reply from the Server

So that the correct reply can be sent to your thread, you must specify the tid:

Wire reply = tcpChannelHub.proxyReply(TimeUnit.SECONDS.toMillis(1), tid);

Check the Result of the Reply

// read the reply and check the result
reply.readDocument(null, data -> {
    final String text = data.read("payloadResponse").text();
    Assert.assertEquals(expectedMessage, text);
});

Shutdown and Cleanup

eg.stop();
TcpChannelHub.closeAllHubs();
TCPRegistry.reset();
tcpChannelHub.close();

Server Threading Strategy

By default the Chronicle-Network server uses a single thread to process all messages. However, if you wish to dedicate each client connection to its own thread, you can change the server threading strategy to:

-DServerThreadingStrategy=CONCURRENT

See the enum net.openhft.chronicle.network.ServerThreadingStrategy for more details.
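
The strategy can also be selected programmatically. A minimal sketch, assuming the property is read when the server components are created, so it must be set before the event group and acceptor are started (the launcher class below is illustrative, not part of the library):

// equivalent to passing -DServerThreadingStrategy=CONCURRENT on the command line;
// valid values are the constants of net.openhft.chronicle.network.ServerThreadingStrategy
public class ServerLauncher {
    public static void main(String[] args) {
        System.setProperty("ServerThreadingStrategy", "CONCURRENT");
        // ... then create the EventGroup and AcceptorEventHandler as shown in the example above ...
    }
}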

Java Version

This library requires Java 8.

Testing

The target environment is TCP over 10 Gig-E Ethernet. In prototype testing, this library showed half the latency and supported 30% more bandwidth.

A key test is that it shouldn't GC more than once (to allow for warm-up) when run with -mx64m.
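
A minimal sketch of how such a check might look, using the standard GarbageCollectorMXBean API; the harness class below is illustrative and not part of the library's test suite:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcCountCheck {
    // sums the collection counts of all collectors in this JVM
    private static long totalGcCount() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans())
            total += Math.max(0, gc.getCollectionCount()); // getCollectionCount() may return -1
        return total;
    }

    public static void main(String[] args) {
        long before = totalGcCount();
        // ... run the client/server workload here, with the JVM started using -mx64m ...
        long after = totalGcCount();
        if (after - before > 1)
            throw new AssertionError("Expected at most one GC (warm-up), saw " + (after - before));
    }
}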

Downsides

This comes at the cost of scalability for a large number of connections. In that situation, this library should perform at least as well as Netty.

Comparisons

Netty

Netty has a much wider range of functionality; however, it creates some garbage in its operation (less than using plain NIO Selectors) and isn't designed to support busy waiting, which gives up a small but significant delay.

Comments
  • Add internal copy-free buffers for TcpEventHandler

    Add internal copy-free buffers for TcpEventHandler

    The TcpEventHandler's internal write and read overhead is ~O(N^2), where N is the number of messages buffered. This means that the handler's capacity will hit a steep performance cliff at a certain workload.

    TL;DR

    100k msg/s: For example, if 100k messages per second are written to a socket, each message is 100 bytes, the saturated socket can accept 100 messages per write attempt, and there is 100 ms latency in steady-state:

    • There are 0.1*100k = 10k messages in the send queue.
    • The 10k messages will occupy 10k*100 = 1 MB of ByteBuffer capacity.
    • There will be 100k/100 = 1000 socket write attempts per second.
    • The ByteBuffer::compact operation will have to copy 1 MB 1000 times per second = 1 GB/s.

    200k msg/s: For example, if 200k messages per second are written to a socket, each message is 100 bytes, the saturated socket can accept 100 messages per write attempt, and there is 100 ms latency in steady-state:

    • There are 0.1*200k = 20k messages in the send queue.
    • The 20k messages will occupy 20k*100 = 2 MB of ByteBuffer capacity.
    • There will be 200k/100 = 2000 socket write attempts per second.
    • The ByteBuffer::compact operation will have to copy 2 MB 2000 times per second = 4 GB/s.

    Thus, the internal write overhead is ~O(N^2). The problem with reading will under some conditions be similar.
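
    The arithmetic above can be condensed into a small sketch; the figures are the workload assumed in this issue, not measurements:

    // back-of-envelope numbers for the 100k msg/s scenario above
    long msgsPerSecond = 100_000;  // messages written per second
    int msgSize = 100;             // bytes per message
    int msgsPerWrite = 100;        // messages the saturated socket accepts per write attempt
    double latencySec = 0.1;       // steady-state latency in seconds

    long queuedMsgs = (long) (msgsPerSecond * latencySec);  // 10,000 messages in the send queue
    long queuedBytes = queuedMsgs * msgSize;                 // ~1 MB resident in the ByteBuffer
    long writesPerSec = msgsPerSecond / msgsPerWrite;        // 1,000 write attempts per second
    long copiedPerSec = queuedBytes * writesPerSec;          // ~1 GB/s copied by ByteBuffer::compact

    Doubling the message rate doubles both the queued bytes and the write attempts per second, so the bytes copied by compact quadruple; hence the ~O(N^2) behaviour.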

    Details

    The internals of the class can be outlined like this:

    (diagram: TcpEventHandler internals)

    This picture shows how ByteBuffer::compact is invoked for every second message sent (assuming that the saturated socket could absorb 2 messages per call on average).

    (diagram: InBBB — compact invoked on every second message)

    It would be better to use a copy-free scheme (e.g. a ring-buffer or a reusable ByteBuffer slice) as shown here:

    (diagram: CircularInBBB — proposed copy-free scheme)

    bug wontfix
    opened by minborg 9
  • ConnectionManager.addListener executes synchronously for connections already present

    ConnectionManager.addListener executes synchronously for connections already present

    When ConnectionManager.addListener is invoked it will execute the listener synchronously for any connections that are already present.

    Usually the code in the listener is executed by the event loop thread, so callers might assume that is always the case and do things that should only be executed in the event loop thread. ~An example of an issue caused by this is https://github.com/ChronicleEnterprise/Chronicle-Services/issues/229.~

    I think a better approach would be to keep track of new listeners and execute them in the event loop rather than synchronously in the add.

    opened by nicktindall 8
  • WireTcpHandlerTest fails due to heartbeat

    WireTcpHandlerTest fails due to heartbeat

    Two of the three tests in WireTcpHandlerTest fail for me with the following exception on TextWire and BinaryWire. Curiously, RawWire seems to pass without a problem.

    I can work around this by commenting out the heartbeats in WireTcpHandler.sendHeartBeat.

    java.lang.UnsupportedOperationException: Unordered fields not supported yet. key=key1, was=heartbeat, data='heartbeat'
        at net.openhft.chronicle.wire.TextWire.read(TextWire.java:260)
        at net.openhft.performance.tests.network.TestData.lambda$read$5(TestData.java:46)
        at net.openhft.performance.tests.network.TestData$$Lambda$13/742394451.readMarshallable(Unknown Source)
        at net.openhft.chronicle.wire.Wires.lambda$readData$58(Wires.java:132)
        at net.openhft.chronicle.wire.Wires$$Lambda$14/1916389359.accept(Unknown Source)
        at net.openhft.chronicle.bytes.StreamingDataInput.lambda$readWithLength$1(StreamingDataInput.java:42)
        at net.openhft.chronicle.bytes.StreamingDataInput$$Lambda$15/1063288177.apply(Unknown Source)
        at net.openhft.chronicle.bytes.StreamingDataInput.parseWithLength(StreamingDataInput.java:54)
        at net.openhft.chronicle.bytes.StreamingDataInput.readWithLength(StreamingDataInput.java:41)
        at net.openhft.chronicle.wire.Wires.readData(Wires.java:132)
        at net.openhft.chronicle.wire.WireIn.readDocument(WireIn.java:82)
        at net.openhft.performance.tests.network.TestData.read(TestData.java:45)
        at net.openhft.performance.tests.network.WireTcpHandlerTest.testLatency(WireTcpHandlerTest.java:113)
        at net.openhft.performance.tests.network.WireTcpHandlerTest.testProcess(WireTcpHandlerTest.java:153)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.junit.runners.Suite.runChild(Suite.java:127)
        at org.junit.runners.Suite.runChild(Suite.java:26)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
        at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:78)
        at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:212)
        at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:68)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)

    opened by davidjones7076 8
  • Connection reset exceptions are no longer reliably suppressed

    Connection reset exceptions are no longer reliably suppressed

    We have code in TCPEventHandler to log connection reset errors at TRACE level, where other connection errors are logged at DEBUG or WARN. There is no specific exception for connection reset so we inspect IOExceptions to see if they look like one of the exceptions thrown when reading from/writing to a reset connection.

    https://github.com/OpenHFT/Chronicle-Network/blob/76ffaf7b06811254c3dc61c408d2cf881d8eef48/src/main/java/net/openhft/chronicle/network/TcpEventHandler.java#L478

    In JDKs 13 and above, a new one of these has been introduced. https://github.com/openjdk/jdk/commit/3a4d5db248d74020b7448b64c9f0fc072fc80470

    We should detect that as well so we don't log spurious warnings.
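
    A minimal sketch of the kind of message-based check involved (illustrative only, not the library's actual predicate):

    // there is no dedicated "connection reset" exception type, so the IOException's message
    // is inspected; the JDK 13+ SocketException("Connection reset") form is covered as well
    static boolean looksLikeConnectionReset(java.io.IOException e) {
        final String msg = e.getMessage();
        return msg != null && msg.toLowerCase().contains("connection reset");
    }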

    opened by nicktindall 7
  • TcpSocketConsumer java.lang.AssertionError: Found tid=1486392198574 in the old map

    TcpSocketConsumer java.lang.AssertionError: Found tid=1486392198574 in the old map

    While trying to use ChronicleEngine to access a queue remotely, we occasionally see the following AssertionError (once every few runs):

    524748  [1, Demo Asset remote]
    2017-02-06T16:43:18.847 /TcpChannelHub-Reads-(none) TcpSocketConsumer java.lang.AssertionError: Found tid=1486392198574 in the old map.
            at net.openhft.chronicle.network.connection.TcpChannelHub$TcpSocketConsumer.processData(TcpChannelHub.java:1346)
            at net.openhft.chronicle.network.connection.TcpChannelHub$TcpSocketConsumer.running(TcpChannelHub.java:1231)
            at net.openhft.chronicle.network.connection.TcpChannelHub$TcpSocketConsumer.lambda$start$3(TcpChannelHub.java:1160)
            at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
            at java.util.concurrent.FutureTask.run(FutureTask.java:266)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
            at java.lang.Thread.run(Thread.java:745)
    

    and sometimes (not always) it is then immediately followed in the logs by this:

    525660  [2, Demo Asset remote]
    2017-02-06T16:43:18.852 tree-1/conc-event-loop-0 TcpEventHandler java.io.IOException: Broken pipe
            at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
            at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
            at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
            at sun.nio.ch.IOUtil.write(IOUtil.java:51)
            at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
            at net.openhft.chronicle.network.TcpEventHandler.tryWrite(TcpEventHandler.java:332)
            at net.openhft.chronicle.network.TcpEventHandler.invokeHandler(TcpEventHandler.java:253)
            at net.openhft.chronicle.network.TcpEventHandler.action(TcpEventHandler.java:183)
            at net.openhft.chronicle.threads.VanillaEventLoop.runAllMediumHandler(VanillaEventLoop.java:286)
            at net.openhft.chronicle.threads.VanillaEventLoop.run(VanillaEventLoop.java:206)
            at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
            at java.util.concurrent.FutureTask.run(FutureTask.java:266)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
            at java.lang.Thread.run(Thread.java:745)
    

    After the assertion error (with or without the IOException) the logs are silent for a few seconds, and then there is what appears to be a loop of acquires and releases (onAcquired/onReleased callbacks) that never stops.

    Being an assertion, I take it to mean it's a code error that should never occur, but the cryptic message doesn't help much in understanding what the issue is... we hope it means something to you and can be fixed soon :-)

    opened by amichair 5
  • Minimum time between attempts to reconnect

    Minimum time between attempts to reconnect

    See net.openhft.chronicle.network.ConnectionStrategy#pauseMillisBeforeReconnect and net.openhft.chronicle.network.AlwaysStartOnPrimaryConnectionStrategy#pauseMillisBeforeReconnect

    I would appreciate a review from the original authors @RobAustin @peter-k-lawrey @dpisklov

    See also https://github.com/ChronicleEnterprise/Chronicle-FIX/issues/299

    opened by JerryShea 4
  • UberHandler heartbeat timeout being reset on send OR receive

    UberHandler heartbeat timeout being reset on send OR receive

    The timeout in the UberHandler's HeartbeatHandler is being reset on every send OR receive. So it effectively never triggers. As long as you're sending heartbeats, it won't time-out, even if it's not receiving any from the other side:

    https://github.com/OpenHFT/Chronicle-Network/blob/7d76645288747fb66a8a8ff963e63c1451e17414/src/main/java/net/openhft/chronicle/network/cluster/handlers/UberHandler.java#L289-L293
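
    A minimal sketch of the behaviour this issue argues for (the names are illustrative, not the actual UberHandler/HeartbeatHandler code): only received traffic should push the deadline out.

    final class ReceiveDrivenHeartbeatTimeout {
        private final long timeoutNanos;
        private volatile long lastReceiveNanos = System.nanoTime();

        ReceiveDrivenHeartbeatTimeout(long timeoutNanos) { this.timeoutNanos = timeoutNanos; }

        void onReceive()   { lastReceiveNanos = System.nanoTime(); } // receiving resets the timeout
        void onSend()      { /* sending intentionally does NOT reset the timeout */ }
        boolean timedOut() { return System.nanoTime() - lastReceiveNanos > timeoutNanos; }
    }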

    opened by nicktindall 3
  • ConnectionListener#onDisconnected is called twice for every disconnect

    ConnectionListener#onDisconnected is called twice for every disconnect

    (when using the UberHandler at least)

    It's called here

    https://github.com/OpenHFT/Chronicle-Network/blob/5d53d322e4b1713c6e8ce24326038f4e382a0b53/src/main/java/net/openhft/chronicle/network/cluster/handlers/UberHandler.java#L157

    and here

    https://github.com/OpenHFT/Chronicle-Network/blob/5d53d322e4b1713c6e8ce24326038f4e382a0b53/src/main/java/net/openhft/chronicle/network/cluster/handlers/HeartbeatHandler.java#L227-L228

    opened by nicktindall 3
  • Allow the VanillaNetworkContext to be used for all network contexts

    Allow the VanillaNetworkContext to be used for all network contexts

    Allow the VanillaNetworkContext to be used for all network contexts, so that a single port can serve a number of uses; in other words, remove the

    QueueClusterNetworkContext
    DatagridWireNetworkContext
    MapClusterNetworkContext
    

    and allow all the data to be stored and configured in the VanillaNetworkContext.

    wontfix 
    opened by RobAustin 3
  • Refactor pauser defaults. De-deprecate

    Refactor pauser defaults. De-deprecate

    See also https://github.com/ChronicleEnterprise/Chronicle-Queue-Enterprise/pull/391 & https://github.com/ChronicleEnterprise/Chronicle-Map-Enterprise/pull/112

    Signed-off-by: Jerry Shea [email protected]

    opened by JerryShea 2
  • Long buffered read log has incorrect CPU and affinity details on it

    Long buffered read log has incorrect CPU and affinity details on it

    The long buffered read message

    2022-05-13T11:05:41.469 INFO [main/fix-out-~monitor] [TcpEventHandler$StatusMonitorEventHandler] - Non blocking TcpEventHandler read took 21420 us, CPU: 30, affinity {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31}
    

    displays the CPU and affinity details of the monitor thread, not the thread where the long read was detected.

    bug 
    opened by nicktindall 2
  • VanillaNetworkContextTest.networkStatsListenerShouldNotBeClosedOnBackgroundResourceReleaserThread flaky

    VanillaNetworkContextTest.networkStatsListenerShouldNotBeClosedOnBackgroundResourceReleaserThread flaky

    java.lang.AssertionError: Closeables still open
      at net.openhft.chronicle.core.io.AbstractCloseable.assertCloseablesClosed(AbstractCloseable.java:160)
      at net.openhft.chronicle.core.io.AbstractReferenceCounted.assertReferencesReleased(AbstractReferenceCounted.java:61)
      at net.openhft.chronicle.network.NetworkTestCommon.assertReferencesReleased(NetworkTestCommon.java:27)
      at net.openhft.chronicle.network.NetworkTestCommon.afterChecks(NetworkTestCommon.java:74)
      at jdk.internal.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
      Suppressed: java.lang.IllegalStateException: Not closed IdempotentLoopStartedEventHandler@1 closed=false
        at net.openhft.chronicle.core.io.AbstractCloseable.assertCloseablesClosed(AbstractCloseable.java:184)
        ... 31 more
      Caused by: net.openhft.chronicle.core.StackTrace: net.openhft.chronicle.threads.MonitorEventLoop$IdempotentLoopStartedEventHandler created here on main/testAcceptorcore-event-loop
        at net.openhft.chronicle.core.io.AbstractCloseable.<init>(AbstractCloseable.java:74)
        at net.openhft.chronicle.threads.MonitorEventLoop$IdempotentLoopStartedEventHandler.<init>(MonitorEventLoop.java:173)
        at net.openhft.chronicle.threads.MonitorEventLoop.addHandler(MonitorEventLoop.java:97)
        at net.openhft.chronicle.threads.EventGroup.addHandler(EventGroup.java:299)
        at net.openhft.chronicle.network.TcpEventHandler.eventLoop(TcpEventHandler.java:169)
        at net.openhft.chronicle.threads.VanillaEventLoop.addNewHandler(VanillaEventLoop.java:210)
        at net.openhft.chronicle.threads.MediumEventLoop.acceptNewHandlers(MediumEventLoop.java:462)
        at net.openhft.chronicle.threads.MediumEventLoop.runLoop(MediumEventLoop.java:280)
        at net.openhft.chronicle.threads.MediumEventLoop.run(MediumEventLoop.java:223)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:829)
        at net.openhft.chronicle.core.threads.CleaningThread.run(CleaningThread.java:156)
    
    flaky 
    opened by alamar 0