Free and open source log management

Overview

Graylog

Welcome! Graylog is an open source log management platform.

You can read more about the project on our website and check out the documentation on the documentation site.

Issue Tracking

Found a bug? Have an idea for an improvement? Feel free to add an issue.

Contributing

Help us build the future of log management and be part of a project that is used by thousands of people out there every day.

Follow the contributors guide and read the contributing instructions to get started.

Do you want to get paid for developing our open source product? Apply for one of our jobs!

Staying in Touch

Come chat with us in the #graylog channel on freenode IRC or create a topic in our community discussion forums.

License

Graylog is released under version 1 of the Server Side Public License (SSPL).

Comments
  • Elasticsearch 7 Support

    Expected Behavior

    Graylog 3.0 should work with Elasticsearch 7

    2019-05-08 16:36:57,413 ERROR: org.graylog2.periodical.ConfigurationManagementPeriodical - Error while running migration <V20170607164210_MigrateReopenedIndicesToAliases{2017-06-07T16:42:10Z}>
    org.graylog2.indexer.ElasticsearchException: Unsupported Elasticsearch version: 7.0.1
    
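    The migration gives up because the server refuses to talk to an Elasticsearch major version it does not recognize. For illustration, that kind of guard boils down to something like this (a minimal sketch with hypothetical names, not Graylog's actual implementation):

        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.Set;

        public class VersionGuard {
            // Graylog 3.0 predates Elasticsearch 7, so only the 5.x and 6.x
            // client code paths exist (hypothetical constant).
            private static final Set<Integer> SUPPORTED_MAJORS = new HashSet<>(Arrays.asList(5, 6));

            static void checkVersion(String esVersion) {
                final int major = Integer.parseInt(esVersion.split("\\.")[0]);
                if (!SUPPORTED_MAJORS.contains(major)) {
                    throw new IllegalStateException("Unsupported Elasticsearch version: " + esVersion);
                }
            }

            public static void main(String[] args) {
                checkVersion("6.8.0"); // passes
                checkVersion("7.0.1"); // throws, matching the log output above
            }
        }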

    Your Environment

    • Graylog Version: 3
    elasticsearch infrastructure #XL to-test 
    opened by jalogisch 66
  • Appliance upgrade - reconfigure fails "add node to server list" ECONNREFUSED port 4001

    Upgrading a previously upgraded appliance from 2.2.3 to 2.3.0.

    During the reconfigure step, it fails connecting to 127.0.0.1 port 4001.

       - execute /opt/graylog/embedded/bin/graylog-ctl start graylog-server
      * ruby_block[add node to server list] action run
    
        ================================================================================
        Error executing action `run` on resource 'ruby_block[add node to server list]'
        ================================================================================
    
        Errno::ECONNREFUSED
        -------------------
        Connection refused - connect(2) for "127.0.0.1" port 4001
    
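    The ECONNREFUSED means nothing was listening on 127.0.0.1:4001 (presumably the appliance's embedded etcd, which graylog-ctl uses for configuration state). A quick way to confirm whether the port is reachable, as a minimal sketch:

        import java.net.InetSocketAddress;
        import java.net.Socket;

        public class PortProbe {
            public static void main(String[] args) {
                try (Socket socket = new Socket()) {
                    // Fails with "Connection refused" if nothing listens on 4001.
                    socket.connect(new InetSocketAddress("127.0.0.1", 4001), 2000);
                    System.out.println("Port 4001 is accepting connections");
                } catch (java.io.IOException e) {
                    System.out.println("Port 4001 unreachable: " + e.getMessage());
                }
            }
        }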

    Expected Behavior

    Completion

    Current Behavior

    Fails with a long error message. Stack trace attached as requested: chef-stacktrace.zip

    Possible Solution

    Steps to Reproduce (for bugs)

    1. Start with an appliance previously upgraded to 2.2.3.
    2. wget https://packages.graylog2.org/releases/graylog-omnibus/ubuntu/graylog_latest.deb
    3. sudo graylog-ctl stop
    4. sudo dpkg -G -i graylog_latest.deb -- no errors noted on the previous steps
    5. sudo graylog-ctl reconfigure -- fails during this step.

    Running reconfigure again results in the same error message. Rebooting the appliance results in an nginx page.

    Context

    Your Environment

    VM hosted on hyperv - been running for two years without issues - have upgraded in past without issue as well.

    • Graylog Version:
    • Elasticsearch Version:
    • MongoDB Version:
    • Operating System:
    • Browser version:
    bug 
    opened by robdig 51
  • Output buffer not being filled with new messages

    Expected Behavior

    I expect messages to be moved from the processor buffer to the output buffer continuously.

    Current Behavior

    The processor buffer fills up while the output buffer stays empty; nothing is written to the database.

    Possible Solution

    Possibly a thread deadlock or something like that.

    Steps to Reproduce (for bugs)

    I reproduce this by putting a lot of messages into Graylog; it works for some time and then stops sending messages to the ES cluster. I have no idea how you can reproduce this.

    Context

    Pushing a large volume of messages (circa 10k/s) into Graylog.

    Thread dump of the affected node (6e52bb4b / graylog2), taken at Wed Nov 08 2017 11:32:36 GMT+0100:

    "outputbufferprocessor-0" id=32 state=WAITING
        - waiting on <0x1d302449> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        - locked <0x1d302449> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at com.lmax.disruptor.BlockingWaitStrategy.waitFor(BlockingWaitStrategy.java:45)
        at com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
        at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:148)
        at com.codahale.metrics.InstrumentedThreadFactory$InstrumentedRunnable.run(InstrumentedThreadFactory.java:66)
        at java.lang.Thread.run(Thread.java:748)
    "outputbuffer-processor-executor-0" id=219 state=WAITING
        - waiting on <0x2e3c0fa7> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        - locked <0x2e3c0fa7> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    "outputbufferprocessor-1" id=33 state=WAITING
        - waiting on <0x1d302449> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        - locked <0x1d302449> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at com.lmax.disruptor.BlockingWaitStrategy.waitFor(BlockingWaitStrategy.java:45)
        at com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
        at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:148)
        at com.codahale.metrics.InstrumentedThreadFactory$InstrumentedRunnable.run(InstrumentedThreadFactory.java:66)
        at java.lang.Thread.run(Thread.java:748)
    
    "outputbufferprocessor-2" id=34 state=WAITING
        - waiting on <0x1d302449> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        - locked <0x1d302449> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at com.lmax.disruptor.BlockingWaitStrategy.waitFor(BlockingWaitStrategy.java:45)
        at com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
        at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:148)
        at com.codahale.metrics.InstrumentedThreadFactory$InstrumentedRunnable.run(InstrumentedThreadFactory.java:66)
        at java.lang.Thread.run(Thread.java:748)
    
    
    "outputbuffer-processor-executor-0" id=220 state=WAITING
        - waiting on <0x59428d5b> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        - locked <0x59428d5b> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    
    "outputbuffer-processor-executor-1" id=221 state=WAITING
        - waiting on <0x59428d5b> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        - locked <0x59428d5b> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    
    "outputbuffer-processor-executor-2" id=222 state=WAITING
        - waiting on <0x59428d5b> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        - locked <0x59428d5b> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    
    "outputbuffer-processor-executor-0" id=223 state=WAITING
        - waiting on <0x11eb6f6a> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        - locked <0x11eb6f6a> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    
    "outputbuffer-processor-executor-1" id=224 state=WAITING
        - waiting on <0x2e3c0fa7> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        - locked <0x2e3c0fa7> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    
    "outputbuffer-processor-executor-2" id=225 state=WAITING
        - waiting on <0x2e3c0fa7> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        - locked <0x2e3c0fa7> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    
    "outputbuffer-processor-executor-1" id=226 state=WAITING
        - waiting on <0x11eb6f6a> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        - locked <0x11eb6f6a> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    
    "outputbuffer-processor-executor-2" id=227 state=WAITING
        - waiting on <0x11eb6f6a> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        - locked <0x11eb6f6a> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    
    
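    For reference, a dump like the one above can also be captured in-process via JMX instead of jstack; a minimal sketch:

        import java.lang.management.ManagementFactory;
        import java.lang.management.ThreadInfo;
        import java.lang.management.ThreadMXBean;

        public class ThreadDumper {
            public static void main(String[] args) {
                final ThreadMXBean threads = ManagementFactory.getThreadMXBean();
                // Passing true/true also reports monitors and ownable synchronizers,
                // which is what reveals the WAITING-on-condition states shown above.
                for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
                    System.out.print(info);
                }
            }
        }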

    Your Environment

    • Graylog Version: v2.3.2+3df951e
    • Elasticsearch Version: 5.6.3
    • MongoDB Version: v3.4.10
    • Operating System: RHEL 7.3
    • Browser version: irrelevant
    opened by madiTG 46
  • Kafka upgrade - compatible with old and new versions

    Currently Graylog2 is limited to running with Kafka 0.9.0.1 or lower versions. This change overcomes that by being compatible with versions from the old 0.9.0.1 up to 1.1.0.

    Description

    Keep org.apache.kafka:kafka_2.11:0.9.0.1, so that the message journal stays compatible with previous versions of Graylog. (No changes to the pom and Kafka versions.)

    Created a new Kafka transport instead of changing the existing KafkaTransport class, and used the new transport to create a new Kafka input. The old input still works with older Kafka brokers. This way users can decide which version of Kafka they want to support, and existing setups won't break.
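
    For illustration, the new-style client consumes directly from the brokers instead of going through ZooKeeper; roughly like this (a minimal sketch with placeholder broker, group, and topic, not the PR's actual transport code):

        import java.util.Collections;
        import java.util.Properties;

        import org.apache.kafka.clients.consumer.ConsumerRecord;
        import org.apache.kafka.clients.consumer.ConsumerRecords;
        import org.apache.kafka.clients.consumer.KafkaConsumer;

        public class NewKafkaTransportSketch {
            public static void main(String[] args) {
                final Properties props = new Properties();
                props.put("bootstrap.servers", "broker1:9092"); // brokers, not ZooKeeper
                props.put("group.id", "graylog-input");
                props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
                props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

                try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                    consumer.subscribe(Collections.singletonList("logs"));
                    while (true) {
                        // poll(long) is the signature available across the 0.9-1.1 client range.
                        final ConsumerRecords<byte[], byte[]> records = consumer.poll(1000L);
                        for (ConsumerRecord<byte[], byte[]> record : records) {
                            System.out.printf("%d bytes from %s%n", record.value().length, record.topic());
                        }
                    }
                }
            }
        }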

    Motivation and Context

    This change is required to make Graylog2 run with newer Kafka versions like 1.1.0 while not breaking with the old Kafka version 0.9.0.1.

    How Has This Been Tested?

    Tested with different versions of Kafka brokers.

    Screenshots (if appropriate):

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [x] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [x] I have read the CONTRIBUTING document.
    • [ ] I have added tests to cover my changes.
    • [x] All new and existing tests passed.
    ready-for-review 
    opened by muralibasani 34
  • Support for Cisco ASA Netflow

    Hi,

    Do you think the plugin supports a Cisco ASA firewall running OS 9.8?

    It seems that I have the following issue:

    	at org.graylog.plugins.netflow.flows.NetFlowFormatter.toMessageString(NetFlowFormatter.java:54) ~[?:?]
    	at org.graylog.plugins.netflow.flows.NetFlowFormatter.toMessage(NetFlowFormatter.java:119) ~[?:?]
    	at org.graylog.plugins.netflow.codecs.NetFlowCodec.lambda$decodeV9$2(NetFlowCodec.java:160) ~[?:?]
    	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[?:1.8.0_151]
    	at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175) ~[?:1.8.0_151]
    	at java.util.Collections$2.tryAdvance(Collections.java:4717) ~[?:1.8.0_151]
    	at java.util.Collections$2.forEachRemaining(Collections.java:4725) ~[?:1.8.0_151]
    	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[?:1.8.0_151]
    	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[?:1.8.0_151]
    	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) ~[?:1.8.0_151]
    	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_151]
    	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499) ~[?:1.8.0_151]
    	at org.graylog.plugins.netflow.codecs.NetFlowCodec.lambda$decodeV9$3(NetFlowCodec.java:161) ~[?:?]
    	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[?:1.8.0_151]
    	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1380) ~[?:1.8.0_151]
    	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[?:1.8.0_151]
    	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[?:1.8.0_151]
    	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) ~[?:1.8.0_151]
    	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_151]
    	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499) ~[?:1.8.0_151]
    	at org.graylog.plugins.netflow.codecs.NetFlowCodec.decodeV9(NetFlowCodec.java:163) ~[?:?]
    	at org.graylog.plugins.netflow.codecs.NetFlowCodec.decodeMessages(NetFlowCodec.java:134) ~[?:?]
    	at org.graylog2.shared.buffers.processors.DecodingProcessor.processMessage(DecodingProcessor.java:148) ~[graylog.jar:?]
    	at org.graylog2.shared.buffers.processors.DecodingProcessor.onEvent(DecodingProcessor.java:91) [graylog.jar:?]
    	at org.graylog2.shared.buffers.processors.ProcessBufferProcessor.onEvent(ProcessBufferProcessor.java:74) [graylog.jar:?]
    	at org.graylog2.shared.buffers.processors.ProcessBufferProcessor.onEvent(ProcessBufferProcessor.java:42) [graylog.jar:?]
    	at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:143) [graylog.jar:?]
    	at com.codahale.metrics.InstrumentedThreadFactory$InstrumentedRunnable.run(InstrumentedThreadFactory.java:66) [graylog.jar:?]
    	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
    2018-02-07 09:08:45,042 ERROR: org.graylog2.shared.buffers.processors.DecodingProcessor - Unable to decode raw message RawMessage{id=7fdd18e0-0be6-11e8-8e32-86ed9180ca75, journalOffset=165877, codec=netflow, payloadSize=1673, timestamp=2018-02-07T09:08:45.038Z, remoteAddress=/10.161.84.192:10977} on input <5a7abcc01509a000010b2670>.
    2018-02-07 09:08:45,042 ERROR: org.graylog2.shared.buffers.processors.DecodingProcessor - Error processing message RawMessage{id=7fdd18e0-0be6-11e8-8e32-86ed9180ca75, journalOffset=165877, codec=netflow, payloadSize=1673, timestamp=2018-02-07T09:08:45.038Z, remoteAddress=/10.161.84.192:10977}
    java.lang.NullPointerException: null
    	at org.graylog.plugins.netflow.flows.NetFlowFormatter.toMessageString(NetFlowFormatter.java:54) ~[?:?]
    	at org.graylog.plugins.netflow.flows.NetFlowFormatter.toMessage(NetFlowFormatter.java:119) ~[?:?]
    	at org.graylog.plugins.netflow.codecs.NetFlowCodec.lambda$decodeV9$2(NetFlowCodec.java:160) ~[?:?]
    	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[?:1.8.0_151]
    	at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175) ~[?:1.8.0_151]
    	at java.util.Collections$2.tryAdvance(Collections.java:4717) ~[?:1.8.0_151]
    	at java.util.Collections$2.forEachRemaining(Collections.java:4725) ~[?:1.8.0_151]
    	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[?:1.8.0_151]
    	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[?:1.8.0_151]
    	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) ~[?:1.8.0_151]
    	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_151]
    	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499) ~[?:1.8.0_151]
    	at org.graylog.plugins.netflow.codecs.NetFlowCodec.lambda$decodeV9$3(NetFlowCodec.java:161) ~[?:?]
    	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[?:1.8.0_151]
    	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1380) ~[?:1.8.0_151]
    	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[?:1.8.0_151]
    	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[?:1.8.0_151]
    	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) ~[?:1.8.0_151]
    	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_151]
    	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499) ~[?:1.8.0_151]
    	at org.graylog.plugins.netflow.codecs.NetFlowCodec.decodeV9(NetFlowCodec.java:163) ~[?:?]
    	at org.graylog.plugins.netflow.codecs.NetFlowCodec.decodeMessages(NetFlowCodec.java:134) ~[?:?]
    	at org.graylog2.shared.buffers.processors.DecodingProcessor.processMessage(DecodingProcessor.java:148) ~[graylog.jar:?]
    	at org.graylog2.shared.buffers.processors.DecodingProcessor.onEvent(DecodingProcessor.java:91) [graylog.jar:?]
    	at org.graylog2.shared.buffers.processors.ProcessBufferProcessor.onEvent(ProcessBufferProcessor.java:74) [graylog.jar:?]
    	at org.graylog2.shared.buffers.processors.ProcessBufferProcessor.onEvent(ProcessBufferProcessor.java:42) [graylog.jar:?]
    	at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:143) [graylog.jar:?]
    	at com.codahale.metrics.InstrumentedThreadFactory$InstrumentedRunnable.run(InstrumentedThreadFactory.java:66) [graylog.jar:?]
    	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
    
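    Cisco ASA exports NSEL-flavoured NetFlow v9, which omits several fields that classic NetFlow templates always carry, so the NullPointerException in NetFlowFormatter.toMessageString() is plausibly an unguarded dereference of a missing field. A defensive lookup might look like this (a hypothetical helper, not the plugin's actual code):

        import java.util.HashMap;
        import java.util.Map;

        public class SafeFlowFormatter {
            // Hypothetical: fall back to a default when an ASA template omits a field.
            static String fieldOrDefault(Map<String, Object> flow, String key, String fallback) {
                final Object value = flow.get(key);
                return value == null ? fallback : value.toString();
            }

            public static void main(String[] args) {
                final Map<String, Object> flow = new HashMap<>();
                flow.put("srcaddr", "10.161.84.192"); // no dstaddr in this record
                System.out.println(fieldOrDefault(flow, "dstaddr", "unknown")); // prints "unknown"
            }
        }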

    Thanks, Sergiu Plotnicu

    opened by sppwf 32
  • Add global Index Sets Defaults system configuration

    Description

    Adds a new Index Set Defaults section to the System > Configurations page, where defaults for index set creation can be specified. These defaults can also be configured via the API. The new defaults apply to index creation everywhere in Graylog (currently Illuminate installations and the System > Index Sets > Create Index Set flow).

    On the first server boot, the configuration values are initialized from the server.conf values via the ElasticsearchConfiguration class. This allows both the index defaults configuration and the system indices (the Default, Events, and System Events index sets) to be created with the specified defaults.

    The @Deprecated annotation was also removed from the corresponding server configuration values, since we intend to continue using them as initialization defaults.

    The following new defaults are also being used:

    • Shards: 1
    • Rotation Strategy: Size: 30GB

    Note that the config UI is hidden for Cloud environments.

    The default graylog.conf was also updated:

    • Comment out default properties, so that the defaults specified in ElasticsearchConfiguration will take precedence for users who establish a new configuration file from the template.
    • Re-order the properties for consistency.
    • Formatting adjustments.

    Closes https://github.com/Graylog2/graylog-plugin-enterprise/issues/3264 https://github.com/Graylog2/graylog-plugin-enterprise/issues/3319

    /jenkins-pr-deps https://github.com/Graylog2/graylog-plugin-enterprise/pull/3803

    Motivation and Context

    The goal is to achieve globally applicable index set defaults which apply uniformly throughout the application. Previously, hard-coded/arbitrary values were used on the Create Index Set page and in the new Illuminate index creation process. This change standardizes the initialization and use of central defaults.

    Testing

    • Verify that the new System > Configurations > Index Set Defaults are initialized with the correct defaults specified in the server.conf file. You might need to delete the configuration from Mongo to test multiple times (db.cluster_config.deleteOne({"type": /IndexSetsDefaultConfiguration/}); shown as a ready-to-paste snippet after this list). Also verify that the values can be customized.
    • Verify that the Default, Events, and System Events indices are also created with the correct defaults.
    • Verify that the System > Index Sets > Create Index page uses the correct defaults. Verify the same for the Illuminate index set.
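
    The cleanup command from the first step above, ready to paste into a mongo shell session connected to the Graylog database:

        db.cluster_config.deleteOne({"type": /IndexSetsDefaultConfiguration/})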

    Index sets to test:

    • Default
    • Events
    • System Events
    • Indexer Failures
    • Restored Archives

    How Has This Been Tested?

    Testing will be done after the remaining tasks are finished.

    Screenshots (if appropriate):

    A new Index Sets Configuration section has been added: image

    image

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Refactoring (non-breaking change)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [x] My code follows the code style of this project.
    • [x] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have added tests to cover my changes.
    opened by danotorrey 31
  • AMQP Support

    I know this is on the roadmap, but I wanted to create a ticket to track it.

    Kafka is an interesting solution, but ideally I'd like AMQP support back ASAP, as we already have RabbitMQ in use.

    opened by jaxxstorm 31
  • Dashboard widget does not update data

    I created a dashboard and added a widget that is supposed to report the number of IIS log entries for my servers within the last two hours. It does not seem to update. When I click the "Replay search" button and view the search results, the data is different (and accurate, I believe). The query is simple.

    SourceName:IIS Time: Last 2 hours

    Steps to reproduce:

    1. Collect some IIS log data.
    2. Search the last two hours of data by using the Elasticsearch query SourceName:IIS.
    3. Drill down on the Source field and click on Quick values.
    4. Click the Add to dashboard pull-down box and add it to my dashboard ("Beth's").
    5. Choose show pie chart and data table.
    6. View the dashboard and see the new widget.
    7. You see other widgets updating their data.
    8. This widget does not update.
    9. Close your browser windows and re-open. Same problem.
    10. Try a different browser (Edge), see the same problem.

    Environment

    • Graylog Version: 2.0.1 appliance
    • Elasticsearch Version: 2.3.1
    • MongoDB Version: 3.2.5
    • Operating System: Ubuntu 14.04
    • Browser version: Chrome 50.0.2661.102 (primary), Edge 25.10586.0.0 (secondary testing)

    image

    old_data

    bug S3 P2 triaged 
    opened by OlympiaLady 29
  • Support for AD nested groups

    It would be great if Graylog could support so-called nested groups in AD. The easiest way to do this would be to make the group member attribute configurable. You can then use a special object ID, as described in https://msdn.microsoft.com/en-us/library/windows/desktop/aa746475(v=vs.85).aspx.
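
    The special object ID the MSDN article refers to is the LDAP_MATCHING_RULE_IN_CHAIN matching rule, 1.2.840.113556.1.4.1941, which makes AD resolve group membership transitively. A user search filter using it would look roughly like this (the group DN is a placeholder):

        (&(objectClass=user)(memberOf:1.2.840.113556.1.4.1941:=CN=Graylog Users,OU=Groups,DC=example,DC=com))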

    Some information on how this is handled in some other tool (Nexus): https://support.sonatype.com/entries/31005457-How-to-Configure-Nexus-to-use-Active-Directory-Nested-Groups

    feature triaged ldap 
    opened by andham 28
  • API browser (Swagger interface) does not work with AWS appliance

    I've implemented Graylog using an AWS AMI (ami-d5cc48b5), following the directions at http://docs.graylog.org/en/2.2/pages/installation/aws.html. Everything is working fine except for the Swagger API browser, which does not appear to do anything when you use it.

    Expected Behavior

    When accessing the Swagger API browser, clicking on links should do something ("Show/Hide" methods, "List Operations," etc.)

    Current Behavior

    When I load the page (https://graylog.example.com/api/api-browser) there are a number of errors in the console. I've attached the output from Chrome 56.0.2924.87 (64-bit) on page load: graylog.allstardirectories.com-1487096853399.txt

    Clicking any of the links on the page (besides the "Raw" links) does not appear to do anything besides changing the text color to black; the console does not produce any additional output. The "Raw" links always produce an Internal Server Error (if a parameter is required, e.g. cluster/node/metrics) or a response (if no parameter is required, e.g. cluster/metrics). Neither type of "Raw" link produces any output in the console.
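
    As a workaround while the browser page is broken, the same endpoints can be called directly with HTTP Basic auth; a minimal sketch (URL and credentials are placeholders, the endpoint name is taken from the report above):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.Base64;

        public class ApiProbe {
            public static void main(String[] args) throws Exception {
                final URL url = new URL("https://graylog.example.com/api/cluster/metrics");
                final HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                final String auth = Base64.getEncoder().encodeToString("admin:password".getBytes("UTF-8"));
                conn.setRequestProperty("Authorization", "Basic " + auth);
                conn.setRequestProperty("Accept", "application/json");
                try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println(line);
                    }
                }
            }
        }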

    Possible Solution

    Steps to Reproduce (for bugs)

    1. Install Graylog following the documentation here: http://docs.graylog.org/en/2.2/pages/installation/aws.html
    2. Configure system using graylog-ctl script and graylog-settings.json.
    3. Open the REST API browser page.
    4. Click around; nothing happens.

    Context

    We are trying to use the API browser to assist with building external scripts that will access Graylog data via the API.

    Your Environment

    root@aws-gravux01:~# cat /etc/graylog/graylog-settings.json
    {
      "timezone": "America/Los_Angeles",
      "smtp_server": "email-smtp.us-west-2.amazonaws.com",
      "smtp_port": 587,
      "smtp_user": "XXXXXXXXXXXXXXX",
      "smtp_password": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
      "smtp_from_email": "[email protected]",
      "smtp_web_url": "https://graylog.example.com",
      "smtp_no_tls": false,
      "smtp_no_ssl": true,
      "master_node": "127.0.0.1",
      "local_connect": false,
      "current_address": "172.30.0.189",
      "last_address": "172.30.0.189",
      "enforce_ssl": true,
      "journal_size": 1,
      "node_id": "296c16ef-c67a-4e05-b1e0-763897135704",
      "internal_logging": true,
      "web_listen_uri": "http://127.0.0.1:9000/",
      "web_endpoint_uri": false,
      "rest_listen_uri": "http://127.0.0.1:9000/api",
      "rest_transport_uri": "https://graylog.example.com/api",
      "external_rest_uri": "https://graylog.example.com/api",
      "custom_attributes": {
        "graylog-server": { "memory": "1800m" },
        "elasticsearch": { "memory": "4400m" }
      }
    }

    This is all running on an EC2 T2.Large instance, with an Elastic IP mapped.

    • Graylog Version: v2.2.0+d9681cb
    • Elasticsearch Version: 2.4.2
    • MongoDB Version: 2.4.9
    • Operating System: Ubuntu 14.04.5 LTS
    • Browser version: tried current versions of Chrome, Firefox, and Microsoft Edge - all behave the same way.
    bug 
    opened by drinkyouroj 26
  • Possible Cause Of Memory Leak in Graylog 2 (v2.1.2)

    Hi,

    We have just started using Graylog 2 (2.1.2) on CentOS 7. I keep getting the below exception after a few hours:

    2016-11-24T17:25:13.737+05:30 WARN  [ProxiedResource] Unable to call http://10.101.160.95:9000/api/system on node <b8d88e71-dbbb-44ee-9951-2bcdb45b047a>
    java.net.SocketTimeoutException: timeout
            at okio.Okio$3.newTimeoutException(Okio.java:210) ~[graylog.jar:?]
            at okio.AsyncTimeout.exit(AsyncTimeout.java:288) ~[graylog.jar:?]
            at okio.AsyncTimeout$2.read(AsyncTimeout.java:242) ~[graylog.jar:?]
            at okio.RealBufferedSource.indexOf(RealBufferedSource.java:325) ~[graylog.jar:?]
            at okio.RealBufferedSource.indexOf(RealBufferedSource.java:314) ~[graylog.jar:?]
            at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:210) ~[graylog.jar:?]
            at okhttp3.internal.http.Http1xStream.readResponse(Http1xStream.java:186) ~[graylog.jar:?]
            at okhttp3.internal.http.Http1xStream.readResponseHeaders(Http1xStream.java:127) ~[graylog.jar:?]
            at okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.java:53) ~[graylog.jar:?]
            at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92) ~[graylog.jar:?]
            at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:45) ~[graylog.jar:?]
            at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92) ~[graylog.jar:?]
            at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67) ~[graylog.jar:?]
            at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:109) ~[graylog.jar:?]
            at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92) ~[graylog.jar:?]
            at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67) ~[graylog.jar:?]
            at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93) ~[graylog.jar:?]
            at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92) ~[graylog.jar:?]
            at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:124) ~[graylog.jar:?]
            at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92) ~[graylog.jar:?]
            at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67) ~[graylog.jar:?]
            at org.graylog2.rest.RemoteInterfaceProvider.lambda$get$0(RemoteInterfaceProvider.java:59) ~[graylog.jar:?]
            at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92) ~[graylog.jar:?]
            at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67) ~[graylog.jar:?]
            at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:170) ~[graylog.jar:?]
            at okhttp3.RealCall.execute(RealCall.java:60) ~[graylog.jar:?]
            at retrofit2.OkHttpCall.execute(OkHttpCall.java:174) ~[graylog.jar:?]
            at org.graylog2.shared.rest.resources.ProxiedResource.lambda$null$0(ProxiedResource.java:76) ~[graylog.jar:?]
            at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_102]
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_102]
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_102]
            at java.lang.Thread.run(Thread.java:745) [?:1.8.0_102]
    Caused by: java.net.SocketTimeoutException: Read timed out
            at java.net.SocketInputStream.socketRead0(Native Method) ~[?:1.8.0_102]
            at java.net.SocketInputStream.socketRead(SocketInputStream.java:116) ~[?:1.8.0_102]
            at java.net.SocketInputStream.read(SocketInputStream.java:170) ~[?:1.8.0_102]
            at java.net.SocketInputStream.read(SocketInputStream.java:141) ~[?:1.8.0_102]
            at okio.Okio$2.read(Okio.java:138) ~[graylog.jar:?]
            at okio.AsyncTimeout$2.read(AsyncTimeout.java:238) ~[graylog.jar:?]
            ... 29 more
    2016-11-24T17:25:13.751+05:30 WARN  [jvm] [graylog-b8d88e71-dbbb-44ee-9951-2bcdb45b047a] [gc][old][19743][529] duration [12.4s], collections [1]/[12.6s], total [12.4s]/[10.4m], memory [3.7gb]->[3.7gb]/[3.8gb], all_pools {[young] [1.6gb]->[1.6gb]/[1.6gb]}{[survivor] [188.7mb]->[191.2mb]/[204.7mb]}{[old] [1.9gb]->[1.9gb]/[2gb]}
    2016-11-24T17:25:13.821+05:30 INFO  [BeatsCodec]  inside decodeMessagesRawMessage{id=52182873-b20e-11e6-9b98-14feb5ea6076, journalOffset=960249, codec=beats, payloadSize=4169, timestamp=2016-11-24T06:22:05.798Z, remoteAddress=/10.101.160.6:49324}
    

    The heap size set for Graylog2 is 4G. I noticed the heap usage jumps from 1.5G to 4G as soon as the exceptions are encountered, eventually leading to slower GCs, and the node finally ends up unresponsive.

    All the buffer processors are set to 1; the process buffer and output buffer are set to 32768.
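
    For reference, those knobs live in graylog.conf; the values described above would correspond to something like this (assuming the 32768 figure refers to the ring buffer size):

        processbuffer_processors = 1
        outputbuffer_processors = 1
        ring_size = 32768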

    Graylog is configured on a Core 2 Duo machine with 8GB RAM and is pumping data into 3 clustered Elasticsearch instances.

    bug triaged 
    opened by yogeshrao 25
  • Nonexistent aggregation event Group by Field causes error: Invalid format: "(Empty Value)"

    [HS 1357286194]

    Description:

    When a Group by Field for an Aggregation event definition does not exist on any log messages queried by the event, the following exception occurs when the correlation processor runs:

    Caused by: java.lang.IllegalArgumentException: Invalid format: "(Empty Value)"
    	at org.joda.time.format.DateTimeFormatter.parseDateTime(DateTimeFormatter.java:945) ~[joda-time-2.10.6.jar:2.10.6]
    
    Full stack trace:

        ERROR: org.graylog.events.processor.EventProcessorExecutionJob - Event processor failed to execute: Couldn't create events for: EventDefinitionDto{scope=DEFAULT, id=63b880b6c03bed191e30bc94, title=Aggregation, description=, priority=2, alert=false, config=AggregationEventProcessorConfig{type=aggregation-v1, query=*, queryParameters=[], streams=[000000000000000000000001], groupBy=[kljhkl], series=[AggregationSeries{id=avg-jkhkhj, function=AVG, field=Optional[jkhkhj]}], conditions=Optional[AggregationConditions{expression=Optional[Greater{expr=>, left=NumberReference{expr=number-ref, ref=avg-jkhkhj}, right=NumberValue{expr=number, value=10.0}}]}], searchWithinMs=15000, executeEveryMs=15000}, fieldSpec={}, keySpec=[], notificationSettings=EventNotificationSettings{gracePeriodMs=0, backlogSize=0}, notifications=[], storage=[Config{type=persist-to-streams-v1, streams=[000000000000000000000002]}]} (retry in 5000 ms)
        org.graylog.events.processor.EventProcessorException: Couldn't create events for: EventDefinitionDto{scope=DEFAULT, id=63b880b6c03bed191e30bc94, title=Aggregation, description=, priority=2, alert=false, config=AggregationEventProcessorConfig{type=aggregation-v1, query=*, queryParameters=[], streams=[000000000000000000000001], groupBy=[kljhkl], series=[AggregationSeries{id=avg-jkhkhj, function=AVG, field=Optional[jkhkhj]}], conditions=Optional[AggregationConditions{expression=Optional[Greater{expr=>, left=NumberReference{expr=number-ref, ref=avg-jkhkhj}, right=NumberValue{expr=number, value=10.0}}]}], searchWithinMs=15000, executeEveryMs=15000}, fieldSpec={}, keySpec=[], notificationSettings=EventNotificationSettings{gracePeriodMs=0, backlogSize=0}, notifications=[], storage=[Config{type=persist-to-streams-v1, streams=[000000000000000000000002]}]}
            at org.graylog.events.processor.EventProcessorEngine.execute(EventProcessorEngine.java:105) ~[classes/:?]
            at org.graylog.events.processor.EventProcessorExecutionJob.execute(EventProcessorExecutionJob.java:115) ~[classes/:?]
            at org.graylog.scheduler.JobExecutionEngine.executeJob(JobExecutionEngine.java:173) ~[classes/:?]
            at org.graylog.scheduler.JobExecutionEngine.lambda$handleTrigger$2(JobExecutionEngine.java:151) ~[classes/:?]
            at com.codahale.metrics.Timer.time(Timer.java:151) ~[metrics-core-4.1.9.jar:4.1.9]
            at org.graylog.scheduler.JobExecutionEngine.handleTrigger(JobExecutionEngine.java:151) ~[classes/:?]
            at org.graylog.scheduler.JobExecutionEngine.lambda$execute$0(JobExecutionEngine.java:120) ~[classes/:?]
            at org.graylog.scheduler.worker.JobWorkerPool.lambda$execute$0(JobWorkerPool.java:121) ~[classes/:?]
            at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:180) [metrics-core-4.1.9.jar:4.1.9]
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
            at com.codahale.metrics.InstrumentedThreadFactory$InstrumentedRunnable.run(InstrumentedThreadFactory.java:66) [metrics-core-4.1.9.jar:4.1.9]
            at java.lang.Thread.run(Thread.java:833) [?:?]
        Caused by: java.lang.IllegalArgumentException: Invalid format: "(Empty Value)"
            at org.joda.time.format.DateTimeFormatter.parseDateTime(DateTimeFormatter.java:945) ~[joda-time-2.10.6.jar:2.10.6]
            at org.joda.time.DateTime.parse(DateTime.java:160) ~[joda-time-2.10.6.jar:2.10.6]
            at org.joda.time.DateTime.parse(DateTime.java:149) ~[joda-time-2.10.6.jar:2.10.6]
            at org.graylog.events.processor.aggregation.PivotAggregationSearch.extractValues(PivotAggregationSearch.java:322) ~[classes/:?]
            at org.graylog.events.processor.aggregation.PivotAggregationSearch.doSearch(PivotAggregationSearch.java:161) ~[classes/:?]
            at org.graylog.events.processor.aggregation.AggregationEventProcessor.aggregatedSearch(AggregationEventProcessor.java:234) ~[classes/:?]
            at org.graylog.events.processor.aggregation.AggregationEventProcessor.createEvents(AggregationEventProcessor.java:124) ~[classes/:?]
            at org.graylog.events.processor.EventProcessorEngine.execute(EventProcessorEngine.java:91) ~[classes/:?]
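
    The root cause is easy to reproduce in isolation: the aggregation search hands the literal placeholder string "(Empty Value)" to Joda-Time as if it were a timestamp. A minimal repro, assuming joda-time on the classpath:

        import org.joda.time.DateTime;

        public class ParseRepro {
            public static void main(String[] args) {
                // Throws java.lang.IllegalArgumentException: Invalid format: "(Empty Value)"
                // exactly as in the stack trace above.
                DateTime.parse("(Empty Value)");
            }
        }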

    Possible Solution

    Steps to Reproduce (for bugs)

    This issue can be reproduced by creating logs with the Random HTTP message generator input and creating the following aggregation event definition. Note the field-does-not-exist group by field.

    image

    Workaround

    Specify a Group by Field that exists on the log messages queried by the aggregation.

    Your Environment

    • Graylog Version: 5.0.2
    • Java Version:
    • Elasticsearch Version: 7.10.2
    • MongoDB Version: 5.0.14
    • Operating System:
    • Browser version:
    bug 
    opened by danotorrey 0
  • Bump dompurify from 2.4.1 to 2.4.2 in /graylog2-web-interface

    Bumps dompurify from 2.4.1 to 2.4.2.

    Release notes

    Sourced from dompurify's releases.

    DOMPurify 2.4.2

    • Fixed a Trusted Types sink violation with empty input and NAMESPACE, thanks @tosmolka
    • Fixed a Prototype Pollution issue discovered and reported by @kevin-mizu
    Commits
    • d1dd037 fix: Fixed a prototype pollution bug reported by @kevin_mizu
    • 24d2a7f Merge pull request #748 from tosmolka/tosmolka/747
    • 7de86a0 Fix formatting
    • 191cc00 Fix Trusted Types Sink violation with empty input and NAMESPACE
    • 4945074 Merge pull request #745 from cure53/dependabot/npm_and_yarn/qs-and-body-parse...
    • 7e9fcd9 build(deps): bump qs and body-parser
    • 2734b2d Merge pull request #737 from cure53/dependabot/npm_and_yarn/engine.io-and-soc...
    • f3b68d9 build(deps): bump engine.io and socket.io
    • 9a751e4 Merge pull request #732 from Pomierski/patch-1
    • 2c03b6c fix
    • Additional commits viewable in compare view

    Dependabot compatibility score

    You can trigger a rebase of this PR by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 0
  • Test Lookup does not allow uppercase letters - cannot test case sensitive lookups

    When using the 'Test lookup' functionality via the details screen of a Lookup table, it is not possible to input an uppercase letter. This prevents testing lookups that are stored using uppercase letters.

    image

    Expected Behavior

    Able to test lookups regardless of casing.

    Current Behavior

    Cannot retrieve lookup results if the key contains an uppercase letter.

    Possible Solution

    This appears to be set on the input form:

    image

    Steps to Reproduce (for bugs)

    1. View details of a lookup table
    2. Attempt to type a capital letter in the test lookup key input box

    Context

    This makes testing lookup results impossible if the key has an uppercase letter.

    Your Environment

    • Graylog Version: 5.0.2
    • Java Version: (Bundled JVM 17.0.5)
    • Elasticsearch Version: OpenSearch 2.4.1
    • MongoDB Version: 5.0.14
    • Operating System: Ubuntu Server 20.04 LTS
    • Browser version: Chrome 108.0.5359.124
    bug 
    opened by drewmiranda-gl 0
  • Separate concerns in `PaginatedList` to simplify state management.

    Description

    Motivation and Context

    With this PR we are simplifying the state management in the PaginatedList component. The component can maintain the state for the active page and page size in two different ways:

    1. It can have its own React state
    2. Or it can use the URL query params to maintain the state

    Before this PR, the component contained all the necessary logic for both use cases. Even when the state was based on the URL query params, it still maintained its own state. This resulted in comparably complex state management.

    The PaginatedList component now contains the stateless ListBase component, which receives the active page and page size. In case 1 we render a wrapper component which maintains a state for the active page and page size and passes it to the ListBase component. In case 2 we render a wrapper component which extracts the page and page size from the URL and passes them to the ListBase component.

    With this approach the change in https://github.com/Graylog2/graylog2-server/pull/14141 is no longer necessary.

    One thought I had while working on the state management: in case 1 we currently often maintain two states, one outside of the PaginatedList and one which is part of the component. In my opinion we can remove the internal PaginatedList state to simplify the state management even more.

    How Has This Been Tested?

    Screenshots (if appropriate):

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [x] Refactoring (non-breaking change)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    /nocl

    opened by linuspahl 0
  • remove single-quotes from jvm.memory metric names

    Single quotes are removed from all jvm.memory.* metric names because they make it difficult to design a match_pattern for Prometheus Exporter Custom Mappings.
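
    For context, a custom mapping in the Prometheus exporter's mapping file matches on the dotted metric name, which is where stray quotes get in the way; a sketch of such a mapping (keys per the exporter's custom-mapping format, to the best of my knowledge):

        metric_mappings:
          - metric_name: "jvm_memory_pools_used"
            match_pattern: "jvm.memory.pools.*.used"
            wildcard_extract_labels:
              - "pool"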

    Motivation and Context

    fixes Graylog2/graylog-plugin-enterprise#4394

    Types of changes

    • [X] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Refactoring (non-breaking change)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    opened by AntonEbel 0
Releases (0.21.0-beta4)
A high-performance replicated log service. (Development has moved to the Apache Incubator.)

Apache DistributedLog (DL) is a high-throughput, low-latency replicated log service, offering durability and replication.

Twitter 2.2k Dec 29, 2022
Log annotation for logging frameworks

Herald "Why, sometimes I've believed as many as six impossible things before breakfast." - Lewis Carroll, Alice in Wonderland. Herald provides a very

Vladislav Bauer 71 Dec 21, 2022
Highly efficient garbage-free logging framework for Java 8+

Garbage Free Log: a highly efficient garbage-free logging framework for Java 8+.

EPAM Systems 37 Dec 12, 2022
Echopraxia - Java Logging API with clean and simple structured logging and conditional & contextual features. Logback implementation based on logstash-logback-encoder.

Echopraxia is a Java logging API designed around structured logging, rich context, and conditional logging. There is a Logback-based implementation built on logstash-logback-encoder.

Terse Systems 43 Nov 30, 2022
A Java library that facilitates reading, writing and processing of sensor events and raw GNSS measurements encoded according to the Google's GNSS Logger application format.

google-gnss-logger: this library facilitates reading, writing and processing of sensor events and raw GNSS measurements encoded according to the Google GNSS Logger application format.

Giulio Scattolin 5 Dec 21, 2022
An extensible Java library for HTTP request and response logging

Logbook: HTTP request and response logging. Logbook, noun, /lɑɡ bʊk/: a book in which measurements from the ship's log are recorded, along with other salient details of the voyage.

Zalando SE 1.3k Dec 29, 2022
P6Spy is a framework that enables database data to be seamlessly intercepted and logged with no code changes to the application.

P6Spy is a framework that enables database data to be seamlessly intercepted and logged with no code changes to the existing application.

p6spy 1.8k Dec 27, 2022
Best-of-breed OpenTracing utilities, instrumentations and extensions

OpenTracing Toolbox is a collection of libraries that build on top of OpenTracing and provide extensions and plugins to existing instrumentations.

Zalando SE 181 Oct 15, 2022
Logstash - transport and process your logs, events, or other data

Logstash is part of the Elastic Stack, along with Beats, Elasticsearch and Kibana. Logstash is a server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite "stash".

elastic 13.2k Jan 5, 2023
The reliable, generic, fast and flexible logging framework for Java.

About logback: thank you for your interest in logback, the reliable, generic, fast and flexible logging library for Java. The logback documentation can be found on the project website.

QOS.CH Sarl 2.6k Jan 7, 2023
tinylog is a lightweight logging framework for Java, Kotlin, Scala, and Android

tinylog 2 example:

    import org.tinylog.Logger;

    public class Application {
        public static void main(String[] args) {
            Logger.info("Hello World!");
        }
    }

tinylog.org 547 Dec 30, 2022
Logging filters for Spring WebFlux client and server request/responses

webflux-log: logging filters for Spring WebFlux client and server requests/responses.

null 10 Nov 29, 2022
PortalLogger - Logs portals into a text file and in chat

Logs portals into a text file and in chat. Useful if AFK flying under bedrock. Feel free to add it to your client. The logs are stored in .minecraft/ARTEMIS/PortalLogger.

null 7 Dec 2, 2022
Java-Trading-Log-Project - A Trading Log to Journal Your Trades.

Abhi's Project - Trading Log. Trading background: I am very passionate about trading and have been studying the financial markets for a few years.

Abhigyan Dabla 0 Jul 18, 2022
Sourcetrail - free and open-source interactive source explorer

Sourcetrail is a free and open-source cross-platform source explorer that helps you get productive on unfamiliar source code.

Coati Software 13.2k Jan 5, 2023