zach-taylor/splunk_handler 36

Python logging handler for sending logs to Splunk Enterprise

sullivanmatt/gae_requests_mtls_demo 2

A small example of requesting remote HTTPS resources authenticated with mutual TLS (MTLS) on GAE

sullivanmatt/json-ups 2

A simple Python script which POSTs JSON data derived from the Linux upsc application.

sullivanmatt/splunk_handler 2

Python logging handler for sending logs to Splunk Enterprise

sullivanmatt/BadTwitterClone 1

An insecure website for learning to hack

sullivanmatt/cloudflare-ddns 1

Script for dynamically updating a CloudFlare DNS record.

sullivanmatt/python-json-logger 1

JSON formatter for the standard Python logger

sullivanmatt/rancher 1

Complete container management platform

sullivanmatt/Raspberry-Pwn 1

A Raspberry Pi pentesting suite by Pwnie Express

sullivanmatt/splunk-sdk-python 1

Splunk Software Development Kit for Python

push event zach-taylor/splunk_handler

Zach Taylor

commit sha 2cafe514c75e7ff43de758b17a5f971a2fae9bf8

Create tests.yml First attempt at migrating ci to github actions

push time in a day

issue opened zach-taylor/splunk_handler

Remove python 3.4 support

Going to consider this a non-breaking change. The code should continue to work on 3.4, but we don't want to officially support it anymore, since it's EOL.

created time in a day

issue opened zach-taylor/splunk_handler

Add python 3.9 support

created time in a day

push event zach-taylor/splunk_handler

Zach Taylor

commit sha 9340801092ec5025742b6e1e3ec4ed60de77b80f

Create codeql-analysis.yml

push time in a day

push event zach-taylor/splunk_handler

AetherDeity

commit sha 210a0c83afac167a33ae83ebea72cd92eb293013

Release 2.2.1

AetherDeity

commit sha d6a77e5453e6ca074b3b603f07a2d8ee5b7e7a83

Merge pull request #42 from zach-taylor/release_2_2_1 Release 2.2.1

push time in 2 months

pull request comment zach-taylor/splunk_handler

fix dup log race condition on shutdown

OK, I will merge and cut a release.

AetherDeity

comment created time in 2 months

PR merged zach-taylor/splunk_handler

Release 2.2.1
+1 -1

0 comments

1 changed file

pr created time in 2 months

created tag zach-taylor/splunk_handler

tag v2.2.1

Python logging handler for sending logs to Splunk Enterprise

created time in 2 months

push event zach-taylor/splunk_handler

Jeffrey Melvin

commit sha a53746b7ab87da50e982f1fb904ba4773fd1fe47

fix dup log race condition on shutdown

Jeffrey Melvin

commit sha 5f696fd8286bc2c0372a905b52a19066d34a278d

fix error in comments

AetherDeity

commit sha 0cf5bad68ea9d52cef490422741e4739a6871540

Merge pull request #40 from AetherDeity/fix_dup_log_race_condition fix dup log race condition on shutdown

push time in 2 months

create branch zach-taylor/splunk_handler

branch: release_2_2_1

created branch time in 2 months

release zach-taylor/splunk_handler

v2.2.1

released time in 2 months

PR merged zach-taylor/splunk_handler

fix dup log race condition on shutdown

Problem

wait_until_empty allows the code to wait until the queue has been emptied. However, there is still a race condition between the response from the Splunk server and the execution of shutdown.

The instance's shutdown() calls _splunk_worker(), which will call empty_queue(). If the connection to Splunk is still open from the Timer call that emptied the queue, the Timer thread will not yet have cleared self.log_payload, which can result in the shutdown resending the payload, because empty_queue() only appends to self.log_payload.

Solution

There are multiple options:

  1. Have _splunk_worker() check the queue size. This was not done, as it doesn't align with the option to send a payload as an arg to the function.
  2. Have shutdown() send the payload in the _splunk_worker() call.
  3. Clear self.log_payload as soon as it is read, instead of after it is received by Splunk. The payload is already thrown away on error regardless, so this seemed like the best option.

Additionally, removed the flush_interval adjustment made in wait_until_empty(), as the timer is already adjusted in _splunk_worker() when there is content in the queue after completing a call to Splunk.
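
A minimal sketch of the chosen fix (option 3) against a simplified, hypothetical model of the handler; only the attribute and method names mirror the description above:

    import threading

    class HandlerSketch:
        # Simplified stand-in for the real handler, not the actual implementation.
        def __init__(self):
            self.queue = []          # raw log records awaiting batching
            self.log_payload = ''    # batched events awaiting send
            self.lock = threading.Lock()

        def empty_queue(self):
            # Drain queued records into the pending payload (append-only).
            with self.lock:
                self.log_payload += ''.join(self.queue)
                del self.queue[:]

        def _splunk_worker(self, payload=None):
            if payload is None:
                self.empty_queue()
                # The fix: read and clear the payload in one step, so a
                # concurrent shutdown() cannot pick up and resend the same batch.
                with self.lock:
                    payload, self.log_payload = self.log_payload, ''
            if payload:
                self._send_to_splunk(payload)

        def _send_to_splunk(self, payload):
            pass  # placeholder for the HTTP POST to the Splunk HEC endpoint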

@zach-taylor

+3 -8

1 comment

1 changed file

AetherDeity

pr closed time in 2 months

issue opened zach-taylor/splunk_handler

Can you make index optional?

Hi

When configuring an HTTP Event Collector (HEC) in Splunk, a default index must be specified, which gets used if no "index" is specified in the HEC request.

Given this, it would be useful to make the index optional in Splunk Handler. That way, if a change is needed on the back-end (i.e. migrating to a different index), it avoids the need to also update the code.
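
As a sketch of what this could look like at the request level, a hypothetical payload builder; the field names follow Splunk's HEC event format:

    def build_hec_payload(event, index=None):
        # Omitting "index" entirely lets the HEC token's default index apply.
        payload = {'event': event}
        if index is not None:
            payload['index'] = index
        return payload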

created time in 2 months

PR opened zach-taylor/splunk_handler

fix dup log race condition on shutdown

Problem

wait_until_empty allows the code to wait until the queue has been emptied. However, there is still a race condition between the response from the Splunk server and the execution of shutdown.

The instance's shutdown() calls _splunk_worker(), which will call empty_queue(). If the connection to Splunk is still open from the call that emptied the queue, the Timer thread will not yet have cleared self.log_payload, which can result in the shutdown resending the payload, because empty_queue() only appends to self.log_payload.

Solution

There are multiple options:

  1. Have _splunk_worker() check the queue size. This was not done, as it doesn't align with the option to send a payload as an arg to the function.
  2. Have shutdown() send the payload in the _splunk_worker() call.
  3. Clear self.log_payload as soon as it is read, instead of after it is received by Splunk. There does not appear to be any preservation of this data if the call to Splunk fails, so it seemed like the best option.

Additionally, removed the flush_interval adjustment made in wait_until_empty(), as the timer is already adjusted in _splunk_worker() when there is content in the queue after completing a call to Splunk.
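
To make the race concrete, a toy reproduction of the overlap described above; threading.Timer stands in for the handler's flush timer, and all names are hypothetical:

    import threading
    import time

    log_payload = []     # stands in for self.log_payload
    lock = threading.Lock()
    sent = []            # what "Splunk" received

    def flush():
        # Buggy ordering: read the payload, simulate a slow send,
        # and only clear it after the response arrives.
        with lock:
            batch = list(log_payload)
        time.sleep(0.5)  # connection to Splunk still open...
        sent.append(batch)
        with lock:
            del log_payload[:len(batch)]

    log_payload.append('event-1')
    timer = threading.Timer(0.0, flush)  # the interval flush
    timer.start()
    time.sleep(0.1)
    flush()      # shutdown() flushing concurrently
    timer.join()
    print(sent)  # 'event-1' appears in both batches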

@zach-taylor

+2 -7

0 comments

1 changed file

pr created time in 2 months

issue closed zach-taylor/splunk_handler

Using with AWS Lambda

Trying to get this to work with Lambda. I read your notes on the matter and implemented them. However, I am pretty sure I messed it up badly, as it's not working. I made sure to install the splunk handler with pip to my project's directory along with the Python script. I then uploaded the .zip and don't see any errors about importing anything, so that is all fine. My problem is that when I test, I get a "Task timed out after 3.00 seconds". I am assuming it is my fault, as I am not that great with Python yet. Any guidance would be greatly appreciated.

from splunk_handler import SplunkHandler
from splunk_handler import force_flush
import logging

def lambda_handler(event, context):
    splunk = SplunkHandler(
        host='<my_splunk_instance>',
        port='8088',
        token='<my_splunk_token>',
        index='test_index'
        #hostname='hostname', # manually set a hostname parameter, defaults to socket.gethostname()
        #source='source', # manually set a source, defaults to the log record.pathname
        #sourcetype='sourcetype', # manually set a sourcetype, defaults to 'text'
        #verify=True, # turn SSL verification on or off, defaults to True
        #timeout=60, # timeout for waiting on a 200 OK from Splunk server, defaults to 60s
        #flush_interval=15.0, # send batch of logs every n sec, defaults to 15.0, set '0' to block thread & send immediately
        #queue_size=5000, # a throttle to prevent resource overconsumption, defaults to 5000
        #debug=False, # turn on debug mode; prints module activity to stdout, defaults to False
        #retry_count=5, # Number of retry attempts on a failed/erroring connection, defaults to 5
        #retry_backoff=2.0,  # Backoff factor, default options will retry for 1 min, defaults to 2.0
    )

    logging.getLogger('').addHandler(splunk)

    logging.warning('hello!')
    force_flush()  # Flush logs in a blocking manner
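
For what it's worth, going only by the defaults documented in the snippet: the handler's 60 s response timeout and retry backoff can each exceed Lambda's 3.00 s task timeout when the Splunk endpoint is slow or unreachable. A diagnostic variant with hypothetical values, to fail fast instead of timing out:

    splunk = SplunkHandler(
        host='<my_splunk_instance>',
        port='8088',
        token='<my_splunk_token>',
        index='test_index',
        flush_interval=0,  # per the comment above: block the thread and send immediately
        timeout=2,         # fail fast, staying under Lambda's 3 s limit
        retry_count=1,     # diagnostic values, not production settings
    )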

closed time in 3 months

TheChedda

issue closed zach-taylor/splunk_handler

Using with AWS Lambda that is triggered by API Gateway

Most of my lambdas are triggered by API Gateway, so my handlers end with something like:

    return {'statusCode': 200,
            'body': json.dumps(data),
            'headers': {'Content-Type': 'application/json'}}

Should I put force_flush() before the return? Wouldn't that impact my API performance?

closed time in 3 months

JonHolman

issue comment zach-taylor/splunk_handler

Using with AWS Lambda that is triggered by API Gateway

@JonHolman, it could have a potential impact if there are outstanding logs waiting to be sent. The alternative is to return and hope all the logs were submitted before the environment is terminated, which occurs once you return. I would recommend just using CloudWatch in lieu of the latter.
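
If flushing before the return is the route taken anyway, a minimal sketch reusing force_flush from the snippet in the earlier issue; handler setup is omitted:

    import json
    import logging

    from splunk_handler import force_flush

    def lambda_handler(event, context):
        logging.warning('handling request')
        force_flush()  # blocks until queued logs are sent; adds latency to every response
        return {'statusCode': 200,
                'body': json.dumps({'ok': True}),
                'headers': {'Content-Type': 'application/json'}}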

JonHolman

comment created time in 3 months
