chrisdlangton, Australia. Repos moved to GitLab: http://gitlab.langton.cloud

chrisdlangton/pages.js 22

PagesJS JavaScript Micro-Framework for Single-page Apps

chrisdlangton/node-tickets 19

Issue, Milestone, and Task tracker built on NodeJS

chrisdlangton/docker-phaser 15

A quick and easy Phaser environment for anywhere Docker runs

chrisdlangton/php-stream-socket-server 8

Provides a bootstrappable server for WebSockets

chrisdlangton/persistJS 3

Automatically Persist Forms across page refreshes

chrisdlangton/sm.js 3

Social Markup JavaScript Library for Single-page Apps

chrisdlangton/Open-Group-Chat 2

Chat between PCs openly. Nothing is logged anywhere. Ever.

chrisdlangton/php-functional-programming 2

Building blocks to enable true functional programming in PHP

issue comment drwetter/testssl.sh

Is it possible to test for TLS Raccoon?

I came here looking for such a thing ;)

chrisdlangton

comment created time in a day

issue opened drwetter/testssl.sh

Is it possible to test for TLS Raccoon?

It is a side channel, so maybe not. Info: https://hackaday.com/2020/09/11/security-this-week-racoons-in-my-tls-bypassing-frontends-and-obscurity/

created time in 2 days

issue comment sendgrid/sendgrid-python

Add support for Proxies

https://github.com/sendgrid/python-http-client/issues/146

pranitbauva1997

comment created time in 6 days

issue opened sendgrid/python-http-client

Proxy support required for use in private subnets without internet egress (PCI, IRAP, ISM, SOC2, ISO27018)

This message might be better suited to the issues of the repos that depend on this lib, because from my observation this lib could be removed as redundant.

I cannot use SendGrid to send email from a web server that sits in a private subnet. The web server is behind a load balancer for ingress and requires a proxy (with an allowed-hosts list) for egress.

This is a service that must remain compliant, so best-practice security is required: PCI, IRAP, ISM, SOC2, ISO27018.

I suggest using urllib for HTTP instead of reinventing the wheel; if you need more sugar, use requests instead of reinventing the aircraft. Both allow a per-request proxy setting and, most importantly, both respect the operating system proxy settings (on Linux, the http_proxy and https_proxy environment variables, case insensitive).
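As a rough sketch (not an official SendGrid snippet; the proxy URL is a placeholder and the payload shape is left to the caller), this is what per-request proxy support looks like with requests, which also picks up HTTP_PROXY/HTTPS_PROXY from the environment by default:

import os
import requests

# Placeholder egress proxy; in practice this would come from the environment or config
PROXY_URL = os.getenv("HTTPS_PROXY", "http://egress-proxy.internal:3128")

def send_mail_via_proxy(api_key: str, payload: dict) -> requests.Response:
    # requests honours HTTP_PROXY/HTTPS_PROXY by default (trust_env is True);
    # the proxies= argument overrides them for this single request
    return requests.post(
        "https://api.sendgrid.com/v3/mail/send",
        json=payload,
        headers={"Authorization": f"Bearer {api_key}"},
        proxies={"https": PROXY_URL},
        timeout=10,
    )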

I would appreciate some attention on this, because it is hard to see why you decided to bypass Python's own HTTP tooling and implement it yourselves.

thank you

created time in 7 days

issue opened fugue/regula

Beware: the CIS rules are not aligned to CIS

Hello maintainers.

Please consider addressing the CIS AWS Foundations Benchmark 1.3.0 correctly. You reference control 1.22 in the README and in the rule defined here: https://github.com/fugue/regula/blob/master/rules/aws/iam_admin_policy.rego

It seems you have not checked what the intent of CIS 1.22 actually is, because the rule looks for an "overly permissive" policy (which is a great rule to have, it just is not CIS 1.22).

You can view this recommendation here: https://workbench.cisecurity.org/sections/43739/recommendations/939514

Here is a copy from the site, in case you have not registered for an account:

1.22 Ensure IAM users are managed centrally via identity federation or AWS Organizations for multi-account environments

Scoring Status: Manual
Applicable Profiles: Level 2

Description
In multi-account environments, IAM user centralization facilitates greater user control. User access beyond the initial account is then provided via role assumption. Centralization of users can be accomplished through federation with an external identity provider or through the use of AWS Organizations.

Rationale Statement
Centralizing IAM user management to a single identity store reduces complexity and thus the likelihood of access management errors.

Audit Procedure
For multi-account AWS environments with an external identity provider...

1. Determine the master account for identity federation or IAM user management
2. Login to that account through the AWS Management Console
3. Click Services
4. Click IAM
5. Click Identity providers
6. Verify the configuration

Then..., determine all accounts that should not have local users present. For each account...

1. Determine all accounts that should not have local users present
2. Log into the AWS Management Console
3. Switch role into each identified account
4. Click Services
5. Click IAM
6. Click Users
7. Confirm that no IAM users representing individuals are present

For multi-account AWS environments implementing AWS Organizations without an external identity provider...

1. Determine all accounts that should not have local users present
2. Log into the AWS Management Console
3. Switch role into each identified account
4. Click Services
5. Click IAM
6. Click Users
7. Confirm that no IAM users representing individuals are present

Remediation Procedure
The remediation procedure will vary based on the individual organization's implementation of identity federation and/or AWS Organizations, with the acceptance criteria that no non-service IAM users, and non-root accounts, are present outside the account providing centralized IAM user management.

CIS Controls
Version 7
16.2: Configure Centralized Point of Authentication

As you can see, the intent is federation via an identity provider (like Okta, Auth0, Azure AD, JumpCloud, etc.), and I would suggest that AWS SSO or Cognito are equally acceptable solutions for this rule.

Happy to help you with any other CIS interpretations; the rule basically speaks for itself, but do not hesitate to ask.

This "overly permissive" rule is actually CIS 1.16, which, strangely, you already have assigned to an EBS rule:

resource_type = "aws_ebs_volume"
controls = {"CIS_1-16"}

You can renumber 1.22 as 1.16 to correct the first issue.

EBS is storage, and the CIS benchmark is broken into 5 categories where Storage is category 2, so an EBS rule should be numbered 2.xx, not 1.xx.

Again, happy to help with CIS interpretations, but as far as security standards go I'd say they are extremely simple and very well documented.

created time in a month

fork chrisdlangton/FunctionShield

A Serverless Security Library for Developers. Regain Control Over Your AWS Lambda & Google Cloud Functions Runtimes.

fork in a month

issue closed boto/boto3

S3 client causes Segmentation fault when used with mysql.connector

my env

boto3==1.14.28
botocore==1.17.28
mysql-connector-python==8.0.21
Python 3.8.4 [GCC 8.3.0] on docker `python:3.8-slim-buster`

Reproduce easily

import boto3
from mysql.connector import errorcode

aws_session = boto3.Session(region_name='ap-southeast-2')
sqs_client = aws_session.resource('sqs')
print('sqs')
s3_client = aws_session.client('s3')
print('s3')
print(s3_client.list_buckets())

produces

sqs
Segmentation fault (core dumped)

Comment out `from mysql.connector import errorcode` and it produces the desired results:

sqs
s3
{'ResponseMetadata': ...
... etc

Any chance we can make boto3 work with the mysql.connector considering it is needed for RDS/Aurora?

I have naively ruled out the bug being a problem with mysql.connector using a process of elimination: my existing app worked flawlessly using mysql.connector; I later added a boto3 session with resource('sqs') and it worked for weeks; then I added S3 and encountered the segmentation fault. The reproduction shows SQS working while S3 fails, which leads me to believe that boto3 generally works well with mysql.connector, that the two projects are not incompatible in general, and that the S3 client code path is the root cause that introduced the segfault.
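As a side note, a minimal sketch for localising the crash (this relies only on the standard library's faulthandler module, available since Python 3.3, and is not part of the original repro):

import faulthandler
faulthandler.enable()  # dump the Python traceback if the interpreter receives SIGSEGV

import boto3
from mysql.connector import errorcode  # the import that appears to trigger the fault

aws_session = boto3.Session(region_name='ap-southeast-2')
s3_client = aws_session.client('s3')     # per the output above, client creation is where "s3" never prints
print(s3_client.list_buckets())          # the faulthandler dump shows which frame was active at the fault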

thanks

closed time in a month

chrisdlangton

issue comment boto/boto3

S3 client causes Segmentation fault when used with mysql.connector

And in case you did not read it in the first message: boto3 with SQS (and many other resources) works, but there is a bug somewhere in boto3's S3 handling.

chrisdlangton

comment created time in a month

issue comment boto/boto3

S3 client causes Segmentation fault when used with mysql.connector

This is the Docker image:

FROM python:3.8-slim-buster

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        build-essential \
        zlib1g-dev \
        libssl-dev \
        wget \
        unzip \
        ldnsutils \
        logrotate && \
    apt-get autoremove -y && \
    apt-get clean && \
    rm -rf /tmp/* /var/lib/apt/lists/*

RUN pip install -q --no-cache-dir --isolated -U pip setuptools wheel

ENTRYPOINT ["/usr/bin/env"]

It was used to reproduce the steps above, so if you ran the reproduction Python code I supplied in this image, your environment and mine could be identical. Otherwise your one test is the exception environment against my four test environments (a DigitalOcean host, an AWS EC2 host, and two hosts at home), all with identical outcomes.

By the way, as you can see, there is NO MySQL usage, just an import. So your advice to debug an import is not helpful at all; there is nothing to debug.

chrisdlangton

comment created time in a month

issue comment boto/boto3

S3 client causes Segmentation fault when used with mysql.connector

I have reproduced it on a DigitalOcean host, an EC2 host, and two hosts at home.

Again, there is NO MySQL usage, just an import; your advice does not apply.

chrisdlangton

comment created time in a month

issue comment boto/boto3

Boto3 causing segmentation fault

I am also seeing the segfault in #2534, but only when using S3. In my testing I went back to Python 3.7 and saw the error still existed, so I ruled out it being a Python bug and didn't go back further.

I think there might be a commonality somewhere in the boto3 session code paths, if we assume the two issues are connected.
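A rough elimination sketch along those lines (the service list here is arbitrary, and it assumes the same environment as the repro in #2534; it is not from the original comment):

import boto3
from mysql.connector import errorcode  # the import that appears to interact badly

session = boto3.Session(region_name='ap-southeast-2')
# Create one client per service and watch which creation faults;
# per the earlier output, no API call is needed to reach the crashing path.
for service in ('sqs', 'sns', 'dynamodb', 's3'):
    session.client(service)
    print(f'{service}: client created OK')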

JasonXJ

comment created time in a month

issue opened boto/boto3

S3 client causes Segmentation fault when used with mysql.connector

my env

boto3==1.14.28
botocore==1.17.28
mysql-connector-python==8.0.21
Python 3.8.4 [GCC 8.3.0] on docker `python:3.8-slim-buster`

Reproduce easily

import boto3
from mysql.connector import errorcode

aws_session = boto3.Session(region_name='ap-southeast-2')
sqs_client = aws_session.resource('sqs')
print('sqs')
s3_client = aws_session.client('s3')
print('s3')
print(s3_client.list_buckets())

produces

sqs
Segmentation fault (core dumped)

Comment out `from mysql.connector import errorcode` and it produces the desired results:

sqs
s3
{'ResponseMetadata': ...
... etc

Any chance we can make boto3 work with the mysql.connector considering it is needed for RDS/Aurora?

thanks

created time in a month

fork chrisdlangton/sonarqube

OWASP SonarQube Project

fork in 2 months

issue comment miguelgrinberg/python-socketio

SIGINT handler does not disconnect client

Zombie maker;

import atexit
import logging
import json
import sys
import socketio


log = logging.getLogger(__name__)
sio = socketio.Client()
atexit.register(sio.disconnect)

@sio.event
def connect():
    log.info("connected")

@sio.event
def connect_error():
    log.info("connection failed")

@sio.event
def disconnect():
    log.info("disconnected")

def not_the_real_send_event(event: str, data: dict, host: str):
    if not sio.connected:
        sio.connect(host)
    sio.emit(event, json.dumps(data, sort_keys=True, default=str))
 
if __name__ == "__main__":
    # Do stuff that will use not_the_real_send_event()
    log.info(f'Finished')
    sys.exit(0)

This produced a connected output and the expected script outputs, and finally a Finished, which should be the end; however the process remains active. You can force external connectivity to be lost (by cycling the socket server or taking down the interface), which causes the zombie to produce a disconnected output followed by connected when connectivity is re-established. These occur after the exit:

connected
# do stuff that has outputs
Finished
disconnected
connected
disconnected

The fix was simply removing atexit and handling the signals explicitly:

import logging
import json
import sys
import signal
import socketio


log = logging.getLogger(__name__)
sio = socketio.Client()

@sio.event
def connect():
    log.info("connected")

@sio.event
def connect_error():
    log.info("connection failed")

@sio.event
def disconnect():
    log.info("disconnected")

def not_the_real_send_event(event: str, data: dict, host: str):
    if not sio.connected:
        sio.connect(host)
    sio.emit(event, json.dumps(data, sort_keys=True, default=str))

def signal_handler(signum, stack_frame):
    message = f'Signal handler called with signal {signum}'
    log.warning(message)
    log.debug(stack_frame)
    sio.disconnect()
    sys.exit(0)

signal.signal(signal.SIGQUIT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
signal.signal(signal.SIGTSTP, signal_handler) # ctrl+z
signal.signal(signal.SIGINT, signal_handler) # ctrl+c
 
if __name__ == "__main__":
    # Do stuff that will use not_the_real_send_event()
    sio.disconnect()
    log.info(f'Finished')
    sys.exit(0)

However, if there is an unhandled signal and no sio.disconnect() call, we are a zombie again.

Do not feel defensive about my comments; they are observations, and valid ones, because they were made while following the getting-started guidance. I will not be alone in this, although any mature project would handle signals as I ended up doing anyway, so it might not be a widely reported or well understood problem.

asnelzin

comment created time in 2 months

issue comment miguelgrinberg/python-socketio

SIGINT handler does not disconnect client

I also noticed this in my project.

I tried fixing it using some process managers: supervisord, monit, circusd. It took 9 weeks of after-work time before I finally realised my bug was a little more involved than your report might make it seem.

I had immediately started using the atexit.register builtin for Client.disconnect, and forgot all this time that I introduced it after noticing the bug!

Long story short: this library is not signal aware, and it is incompatible with the atexit Python builtin (using atexit actually turns your process into a zombie).

You need to do 2 things:

  1. Add some signal handlers;

import logging
import signal
import sys

import socketio
from retry.api import retry

log = logging.getLogger(__name__)
sio = socketio.Client()

@retry(Exception, tries=15, delay=1.5, backoff=3)
def close_socket():
    sio.disconnect()

def signal_handler(signum, _frame):
    log.warning(f'Signal handler called with signal {signum}')
    # do things
    close_socket()
    sys.exit(0)

if __name__ == "__main__":
    signal.signal(signal.SIGQUIT, signal_handler)
    signal.signal(signal.SIGTERM, signal_handler)
    signal.signal(signal.SIGTSTP, signal_handler) # ctrl+z
    signal.signal(signal.SIGINT, signal_handler)  # ctrl+c
    # do things
    close_socket()
    sys.exit(0)
  2. Make sure every single possible exit location calls close_socket (because atexit will zombie this library); a try/finally sketch of this follows below.
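For example, a minimal sketch of point 2 (the helper and handler names here are illustrative, not part of the library): wrapping the work in try/finally means the disconnect runs on normal returns, on unhandled exceptions, and on sys.exit() raised from a signal handler alike;

import logging
import signal
import sys

import socketio

log = logging.getLogger(__name__)
sio = socketio.Client()

def close_socket():
    sio.disconnect()  # assumed safe to call even if the client is already disconnected

def signal_handler(signum, _frame):
    log.warning(f'Signal handler called with signal {signum}')
    sys.exit(0)  # raises SystemExit, so the finally block below still runs

def main() -> int:
    # do the work that uses the socket here
    return 0

if __name__ == "__main__":
    for sig in (signal.SIGQUIT, signal.SIGTERM, signal.SIGTSTP, signal.SIGINT):
        signal.signal(sig, signal_handler)
    try:
        exit_code = main()
    finally:
        close_socket()  # runs on normal return, on exceptions, and on sys.exit()
    sys.exit(exit_code)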

Hope that helps you too

Maintainers: you should consider fixing the incompatibility with the Python atexit builtin; becoming a zombie is embarrassing. And while I might agree that it is not necessary to hand-hold users or include platform-specific code for them, I do think it is a hard requirement for every user on Linux to call Client.disconnect, so it would be a good idea to inform users how to implement this core quirk properly using signal handlers. There are just so many ways this library turns into a zombie process; if you are not going to manage this inherently, you should at least document it in the getting-started guide.

asnelzin

comment created time in 2 months

issue comment OWASP/Amass

[Feature request] Save/Cache API responses

@caffix this is wonderful news, thank you! I reviewed the diff and am not 100% sure, but I don't think there is a way to specify where responses are saved on the file system. It seems the cache is purely the graph database, responses remain private to the app, and users still cannot access their data from third-party integrations. If we can't see or access the data, that code does not address this ticket at all; it only adds an internal cache with a TTL, which is related to this ticket only in the sense that you and I both breathe but are not family :)

chrisdlangton

comment created time in 2 months
