profile
viewpoint

Cougar/Arduino 2

open-source electronics prototyping platform

Cougar/atom.js 1

Controller software for the home automation project projekt.auml.se, based on Node.js

Cougar/ActionBarSherlock 0

Library for implementing the action bar design pattern using the native action bar on Android 4.0+ and a custom implementation on pre-4.0 through a single API and theme.

Cougar/afwall 0

Android firewall (iptables frontend), forked from DroidWall

Cougar/Airtime 0

Airtime is Sourcefabric’s open source radio software for scheduling and remote station management. Airtime provides a reliable audio playout with sub-second precision, an improved interface with modern usability features, advanced user management supporting roles and a Google-style calendar to schedule and move shows and playlists.

Cougar/android-ColorPickerPreference 0

ColorPickerPreference for Android to create a color picker in preferences. Created as a library project.

Cougar/android-jamod-TCP 0

An update on the open source Jamod Modbus library with reduced serial code and added features for TCP

Cougar/ansible-action-plugins 0

Ansible docker_copy action plugin

Cougar/ansible-docker 0

Ansible Docker Playbook

Cougar/ansible-ini2yaml 0

Ansible INI to YAML inventory converter

PR opened arendst/Tasmota

Fix wraparound bug in backlog

Description:

Fix a wraparound bug in the backlog caused by the incorrect assumption that millis time 0 would always be in the past. After around 25 days of uptime, millis() passed 2^31 and the backlog stopped working.

Also rename backlog_delay to backlog_timer and fix a similar (but more harmless) assumption in wifiKeepAlive.
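The timer pattern behind such fixes can be illustrated outside of C: never compare absolute timestamps, compute the elapsed time with wrapping unsigned subtraction. Below is a small Python model of 32-bit millis() arithmetic (illustrative only, not the actual Tasmota code):

```python
MASK = 0xFFFFFFFF  # millis() wraps at 2**32 milliseconds (~49.7 days)

def elapsed(now, since):
    """Milliseconds elapsed from 'since' to 'now', safe across the wrap.

    Subtracting first and masking to 32 bits yields the true elapsed
    time even after the counter has wrapped, whereas a naive comparison
    like 'now >= deadline' breaks once the counter overflows.
    """
    return (now - since) & MASK

# Just after the counter wrapped: 'now' is numerically smaller than
# 'since', yet the masked subtraction still gives the correct answer.
now, since = 0x00000010, 0xFFFFFF00
print(elapsed(now, since))  # 272 ms, even though now < since
```

The same idea applies to the wifiKeepAlive case: store the last event time and test `elapsed(millis(), last) >= interval` instead of comparing against a precomputed absolute deadline.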

Checklist:

  • [ ] The pull request is done against the latest dev branch
  • [ ] Only relevant files were touched
  • [ ] Only one feature/fix was added per PR and the code change compiles without warnings
  • [ ] The code change is tested and works on Tasmota core ESP8266 V.2.7.4.7
  • [ ] The code change is tested and works on Tasmota core ESP32 V.1.0.4.2
  • [ ] I accept the CLA.

NOTE: The code change must pass CI tests. Your PR cannot be merged unless tests pass

+9 -9

0 comment

3 changed files

pr created time in 8 minutes

pull request comment syncthing/syncthing

lib/ur: Store unreported failures on shutdown

Is there reason to believe we have time to write them to disk? (And if there is, why don't we have time to send them upstream instead?)

imsodin

comment created time in 15 minutes

pull request comment syncthing/syncthing

gui: Restore Select / Deselect All buttons in device sharing tab.

Thanks @calmh. So you will merge the commit over to release?

acolomb

comment created time in 39 minutes

issue closed riptideio/pymodbus

Modbus Server with multiple slave devices context - writing to a register overwrites to all slaves


Versions

  • Python: 3.8
  • OS: Ubuntu Buster
  • Pymodbus: 2.4.0
  • Modbus Hardware (if used): USB-RS485-WE-1800-BT (FTDI Chip)

Pymodbus Specific

  • Server: rtu - async (but tried both async and sync)

Description

I have an issue with a simple pymodbus server that serves multiple slave contexts: writing to slave 1, register address 1, should be a different register from slave 100, register 1.

In my case, writing to registers on slave 100 results in writing to registers for ALL slaves. Could someone go over my server code to see if I am missing something, or whether this is a bug?

The log suggests the server writes to the right slaves and registers, but on the master side the values end up wrong.

Someone had a similar issue on Stack Overflow, but with a TCP server implementation: https://stackoverflow.com/questions/61949194/pymodbus-modbus-server-implementation-with-multiple-slave-devices-context-writ

Code and Logs

Code

#!/usr/bin/env python
"""
Pymodbus Server With Updating Thread
--------------------------------------------------------------------------

This is an example of having a background thread updating the
context while the server is operating. This can also be done with
a python thread::

    from threading import Thread

    thread = Thread(target=updating_writer, args=(context,))
    thread.start()
"""
# --------------------------------------------------------------------------- #
# import the modbus libraries we need
# --------------------------------------------------------------------------- #
from pymodbus.server.asynchronous import StartSerialServer
from pymodbus.device import ModbusDeviceIdentification
from pymodbus.datastore import ModbusSequentialDataBlock
from pymodbus.datastore import ModbusSlaveContext, ModbusServerContext
from pymodbus.transaction import ModbusRtuFramer, ModbusAsciiFramer

# --------------------------------------------------------------------------- #
# import the payload builder
# --------------------------------------------------------------------------- #
from pymodbus.constants import Endian
from pymodbus.payload import BinaryPayloadDecoder
from pymodbus.payload import BinaryPayloadBuilder

# --------------------------------------------------------------------------- #
# import the twisted libraries we need
# --------------------------------------------------------------------------- #
from twisted.internet.task import LoopingCall

# --------------------------------------------------------------------------- #
# configure the service logging
# --------------------------------------------------------------------------- #
import logging
logging.basicConfig()
log = logging.getLogger()
log.setLevel(logging.DEBUG)

# --------------------------------------------------------------------------- #
# define your callback process
# --------------------------------------------------------------------------- #


def updating_writer(a):
    """ A worker process that runs every so often and
    updates live values of the context. It should be noted
    that there is a race condition for the update.

    :param arguments: The input arguments to the call
    """
    context = a[0]
    register = 3
    
    #### Write to registers slave 1 ####
    slave_id = 0x01
    log.debug(f"::: Make payload to SLAVE={slave_id} :::")

    # Total energy
    builder = BinaryPayloadBuilder(
        byteorder=Endian.Big,
        wordorder=Endian.Little
        )
    builder.add_32bit_int(20000)  # kWh Tot*10
    energy = builder.to_registers()

    # Phase 1 variables
    builder = BinaryPayloadBuilder(
        byteorder=Endian.Big,
        wordorder=Endian.Little
        )
    builder.add_32bit_int(4000)  # VL1L2*10
    builder.add_32bit_int(2300)  # VL1N*10
    builder.add_32bit_int(1000)  # AL1*1000
    builder.add_32bit_int(2300)  # kWL1*10
    phase1 = builder.to_registers()

    log.debug(f"::: Write registers to SLAVE={slave_id} :::")
    context[slave_id].setValues(register, 0x0112, energy)
    context[slave_id].setValues(register, 0x011e, phase1)

    #### Write to registers slave 100 ####
    slave_id = 0x64
    log.debug(f"::: Make payload to SLAVE={slave_id} :::")

    # Total energy
    builder = BinaryPayloadBuilder(
        byteorder=Endian.Big,
        wordorder=Endian.Little
        )
    builder.add_32bit_int(20000)  # kWh Tot*10
    energy = builder.to_registers()

    # Phase 1 variables
    builder = BinaryPayloadBuilder(
        byteorder=Endian.Big,
        wordorder=Endian.Little
        )
    builder.add_32bit_int(4000)  # VL1L2*10
    builder.add_32bit_int(2300)  # VL1N*10
    builder.add_32bit_int(2000)  # AL1*1000
    builder.add_32bit_int(4600)  # kWL1*10
    phase1 = builder.to_registers()

    log.debug(f"::: Write registers to SLAVE={slave_id} :::")
    context[slave_id].setValues(register, 0x0112, energy)
    context[slave_id].setValues(register, 0x011e, phase1)

    # MeterID
    context[slave_id].setValues(register, 0x000b, [0x0155])


def run_updating_server():
    # ----------------------------------------------------------------------- # 
    # initialize your data store
    # ----------------------------------------------------------------------- # 
    store = ModbusSlaveContext(zero_mode=True)

    # Add datastores to slaves
    addresses = [1, 100]
    slaves = {}
    for adress in addresses:
        slaves.update({adress: store})

    context = ModbusServerContext(slaves=slaves, single=False)

    # ----------------------------------------------------------------------- # 
    # initialize the server information
    # ----------------------------------------------------------------------- # 
    identity = ModbusDeviceIdentification()
    identity.VendorName = 'pymodbus'
    identity.ProductCode = 'PM'
    identity.VendorUrl = 'http://github.com/bashwork/pymodbus/'
    identity.ProductName = 'pymodbus Server'
    identity.ModelName = 'pymodbus Server'
    identity.MajorMinorRevision = '2.3.0'

    # ----------------------------------------------------------------------- # 
    # run the server you want
    # ----------------------------------------------------------------------- # 
    time = 5  # 5 seconds delay
    loop = LoopingCall(f=updating_writer, a=(context,))
    loop.start(time, now=False) # initially delay by time
    StartSerialServer(
        context=context,
        framer=ModbusRtuFramer,
        identity=identity,
        port='/dev/ttyUSB0',
        timeout=0.0001,
        baudrate=9600,
        parity='N',
        bytesize=8,
        stopbits=1,
        ignore_missing_slaves=True)


if __name__ == "__main__":
    run_updating_server()

Logs

INFO:pymodbus.server.asynchronous:Starting Modbus Serial Server on /dev/ttyUSB0
DEBUG:pymodbus.server.asynchronous:Client Connected [/dev/ttyUSB0]
DEBUG:pymodbus.server.asynchronous:Running in Main thread
DEBUG:pymodbus.server.asynchronous:Data Received: 0x1 0x3 0x0 0xb 0x0
DEBUG:pymodbus.framer.rtu_framer:Frame check failed, ignoring!!
DEBUG:pymodbus.framer.rtu_framer:Resetting frame - Current Frame in buffer - 0x1 0x3 0x0 0xb 0x0
DEBUG:pymodbus.server.asynchronous:Data Received: 0x1 0xf5 0xc8
DEBUG:pymodbus.framer.rtu_framer:Frame - [b'\x01\xf5\xc8'] not ready
DEBUG:pymodbus.server.asynchronous:Data Received: 0x2 0x3 0x0 0xb 0x0 0x1 0xf5 0xfb
DEBUG:pymodbus.framer.rtu_framer:CRC invalid, discarding header!!
DEBUG:pymodbus.framer.rtu_framer:Resetting frame - Current Frame in buffer - 0x1 0xf5 0xc8 0x2 0x3 0x0 0xb 0x0 0x1 0xf5 0xfb
DEBUG:pymodbus.framer.rtu_framer:Frame check failed, ignoring!!
DEBUG:pymodbus.framer.rtu_framer:Resetting frame - Current Frame in buffer - 
DEBUG:pymodbus.server.asynchronous:Data Received: 0x64 0x3 0x0 0xb 0x0 0x1 0xfc 0x3d
DEBUG:pymodbus.framer.rtu_framer:Getting Frame - 0x3 0x0 0xb 0x0 0x1
DEBUG:pymodbus.factory:Factory Request[ReadHoldingRegistersRequest: 3]
DEBUG:pymodbus.framer.rtu_framer:Frame advanced, resetting header!!
DEBUG:pymodbus.datastore.context:validate: fc-[3] address-11: count-1
DEBUG:pymodbus.datastore.context:getValues fc-[3] address-11: count-1
DEBUG:pymodbus.server.asynchronous:send: b'6403020000f44c'
DEBUG:pymodbus.server.asynchronous:Data Received: 0x65
DEBUG:pymodbus.framer.rtu_framer:Frame - [b'e'] not ready
DEBUG:pymodbus.server.asynchronous:Data Received: 0x3 0x0 0xb 0x0 0x1 0xfd 0xec
DEBUG:pymodbus.framer.rtu_framer:Not a valid unit id - 101, ignoring!!
DEBUG:pymodbus.framer.rtu_framer:Resetting frame - Current Frame in buffer - 0x65 0x3 0x0 0xb 0x0 0x1 0xfd 0xec
DEBUG:pymodbus.server.asynchronous:Data Received: 0x1 0x3 0x50 0x0 0x0 0x7 0x15 0x8
DEBUG:pymodbus.framer.rtu_framer:Getting Frame - 0x3 0x50 0x0 0x0 0x7
DEBUG:pymodbus.factory:Factory Request[ReadHoldingRegistersRequest: 3]
DEBUG:pymodbus.framer.rtu_framer:Frame advanced, resetting header!!
DEBUG:pymodbus.datastore.context:validate: fc-[3] address-20480: count-7
DEBUG:pymodbus.datastore.context:getValues fc-[3] address-20480: count-7
DEBUG:pymodbus.server.asynchronous:send: b'01030e0000000000000000000000000000ef15'
DEBUG:root:::: Make payload to SLAVE=1 :::
DEBUG:pymodbus.payload:[20000, 0]
DEBUG:pymodbus.payload:[4000, 0, 2300, 0, 1000, 0, 2300, 0]
DEBUG:root:::: Write registers to SLAVE=1 :::
DEBUG:pymodbus.datastore.context:setValues[3] 274:2
DEBUG:pymodbus.datastore.context:setValues[3] 286:8
DEBUG:root:::: Make payload to SLAVE=100 :::
DEBUG:pymodbus.payload:[20000, 0]
DEBUG:pymodbus.payload:[4000, 0, 2300, 0, 2000, 0, 4600, 0]
DEBUG:root:::: Write registers to SLAVE=100 :::
DEBUG:pymodbus.datastore.context:setValues[3] 274:2
DEBUG:pymodbus.datastore.context:setValues[3] 286:8
DEBUG:pymodbus.datastore.context:setValues[3] 11:1
DEBUG:pymodbus.server.asynchronous:Data Received: 0x1 0x3 0x1 0x12 0x0 0x31 0x25 0xe7
DEBUG:pymodbus.framer.rtu_framer:Getting Frame - 0x3 0x1 0x12 0x0 0x31
DEBUG:pymodbus.factory:Factory Request[ReadHoldingRegistersRequest: 3]
DEBUG:pymodbus.framer.rtu_framer:Frame advanced, resetting header!!
DEBUG:pymodbus.datastore.context:validate: fc-[3] address-274: count-49
DEBUG:pymodbus.datastore.context:getValues fc-[3] address-274: count-49
DEBUG:pymodbus.server.asynchronous:send: b'0103624e20000000000000000000000000000000000000000000000fa0000008fc000007d0000011f80000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001621'
DEBUG:root:::: Make payload to SLAVE=1 :::
DEBUG:pymodbus.payload:[20000, 0]
DEBUG:pymodbus.payload:[4000, 0, 2300, 0, 1000, 0, 2300, 0]
DEBUG:root:::: Write registers to SLAVE=1 :::
DEBUG:pymodbus.datastore.context:setValues[3] 274:2
DEBUG:pymodbus.datastore.context:setValues[3] 286:8
DEBUG:root:::: Make payload to SLAVE=100 :::
DEBUG:pymodbus.payload:[20000, 0]
DEBUG:pymodbus.payload:[4000, 0, 2300, 0, 2000, 0, 4600, 0]
DEBUG:root:::: Write registers to SLAVE=100 :::
DEBUG:pymodbus.datastore.context:setValues[3] 274:2
DEBUG:pymodbus.datastore.context:setValues[3] 286:8
DEBUG:pymodbus.datastore.context:setValues[3] 11:1
DEBUG:pymodbus.server.asynchronous:Data Received: 0x1 0x3 0x89 0x60 0x0 0x1 0xae 0x48
DEBUG:pymodbus.framer.rtu_framer:Getting Frame - 0x3 0x89 0x60 0x0 0x1
DEBUG:pymodbus.factory:Factory Request[ReadHoldingRegistersRequest: 3]
DEBUG:pymodbus.framer.rtu_framer:Frame advanced, resetting header!!
DEBUG:pymodbus.datastore.context:validate: fc-[3] address-35168: count-1
DEBUG:pymodbus.datastore.context:getValues fc-[3] address-35168: count-1
DEBUG:pymodbus.server.asynchronous:send: b'0103020000b844'
DEBUG:root:::: Make payload to SLAVE=1 :::
DEBUG:pymodbus.payload:[20000, 0]
DEBUG:pymodbus.payload:[4000, 0, 2300, 0, 1000, 0, 2300, 0]
DEBUG:root:::: Write registers to SLAVE=1 :::
DEBUG:pymodbus.datastore.context:setValues[3] 274:2
DEBUG:pymodbus.datastore.context:setValues[3] 286:8
DEBUG:root:::: Make payload to SLAVE=100 :::
DEBUG:pymodbus.payload:[20000, 0]
DEBUG:pymodbus.payload:[4000, 0, 2300, 0, 2000, 0, 4600, 0]
DEBUG:root:::: Write registers to SLAVE=100 :::
DEBUG:pymodbus.datastore.context:setValues[3] 274:2
DEBUG:pymodbus.datastore.context:setValues[3] 286:8
DEBUG:pymodbus.datastore.context:setValues[3] 11:1
DEBUG:pymodbus.server.asynchronous:Data Received: 0x1 0x3 0x0 0xb 0x0 0x1 0xf5 0xc8
DEBUG:pymodbus.framer.rtu_framer:Getting Frame - 0x3 0x0 0xb 0x0 0x1
DEBUG:pymodbus.factory:Factory Request[ReadHoldingRegistersRequest: 3]
DEBUG:pymodbus.framer.rtu_framer:Frame advanced, resetting header!!
DEBUG:pymodbus.datastore.context:validate: fc-[3] address-11: count-1
DEBUG:pymodbus.datastore.context:getValues fc-[3] address-11: count-1
DEBUG:pymodbus.server.asynchronous:send: b'010302015579eb'
DEBUG:pymodbus.server.asynchronous:Data Received: 0x2
DEBUG:pymodbus.framer.rtu_framer:Frame - [b'\x02'] not ready
DEBUG:pymodbus.server.asynchronous:Data Received: 0x3 0x0 0xb 0x0 0x1 0xf5 0xfb
DEBUG:pymodbus.framer.rtu_framer:Not a valid unit id - 2, ignoring!!
DEBUG:pymodbus.framer.rtu_framer:Resetting frame - Current Frame in buffer - 0x2 0x3 0x0 0xb 0x0 0x1 0xf5 0xfb
DEBUG:pymodbus.server.asynchronous:Data Received: 0x64 0x3 0x0 0xb 0x0 0x1 0xfc 0x3d
DEBUG:pymodbus.framer.rtu_framer:Getting Frame - 0x3 0x0 0xb 0x0 0x1
DEBUG:pymodbus.factory:Factory Request[ReadHoldingRegistersRequest: 3]
DEBUG:pymodbus.framer.rtu_framer:Frame advanced, resetting header!!
DEBUG:pymodbus.datastore.context:validate: fc-[3] address-11: count-1
DEBUG:pymodbus.datastore.context:getValues fc-[3] address-11: count-1
DEBUG:pymodbus.server.asynchronous:send: b'640302015535e3'
DEBUG:pymodbus.server.asynchronous:Data Received: 0x65 0x3 0x0 0xb 0x0
DEBUG:pymodbus.framer.rtu_framer:Frame - [b'e\x03\x00\x0b\x00'] not ready
DEBUG:pymodbus.server.asynchronous:Data Received: 0x1 0xfd 0xec
DEBUG:pymodbus.framer.rtu_framer:Not a valid unit id - 101, ignoring!!
DEBUG:pymodbus.framer.rtu_framer:Resetting frame - Current Frame in buffer - 0x65 0x3 0x0 0xb 0x0 0x1 0xfd 0xec
DEBUG:pymodbus.server.asynchronous:Data Received: 0x1 0x3 0x50 0x0 0x0 0x7 0x15 0x8
DEBUG:pymodbus.framer.rtu_framer:Getting Frame - 0x3 0x50 0x0 0x0 0x7
DEBUG:pymodbus.factory:Factory Request[ReadHoldingRegistersRequest: 3]
DEBUG:pymodbus.framer.rtu_framer:Frame advanced, resetting header!!
DEBUG:pymodbus.datastore.context:validate: fc-[3] address-20480: count-7
DEBUG:pymodbus.datastore.context:getValues fc-[3] address-20480: count-7
DEBUG:pymodbus.server.asynchronous:send: b'01030e0000000000000000000000000000ef15'
DEBUG:pymodbus.server.asynchronous:Data Received: 0x64 0x3 0x50 0x0 0x0 0x7 0x1c 0xfd
DEBUG:pymodbus.framer.rtu_framer:Getting Frame - 0x3 0x50 0x0 0x0 0x7
DEBUG:pymodbus.factory:Factory Request[ReadHoldingRegistersRequest: 3]
DEBUG:pymodbus.framer.rtu_framer:Frame advanced, resetting header!!
DEBUG:pymodbus.datastore.context:validate: fc-[3] address-20480: count-7
DEBUG:pymodbus.datastore.context:getValues fc-[3] address-20480: count-7
DEBUG:pymodbus.server.asynchronous:send: b'64030e0000000000000000000000000000d45a'
DEBUG:pymodbus.server.asynchronous:Data Received: 0x1 0x3 0x1 0x12 0x0
DEBUG:pymodbus.framer.rtu_framer:Frame - [b'\x01\x03\x01\x12\x00'] not ready
DEBUG:pymodbus.server.asynchronous:Data Received: 0x31 0x25 0xe7
DEBUG:pymodbus.framer.rtu_framer:Getting Frame - 0x3 0x1 0x12 0x0 0x31
DEBUG:pymodbus.factory:Factory Request[ReadHoldingRegistersRequest: 3]
DEBUG:pymodbus.framer.rtu_framer:Frame advanced, resetting header!!
DEBUG:pymodbus.datastore.context:validate: fc-[3] address-274: count-49
DEBUG:pymodbus.datastore.context:getValues fc-[3] address-274: count-49
DEBUG:pymodbus.server.asynchronous:send: b'0103624e20000000000000000000000000000000000000000000000fa0000008fc000007d0000011f80000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001621'
DEBUG:pymodbus.server.asynchronous:Data Received: 0x64 0x3 0x1 0x12 0x0 0x31 0x2c 0x12
DEBUG:pymodbus.framer.rtu_framer:Getting Frame - 0x3 0x1 0x12 0x0 0x31
DEBUG:pymodbus.factory:Factory Request[ReadHoldingRegistersRequest: 3]
DEBUG:pymodbus.framer.rtu_framer:Frame advanced, resetting header!!
DEBUG:pymodbus.datastore.context:validate: fc-[3] address-274: count-49
DEBUG:pymodbus.datastore.context:getValues fc-[3] address-274: count-49
DEBUG:pymodbus.server.asynchronous:send: b'6403624e20000000000000000000000000000000000000

closed time in an hour

pydaw

issue comment riptideio/pymodbus

Modbus Server with multiple slave devices context - writing to a register overwrites to all slaves

It was not a bug; each slave needs its own store object:

# ----------------------------------------------------------------------- #
# initialize your data store
# ----------------------------------------------------------------------- #
# store = ModbusSlaveContext(zero_mode=True)  # one shared store: wrong

# Add a separate datastore per slave
addresses = [1, 100]
slaves = {}
for address in addresses:
    store = ModbusSlaveContext(zero_mode=True)
    slaves.update({address: store})

context = ModbusServerContext(slaves=slaves, single=False)
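The underlying behaviour can be sketched without pymodbus at all: storing the same object under several dict keys means every slave ID aliases the same registers. Below is a plain-Python model, where Store is a hypothetical stand-in for ModbusSlaveContext:

```python
class Store:
    """Stand-in for a slave datastore such as ModbusSlaveContext."""
    def __init__(self):
        self.regs = {}

# Original setup: ONE store shared by both slave IDs.
shared = Store()
slaves = {1: shared, 100: shared}
slaves[100].regs[0x000b] = 0x0155
print(slaves[1].regs[0x000b])      # 0x0155 -- slave 1 sees slave 100's write

# Fixed setup: a fresh store per slave ID.
slaves = {sid: Store() for sid in (1, 100)}
slaves[100].regs[0x000b] = 0x0155
print(slaves[1].regs.get(0x000b))  # None -- slaves are now independent
```

This is ordinary Python reference semantics, which is why the server "worked" in its logs yet returned identical values for every unit ID.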

pydaw

comment created time in an hour

push event arendst/Tasmota

Staars

commit sha f53ac700133fdb6a919f8c40c5955be3e5621071

ports from MI32 to HM10

view details

Theo Arends

commit sha 22fe64e265112d1de2dd6d4dc6898c1aa0ed0fc8

Merge pull request #9992 from Staars/HM10 HM10: ports from MI32 to HM10

view details

Platformio BUILD

commit sha deed947458b8f1c3ffa035ff6ebbee1c5c0ba57e

Tasmota ESP Binaries http://tasmota.com

view details

push time in an hour

pull request comment syncthing/syncthing

gui: Restore Select / Deselect All buttons in device sharing tab.

I've sent an invitation. However, please only ever file PRs against main, and ideally leave the milestones as is as I set and verify them as part of the release process. :)

acolomb

comment created time in an hour

push event arendst/Tasmota

Federico Leoni

commit sha 866b286bb84b565ef9ee083180286075cbb28d4d

HaTasmota: enhanced support for shutters

view details

Theo Arends

commit sha d58524aa19eb54acb45423c9791c5c56be6e8abf

Merge pull request #9994 from effelle/discovery HaTasmota: enhanced support for shutters

view details

Platformio BUILD

commit sha 3c434c5d2b4b7fbaf9b488cb2b0ed1a8813801f9

Tasmota ESP Binaries http://tasmota.com

view details

push time in an hour

PR opened threat9/routersploit

Zim.py

Status

READY/IN DEVELOPMENT/HOLD

Description

Describe what is changed by your Pull Request. If this PR is related to the open issue (bug/feature/new module) please attach issue number.

Verification

Provide steps to test or reproduce the PR.

  1. Start ./rsf.py
  2. use exploits/routers/dlink/dsl_2750b_rce
  3. set target 192.168.1.1
  4. run
  5. ...

Checklist

  • [ ] Write module/feature
  • [ ] Write tests (Example)
  • [ ] Document how it works (Example)
+17 -0

0 comment

1 changed file

pr created time in an hour

Pull request review comment syncthing/syncthing

Store pending devices and folders in database

// Copyright (C) 2020 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.

package db

import (
    "time"

    "github.com/syncthing/syncthing/lib/protocol"
)

func (db *Lowlevel) AddOrUpdatePendingDevice(device protocol.DeviceID, name, address string) error {
    key := db.keyer.GeneratePendingDeviceKey(nil, device[:])
    od := ObservedDevice{
        Time:    time.Now().Round(time.Second),
        Name:    name,
        Address: address,
    }
    bs, err := od.Marshal()
    if err == nil {
        err = db.Put(key, bs)
    }
    return err
}

func (db *Lowlevel) RemovePendingDevice(device protocol.DeviceID) {
    key := db.keyer.GeneratePendingDeviceKey(nil, device[:])
    if err := db.Delete(key); err != nil {
        l.Warnf("Failed to remove pending device entry: %v", err)
    }
}

// PendingDevices drops any invalid entries from the database after a
// warning log message, as a side-effect.  That's the only possible
// "repair" measure and appropriate for the importance of pending
// entries.  They will come back soon if still relevant.
func (db *Lowlevel) PendingDevices() (map[protocol.DeviceID]ObservedDevice, error) {
    iter, err := db.NewPrefixIterator([]byte{KeyTypePendingDevice})
    if err != nil {
        return nil, err
    }
    defer iter.Release()
    res := make(map[protocol.DeviceID]ObservedDevice)
    for iter.Next() {
        keyDev := db.keyer.DeviceFromPendingDeviceKey(iter.Key())
        deviceID, err := protocol.DeviceIDFromBytes(keyDev)
        var bs []byte
        var od ObservedDevice
        if err != nil {
            goto deleteKey
        }
        if bs, err = db.Get(iter.Key()); err != nil {
            goto deleteKey
        }
        if err = od.Unmarshal(bs); err != nil {
            goto deleteKey
        }
        res[deviceID] = od
        continue
    deleteKey:
        l.Infof("Invalid pending device entry, deleting from database: %x", iter.Key())
        if err := db.Delete(iter.Key()); err != nil {
            return nil, err
        }
    }
    return res, nil
}

func (db *Lowlevel) AddOrUpdatePendingFolder(id, label string, device protocol.DeviceID) error {
    key, err := db.keyer.GeneratePendingFolderKey(nil, device[:], []byte(id))
    if err != nil {
        return err
    }
    of := ObservedFolder{
        Time:  time.Now().Round(time.Second),
        Label: label,
    }
    bs, err := of.Marshal()
    if err == nil {
        err = db.Put(key, bs)
    }
    return err
}

// RemovePendingFolder removes entries for specific folder / device combinations, or all
// combinations matching just the folder ID, when given an empty device ID.
func (db *Lowlevel) RemovePendingFolder(id string, device []byte) {
    if len(device) > 0 {
        key, err := db.keyer.GeneratePendingFolderKey(nil, device, []byte(id))
        if err != nil {
            return
        }
        if err := db.Delete(key); err != nil {
            l.Warnf("Failed to remove pending folder entry: %v", err)
        }
    } else {
        iter, err := db.NewPrefixIterator([]byte{KeyTypePendingFolder})
        if err != nil {
            l.Warnf("Could not iterate through pending folder entries: %v", err)
            return
        }
        defer iter.Release()
        for iter.Next() {
            if id != string(db.keyer.FolderFromPendingFolderKey(iter.Key())) {
                continue
            }
            if err := db.Delete(iter.Key()); err != nil {
                l.Warnf("Failed to remove pending folder entry: %v", err)
            }
        }
    }
}

// PendingFolders drops any invalid entries from the database as a side-effect.
func (db *Lowlevel) PendingFolders() (map[string]map[protocol.DeviceID]ObservedFolder, error) {
    iter, err := db.NewPrefixIterator([]byte{KeyTypePendingFolder})

Yes, moved over, because it's less complex given entries are already stored in individual device->folder keys.

acolomb

comment created time in an hour


PR merged arendst/Tasmota

HM10: ports from MI32 to HM10

Description:

This should bring rough feature parity with the ESP32 version (minus decryption).

  • add commands HM10block and HM10option
  • allow negative temperature values
  • send generic BLE scans (with Beacon0) to MQTT
  • minor bugfixes and code refactoring

Checklist:

  • [x] The pull request is done against the latest dev branch
  • [x] Only relevant files were touched
  • [ ] Only one feature/fix was added per PR and the code change compiles without warnings
  • [x] The code change is tested and works on Tasmota core ESP8266 V.2.7.4.7
  • [ ] The code change is tested and works on Tasmota core ESP32 V.1.0.4.2
  • [x] I accept the CLA.

NOTE: The code change must pass CI tests. Your PR cannot be merged unless tests pass

+239 -158

1 comment

1 changed file

Staars

pr closed time in an hour


PR merged arendst/Tasmota

HaTasmota: enhanced support for shutters

Description:

Related issue (if applicable): fixes #<Tasmota issue number goes here>

Checklist:

  • [X] The pull request is done against the latest dev branch
  • [X] Only relevant files were touched
  • [X] Only one feature/fix was added per PR and the code change compiles without warnings
  • [X] The code change is tested and works on Tasmota core ESP8266 V.2.7.4.7
  • [X] The code change is tested and works on Tasmota core ESP32 V.1.0.4.2
  • [X] I accept the CLA.

NOTE: The code change must pass CI tests. Your PR cannot be merged unless tests pass

+11 -3

0 comment

1 changed file

effelle

pr closed time in an hour

issue comment nodemailer/wildduck

AWS migration done - ZoneMTA is not receiving inputs from Wildduck

That isn't an issue with wildduck. Try reading the linked article.

The determination of whether or not an IP address is authorized to send mail is made by the ISP that provides you with the IP address.

So I guess contact Amazon?

lindaravindran1612

comment created time in an hour

push event pavel-odintsov/fastnetmon

Pavel Odintsov

commit sha 93bea219f35b034c929505a67ac9cb4c6e8689e1

Added logic to completely suppress traffic log collection. Remediation for crashes

view details

push time in an hour

pull request comment syncthing/syncthing

gui: Split folders into two categories on the sharing tab for devices.

Right, but on the HTML side you added something related to encrypted devices, which I haven't examined in detail yet. I'll dive into it and update this PR to cover the untrusted GUI as well.

acolomb

comment created time in an hour

issue comment pavel-odintsov/fastnetmon

Fastnetmon Freeze due to segfault

here is the fastnetmon.log

2020-11-27 15:34:53,377 [INFO] announce x.x.15.87/32 to GoBGP
2020-11-27 15:34:53,738 [INFO] Attack with direction: outgoing IP: x.x.15.87 Power: 32763 traffic samples collected
2020-11-27 15:34:53,739 [INFO] Call script for notify about attack details for: x.x.15.87
2020-11-27 15:34:53,740 [INFO] Script for notify about attack details is finished: x.x.15.87
2020-11-27 15:35:39,058 [ERROR] Attack to IP x.x.10.106 still going! We should not unblock this host
2020-11-27 15:36:39,058 [INFO] We will unban banned IP: x.x.10.106 because it ban time 1900 seconds is ended
2020-11-27 15:36:39,059 [INFO] Call script for unban client: x.x.10.106
2020-11-27 15:36:39,059 [INFO] Script for unban client is finished: x.x.10.106
2020-11-27 15:36:39,059 [INFO] Call GoBGP for unban client started: x.x.10.106
2020-11-27 15:36:39,152 [INFO] Call to GoBGP for unban client is finished: x.x.10.106
2020-11-27 15:36:39,166 [INFO] withdraw x.x.10.106/32 to GoBGP
2020-11-27 15:36:41,274 [INFO] We run execute_ip_ban code with following params in_pps: 4048 out_pps: 28248 in_bps: 810975 out_bps: 21206359 and we decide it's outgoing attack
2020-11-27 15:36:41,275 [INFO] Attack with direction: outgoing IP: x.x.10.106 Power: 28248
2020-11-27 15:36:41,275 [INFO] Call script for ban client: x.x.10.106
2020-11-27 15:36:41,275 [INFO] Script for ban client is finished: x.x.10.106
2020-11-27 15:36:41,275 [INFO] Call GoBGP for ban client started: x.x.10.106
2020-11-27 15:36:41,399 [INFO] Call to GoBGP for ban client is finished: x.x.10.106
2020-11-27 15:36:41,399 [INFO] announce x.x.10.106/32 to GoBGP
2020-11-27 15:36:42,171 [INFO] Attack with direction: outgoing IP: x.x.10.106 Power: 28248 traffic samples collected
2020-11-27 15:36:42,172 [INFO] Call script for notify about attack details for: x.x.10.106
2020-11-27 15:36:42,172 [INFO] Script for notify about attack details is finished: x.x.10.106
2020-11-27 15:44:46,441 [INFO] We run execute_ip_ban code with following params in_pps: 154004 out_pps: 10061 in_bps: 9783737 out_bps: 1183202 and we decide it's incoming attack
2020-11-27 15:44:46,442 [INFO] Attack with direction: incoming IP: x.x.14.187 Power: 154004
2020-11-27 15:44:46,442 [INFO] Call script for ban client: x.x.14.187
2020-11-27 15:44:46,442 [INFO] Script for ban client is finished: x.x.14.187
2020-11-27 15:44:46,442 [INFO] Call GoBGP for ban client started: x.x.14.187
2020-11-27 15:44:46,443 [INFO] Call to GoBGP for ban client is finished: x.x.14.187
2020-11-27 15:44:46,553 [INFO] announce x.x.14.187/32 to GoBGP

dmesg -T output

[Fri Nov 27 15:44:38 2020] fastnetmon[33673]: segfault at 7ff910021028 ip 000000000061ca98 sp 00007ffafe48e370 error 4 in fastnetmon[400000+4c5000]
[Fri Nov 27 15:44:40 2020] device eno2 left promiscuous mode
[Fri Nov 27 15:50:36 2020] device eno2 entered promiscuous mode

blackmetal1

comment created time in an hour

pull request comment syncthing/syncthing

gui: Split folders into two categories on the sharing tab for devices.

Again, thanks for that. I am talking about the untrusted bit now:

This PR does not include replicating the changes to the untrusted GUI, as it is nontrivial in this case.

The problem is the same copy-paste error you just fixed in the "classic" JS. All that's needed is:

--- a/gui/default/untrusted/syncthing/core/syncthingController.js
+++ b/gui/default/untrusted/syncthing/core/syncthingController.js
@@ -1457,16 +1457,16 @@ angular.module('syncthing.core')
         };
 
         $scope.selectAllSharedFolders = function (state) {
-            var devices = $scope.currentSharing.shared;
-            for (var i = 0; i < devices.length; i++) {
-                $scope.currentSharing.selected[devices[i].deviceID] = !!state;
+            var folders = $scope.currentSharing.shared;
+            for (var i = 0; i < folders.length; i++) {
+                $scope.currentSharing.selected[folders[i].id] = !!state;
             }
         };
 
         $scope.selectAllUnrelatedFolders = function (state) {
-            var devices = $scope.currentSharing.unrelated;
-            for (var i = 0; i < devices.length; i++) {
-                $scope.currentSharing.selected[devices[i].deviceID] = !!state;
+            var folders = $scope.currentSharing.unrelated;
+            for (var i = 0; i < folders.length; i++) {
+                $scope.currentSharing.selected[folders[i].id] = !!state;
             }
         };
acolomb

comment created time in 2 hours

issue commentnapalm-automation/napalm

IOS XE Traceroute ttl

Sorry for the confusion. I'll try to be a little bit more concise. Imagine that you do not want to specify the ttl in the command that is sent to the device (e.g. for devices that do not support ttl, or just to get the default value of the device itself).

In my opinion the best way to do that would be to pass None into the traceroute function. device.traceroute("8.8.8.8", ttl=None)

In most drivers this method is even (partially) implemented already via the if ttl: case. However, some drivers also use the ttl to calculate other things, such as the max_loops variable: max_loops = (5 * ttl * timeout) + 150
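A minimal sketch of that idea in Python (the function and constant names here are hypothetical, not actual NAPALM driver code): omit the ttl keyword from the command when ttl is None, but still derive max_loops from a concrete fallback value.

```python
# Hypothetical sketch; not the real NAPALM driver code.
# The default values are illustrative assumptions only.
DEFAULT_TTL = 30      # assumed device default hop limit
DEFAULT_TIMEOUT = 2   # assumed per-probe timeout in seconds

def build_traceroute_command(destination, ttl=None, timeout=DEFAULT_TIMEOUT):
    """Build the CLI command plus a safe max_loops value.

    When ttl is falsy (None or 0) the ttl keyword is omitted from the
    command, mirroring the existing "if ttl:" pattern, and the assumed
    device default is used only for the max_loops calculation.
    """
    command = "traceroute {}".format(destination)
    if ttl:
        command += " ttl {}".format(ttl)
    effective_ttl = ttl if ttl else DEFAULT_TTL
    max_loops = (5 * effective_ttl * timeout) + 150
    return command, max_loops
```

With ttl=None this yields "traceroute 8.8.8.8" and a max_loops computed from the fallback, so drivers that derive loop counts from ttl keep working without a literal ttl in the command.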

MatthiasGabriel

comment created time in 2 hours

issue commentReactTraining/react-router

[v6] <Link> crashes if "to" prop is undefined

I think my point is that if to is undefined due to a developer's silliness, then crashing completely with c is undefined can be hard to debug in any non-trivial application, and it's not a great developer experience.

If it's the intention of the library authors that it should crash with an indecipherable error message, then that's fine. But I don't think it is - it smells like a bug - and I think the issue should be reopened.

thedanwoods

comment created time in 2 hours

Pull request review commentsyncthing/syncthing

Store pending devices and folders in database

+// Copyright (C) 2020 The Syncthing Authors.
+//
+// This Source Code Form is subject to the terms of the Mozilla Public
+// License, v. 2.0. If a copy of the MPL was not distributed with this file,
+// You can obtain one at https://mozilla.org/MPL/2.0/.
+
+package db
+
+import (
+	"time"
+
+	"github.com/syncthing/syncthing/lib/protocol"
+)
+
+func (db *Lowlevel) AddOrUpdatePendingDevice(device protocol.DeviceID, name, address string) error {
+	key := db.keyer.GeneratePendingDeviceKey(nil, device[:])
+	od := ObservedDevice{
+		Time:    time.Now().Round(time.Second),
+		Name:    name,
+		Address: address,
+	}
+	bs, err := od.Marshal()
+	if err == nil {
+		err = db.Put(key, bs)
+	}
+	return err
+}
+
+func (db *Lowlevel) RemovePendingDevice(device protocol.DeviceID) {
+	key := db.keyer.GeneratePendingDeviceKey(nil, device[:])
+	if err := db.Delete(key); err != nil {
+		l.Warnf("Failed to remove pending device entry: %v", err)
+	}
+}
+
+// PendingDevices drops any invalid entries from the database after a
+// warning log message, as a side-effect.  That's the only possible
+// "repair" measure and appropriate for the importance of pending
+// entries.  They will come back soon if still relevant.
+func (db *Lowlevel) PendingDevices() (map[protocol.DeviceID]ObservedDevice, error) {
+	iter, err := db.NewPrefixIterator([]byte{KeyTypePendingDevice})
+	if err != nil {
+		return nil, err
+	}
+	defer iter.Release()
+	res := make(map[protocol.DeviceID]ObservedDevice)
+	for iter.Next() {
+		keyDev := db.keyer.DeviceFromPendingDeviceKey(iter.Key())
+		deviceID, err := protocol.DeviceIDFromBytes(keyDev)
+		var bs []byte
+		var od ObservedDevice
+		if err != nil {
+			goto deleteKey
+		}
+		if bs, err = db.Get(iter.Key()); err != nil {
+			goto deleteKey
+		}
+		if err = od.Unmarshal(bs); err != nil {
+			goto deleteKey
+		}
+		res[deviceID] = od
+		continue
+	deleteKey:
+		l.Infof("Invalid pending device entry, deleting from database: %x", iter.Key())
+		if err := db.Delete(iter.Key()); err != nil {
+			return nil, err
+		}
+	}
+	return res, nil
+}
+
+func (db *Lowlevel) AddOrUpdatePendingFolder(id, label string, device protocol.DeviceID) error {
+	key, err := db.keyer.GeneratePendingFolderKey(nil, device[:], []byte(id))
+	if err != nil {
+		return err
+	}
+	of := ObservedFolder{
+		Time:  time.Now().Round(time.Second),
+		Label: label,
+	}
+	bs, err := of.Marshal()
+	if err == nil {
+		err = db.Put(key, bs)
+	}
+	return err
+}
+
+// RemovePendingFolder removes entries for specific folder / device combinations, or all
+// combinations matching just the folder ID, when given an empty device ID.
+func (db *Lowlevel) RemovePendingFolder(id string, device []byte) {
+	if len(device) > 0 {
+		key, err := db.keyer.GeneratePendingFolderKey(nil, device, []byte(id))
+		if err != nil {
+			return
+		}
+		if err := db.Delete(key); err != nil {
+			l.Warnf("Failed to remove pending folder entry: %v", err)
+		}
+	} else {
+		iter, err := db.NewPrefixIterator([]byte{KeyTypePendingFolder})
+		if err != nil {
+			l.Warnf("Could not iterate through pending folder entries: %v", err)
+			return
+		}
+		defer iter.Release()
+		for iter.Next() {
+			if id != string(db.keyer.FolderFromPendingFolderKey(iter.Key())) {
+				continue
+			}
+			if err := db.Delete(iter.Key()); err != nil {
+				l.Warnf("Failed to remove pending folder entry: %v", err)
+			}
+		}
+	}
+}
+
+// PendingFolders drops any invalid entries from the database as a side-effect.
+func (db *Lowlevel) PendingFolders() (map[string]map[protocol.DeviceID]ObservedFolder, error) {
+	iter, err := db.NewPrefixIterator([]byte{KeyTypePendingFolder})

Yes sure. Just felt cleaner to limit the complexity within db package at some point. Do you really want it moved over, or is the current separation also acceptable?

acolomb

comment created time in 2 hours

Pull request review commentsyncthing/syncthing

Store pending devices and folders in database


You can keep that possibility by using the same semantics as on model: EmptyDeviceID means all devices.
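As a rough illustration of those semantics (Python pseudocode with hypothetical names; the real implementation is Go inside the db package): a single removal function where an empty device ID acts as the sentinel for "all devices".

```python
# Illustrative sketch of the sentinel semantics discussed above; the real
# implementation is Go. An empty device ID means "all devices for this folder".

EMPTY_DEVICE_ID = b""

def remove_pending_folder(store, folder_id, device_id=EMPTY_DEVICE_ID):
    """Delete pending-folder entries; keys are (folder_id, device_id) tuples."""
    for key in list(store):          # list() so we can delete while iterating
        k_folder, k_device = key
        if k_folder != folder_id:
            continue
        if device_id != EMPTY_DEVICE_ID and k_device != device_id:
            continue
        del store[key]

store = {("f1", b"A"): 1, ("f1", b"B"): 2, ("f2", b"A"): 3}
remove_pending_folder(store, "f1")   # empty sentinel: drops both f1 entries
assert store == {("f2", b"A"): 3}
```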

acolomb

comment created time in 2 hours

Pull request review commentnapalm-automation/napalm

Add ncclient dependency for iosxr_netconf driver

 junos-eznc>=2.2.1
 ciscoconfparse
 scp
 lxml>=4.3.0
+ncclient

I think ncclient should already be a dependency of junos-eznc referenced above, unless we need to pin to a specific ncclient version. What issues are you seeing @111pontes?

111pontes

comment created time in 2 hours

issue openedsyncthing/syncthing

Connections aren't actually closed when closing a protocol connection

Came up starting in https://github.com/syncthing/syncthing/pull/7141#issuecomment-733247434. Short summary: a protocol connection doesn't control the underlying TLS connection, thus it cannot close it. All the methods that supposedly close it just tell the model that it's closed and potentially send a Closed BEP msg, but don't actually close the connection.

The type hierarchy looks like this:
model has connections.Service, implemented by completeConn{internalConn, protocol.Connection}; completeConn and internalConn are wrappers of the concrete underlying TLS connection type (TCP, QUIC, ...).
The protocol connection gets a reader and a writer based on internalConn. An option would be to instead pass it an io.ReadWriteCloser, based on the same.
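The io.ReadWriteCloser idea, paraphrased as a tiny Python sketch (class names here are invented for illustration; the real types are the Go ones named above): if the protocol layer holds an object that also owns close, closing the protocol connection can genuinely tear down the transport.

```python
# Illustrative sketch only; the real Syncthing code is Go and these class
# names are made up. It mirrors the proposed shape: the protocol layer
# receives one read/write/close object instead of a bare reader and writer.

class TransportConn:
    """Stands in for the concrete TLS connection (TCP, QUIC, ...)."""
    def __init__(self):
        self.closed = False

    def read(self, n):
        return b""

    def write(self, data):
        return len(data)

    def close(self):
        self.closed = True

class ProtocolConn:
    """Protocol layer holding the transport as a ReadWriteCloser-like object."""
    def __init__(self, rwc):
        self._rwc = rwc

    def close(self):
        # ... send the Closed BEP message, notify the model, and then
        # actually close the underlying connection:
        self._rwc.close()

transport = TransportConn()
ProtocolConn(transport).close()
assert transport.closed  # the transport really is closed now
```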

created time in 2 hours

delete branch napalm-automation/napalm

delete branch : dependabot/pip/coveralls-2.2.0

delete time in 2 hours

push eventnapalm-automation/napalm

dependabot-preview[bot]

commit sha 8aa0c30d2c30d4aa86ace2436dd855a5685c44fe

Bump coveralls from 2.1.2 to 2.2.0 Bumps [coveralls](https://github.com/coveralls-clients/coveralls-python) from 2.1.2 to 2.2.0. - [Release notes](https://github.com/coveralls-clients/coveralls-python/releases) - [Changelog](https://github.com/coveralls-clients/coveralls-python/blob/master/CHANGELOG.md) - [Commits](https://github.com/coveralls-clients/coveralls-python/compare/2.1.2...2.2.0) Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

view details

Mircea Ulinic

commit sha 3683af48a465e5307125804c9d9fe8b740bb2458

Merge pull request #1325 from napalm-automation/dependabot/pip/coveralls-2.2.0 Bump coveralls from 2.1.2 to 2.2.0

view details

push time in 2 hours

PR merged napalm-automation/napalm

Bump coveralls from 2.1.2 to 2.2.0 dependency issue

Bumps coveralls from 2.1.2 to 2.2.0.

Release notes / changelog (sourced from coveralls-python, 2.2.0, 2020-11-20):

  • Features: api: add workaround allowing job resubmission (#241)
  • Bug fixes: integrations: fixup environment detection for Semaphore CI (#236)

Commits:

  • a9b3629 chore(release): bump version
  • 631f9e9 refactor(api): fixup lint issues
  • d59acd3 docs(readme): highlight documentation links
  • ad4f8fa fix(integrations): fixup environment detection for Semaphore CI (#236)
  • 0de0c01 feat(api): add workaround allowing job resubmission (#241)
  • 6c8639c docs: fix typo in Tox docs (#237)
  • Full diff: https://github.com/coveralls-clients/coveralls-python/compare/2.1.2...2.2.0

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
  • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
  • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
  • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language
  • @dependabot badge me will comment on this PR with code to add a "Dependabot enabled" badge to your readme

Additionally, you can set the following in your Dependabot dashboard:

  • Update frequency (including time of day and day of week)
  • Pull request limits (per update run and/or open at any time)
  • Out-of-range updates (receive only lockfile updates, if desired)
  • Security updates (receive only security updates, if desired)

+1 -1

2 comments

1 changed file

dependabot-preview[bot]

pr closed time in 2 hours

PR opened arendst/Tasmota

HaTasmota: enhanced support for shutters

Description:

Related issue (if applicable): fixes #<Tasmota issue number goes here>

Checklist:

  • [X] The pull request is done against the latest dev branch
  • [X] Only relevant files were touched
  • [X] Only one feature/fix was added per PR and the code change compiles without warnings
  • [X] The code change is tested and works on Tasmota core ESP8266 V.2.7.4.7
  • [X] The code change is tested and works on Tasmota core ESP32 V.1.0.4.2
  • [X] I accept the CLA.

NOTE: The code change must pass CI tests. Your PR cannot be merged unless tests pass

+11 -3

0 comment

1 changed file

pr created time in 2 hours

more