krishmasand/Companion 1

Companion App for BMW i3

ankitpatel96/aws-application-auto-scaling-kinesis 0

With Amazon Application Auto Scaling you can register custom resources and automatically handle infrastructure or service resizing. This repository contains a demo showing how to integrate Amazon Application Auto Scaling with Amazon Kinesis Data Streams to scale your shards in a fully serverless fashion.

ankitpatel96/cpython 0

The Python programming language

ankitpatel96/django 0

The Web framework for perfectionists with deadlines.

Loquats/cs184-snow 0

Snow simulation using material point method, based on Stomakhin 2013

issue comment sqlalchemy/dogpile.cache

Make the Region Invalidation Strategy more flexible

I like the extensions that @zzzeek proposed, however...

Is this best implemented with a region invalidation though?

This looks like a use case for "wraps" or a custom deserializer/backend; at least, I have done similar things with that concept. (I'm not sure whether the internal payload is available in the wraps hooks or not.) You can just perform the date operations there and issue a miss so the value repopulates with the generator.
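A rough sketch of that approach, assuming the proxy layer sees CachedValue objects whose metadata carries the creation time ("ct"); as noted, that may not hold for every backend/serializer combination, and is_stale_for_key() is a hypothetical placeholder for the key-based rule:

from dogpile.cache import make_region
from dogpile.cache.api import NO_VALUE
from dogpile.cache.proxy import ProxyBackend


def is_stale_for_key(key, created_at):
    # Placeholder: whatever key/creation-time rule the application needs.
    return False


class KeyAwareExpiryProxy(ProxyBackend):
    def get(self, key):
        value = self.proxied.get(key)
        if value is NO_VALUE:
            return value
        if is_stale_for_key(key, value.metadata["ct"]):
            return NO_VALUE  # report a miss so the creator function runs again
        return value


region = make_region().configure(
    "dogpile.cache.memory",
    wrap=[KeyAwareExpiryProxy],
)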

JonathanWylie

comment created time in 43 minutes

issue comment sqlalchemy/dogpile.cache

Make the Region Invalidation Strategy more flexible

Looks like #38, sort of midway through, is where this came about. I'll name some of the usual suspects in case this issue is interesting to them: @morganfainberg @jvanasco

JonathanWylie

comment created time in 2 hours

issue comment sqlalchemy/dogpile.cache

Make the Region Invalidation Strategy more flexible

hi there -

I agree it seems strange that I went through all the trouble to add a custom region invalidation strategy and did not think that the key, which is available, should be passed through as well. That extension was created for someone else's use case; I have no idea what it was.

Here is a proposed API. If you could vet this for sanity etc. that would be very helpful; I rarely work with dogpile.cache, so I came up with this by rote:

diff --git a/dogpile/cache/region.py b/dogpile/cache/region.py
index 65ed334..224b148 100644
--- a/dogpile/cache/region.py
+++ b/dogpile/cache/region.py
@@ -104,6 +104,9 @@ class RegionInvalidationStrategy:
                 return (self._hard_invalidated and
                         timestamp < self._hard_invalidated)
 
+            def key_is_hard_invalidated(self, key, timestamp):
+                return self.is_hard_invalidated(timestamp)
+
             def was_soft_invalidated(self):
                 return bool(self._soft_invalidated)
 
@@ -111,6 +114,9 @@ class RegionInvalidationStrategy:
                 return (self._soft_invalidated and
                         timestamp < self._soft_invalidated)
 
+            def key_is_soft_invalidated(self, key, timestamp):
+                return self.is_soft_invalidated(timestamp)
+
     The custom implementation is injected into a :class:`.CacheRegion`
     at configure time using the
     :paramref:`.CacheRegion.configure.region_invalidator` parameter::
@@ -164,6 +170,21 @@ class RegionInvalidationStrategy:
 
         raise NotImplementedError()
 
+    def key_is_hard_invalidated(self, key: KeyType, timestamp: float) -> bool:
+        """Check timestamp and key to determine if it was hard invalidated.
+
+        Calls :meth:`.RegionInvalidator.is_hard_invalidated` by default.
+
+        :return: Boolean. True if ``timestamp`` is older than
+         the last region invalidation time and region is invalidated
+         in hard mode.
+
+        .. versionadded:: 1.1.2
+
+        """
+
+        return self.is_hard_invalidated(timestamp)
+
     def is_soft_invalidated(self, timestamp: float) -> bool:
         """Check timestamp to determine if it was soft invalidated.
 
@@ -175,6 +196,21 @@ class RegionInvalidationStrategy:
 
         raise NotImplementedError()
 
+    def key_is_soft_invalidated(self, key: KeyType, timestamp: float) -> bool:
+        """Check timestamp and key to determine if it was soft invalidated.
+
+        Calls :meth:`.RegionInvalidator.is_soft_invalidated` by default.
+
+        :return: Boolean. True if ``timestamp`` is older than
+         the last region invalidation time and region is invalidated
+         in soft mode.
+
+        .. versionadded:: 1.1.2
+
+        """
+
+        return self.is_soft_invalidated(timestamp)
+
     def is_invalidated(self, timestamp: float) -> bool:
         """Check timestamp to determine if it was invalidated.
 
@@ -185,6 +221,18 @@ class RegionInvalidationStrategy:
 
         raise NotImplementedError()
 
+    def key_is_invalidated(self, key: KeyType, timestamp: float) -> bool:
+        """Check timestamp and key to determine if it was invalidated.
+
+        :return: Boolean. True if ``timestamp`` is older than
+         the last region invalidation time.
+
+        .. versionadded:: 1.1.2
+
+        """
+
+        return self.is_invalidated(timestamp)
+
     def was_soft_invalidated(self) -> bool:
         """Indicate the region was invalidated in soft mode.
 
@@ -761,21 +809,21 @@ class CacheRegion:
             key = self.key_mangler(key)
         value = self._get_from_backend(key)
         value = self._unexpired_value_fn(expiration_time, ignore_expiration)(
-            value
+            key, value
         )
 
         return value.payload
 
     def _unexpired_value_fn(self, expiration_time, ignore_expiration):
         if ignore_expiration:
-            return lambda value: value
+            return lambda key, value: value
         else:
             if expiration_time is None:
                 expiration_time = self.expiration_time
 
             current_time = time.time()
 
-            def value_fn(value):
+            def value_fn(key, value):
                 if value is NO_VALUE:
                     return value
                 elif (
@@ -783,8 +831,8 @@ class CacheRegion:
                     and current_time - value.metadata["ct"] > expiration_time
                 ):
                     return NO_VALUE
-                elif self.region_invalidator.is_invalidated(
-                    value.metadata["ct"]
+                elif self.region_invalidator.key_is_invalidated(
+                    key, value.metadata["ct"]
                 ):
                     return NO_VALUE
                 else:
@@ -838,7 +886,7 @@ class CacheRegion:
         return [
             value.payload if value is not NO_VALUE else value
             for value in (
-                _unexpired_value_fn(value) for value in backend_values
+                _unexpired_value_fn(key, value) for key, value in zip(keys, backend_values)
             )
         ]
 
@@ -858,7 +906,7 @@ class CacheRegion:
             log.debug("No value present for key: %r", orig_key)
         elif value.metadata["v"] != value_version:
             log.debug("Dogpile version update for key: %r", orig_key)
-        elif self.region_invalidator.is_hard_invalidated(value.metadata["ct"]):
+        elif self.region_invalidator.key_is_hard_invalidated(orig_key, value.metadata["ct"]):
             log.debug("Hard invalidation detected for key: %r", orig_key)
         else:
             return False
@@ -965,7 +1013,7 @@ class CacheRegion:
                 raise NeedRegenerationException()
 
             ct = cast(CachedValue, value).metadata["ct"]
-            if self.region_invalidator.is_soft_invalidated(ct):
+            if self.region_invalidator.key_is_soft_invalidated(key, ct):
                 if expiration_time is None:
                     raise exception.DogpileCacheException(
                         "Non-None expiration time required "

JonathanWylie

comment created time in 2 hours

issue opened sqlalchemy/dogpile.cache

Make the Region Invalidation Strategy more flexible

I have a use case where I can only determine whether a cache entry has expired by knowing the key and the time it was created. As it stands, the RegionInvalidationStrategy interface only receives the creation time. If it were passed the key as well, I could implement the behaviour I need.

The use case is this: I am querying the Google Analytics API and caching the results in Redis. Although you can query for recent data, it is not guaranteed to be up to date. The key is the datetime for the analytics data, and I want to expire a cache entry if it was created less than a day after the key's datetime (i.e. it was recent data when it was created), but only if it is now more than an hour since it was created. I have currently implemented this by overriding Region._is_cache_miss in a subclass. This is not great, because it only applies to the get_or_create family of methods; get_multi and get use self._unexpired_value_fn, which of course I could also override, but given that RegionInvalidationStrategy is the proper way to manage invalidation, I think it should be done there, which also means it only has to be done in one place.
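
For reference, the rule described above could be expressed as a small key-aware check, meant to back a key-aware invalidation hook like the one requested here. This is only a sketch; it assumes the key is the ISO-format datetime of the analytics data and that the creation time is the epoch float dogpile records for the entry:

from datetime import datetime, timedelta


def analytics_entry_is_invalidated(key, creation_time):
    # Assumes `key` is the ISO datetime of the analytics data and
    # `creation_time` is the epoch float stored when the entry was created.
    data_dt = datetime.fromisoformat(key)
    created_dt = datetime.utcfromtimestamp(creation_time)
    was_recent_when_cached = created_dt < data_dt + timedelta(days=1)
    older_than_an_hour = datetime.utcnow() - created_dt > timedelta(hours=1)
    return was_recent_when_cached and older_than_an_hour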

created time in 13 hours

issue closed Grokzen/redis-py-cluster

some keys set fail

python 3.8.2 env

[screenshot]

When setting keys such as f1, x3, foo1, etc., the command raises an error.

closed time in 5 days

windyStreet

issue comment Grokzen/redis-py-cluster

some keys set fail

Yes, it was a configuration error.

Cause: when I created the cluster across two servers, I mistakenly used 127.0.0.1 as the member IP, like this:

/usr/local/redis/5.0.2/7004/bin/redis-cli
--cluster create
127.0.0.1:7004
127.0.0.1:7005
192.168.5.109:7006
192.168.5.109:7007
127.0.0.1:7008
192.168.5.109:7004
192.168.5.109:7005
127.0.0.1:7006
127.0.0.1:7007
192.168.5.109:7008
--cluster-replicas 1 -a xxx

The correct method looks like this:

/usr/local/redis/5.0.2/7004/bin/redis-cli
--cluster create
192.168.5.110:7004
192.168.5.110:7005
192.168.5.109:7006
192.168.5.109:7007
192.168.5.110:7008
192.168.5.109:7004
192.168.5.109:7005
192.168.5.110:7006
192.168.5.110:7007
192.168.5.109:7008
--cluster-replicas 1 -a xxx

windyStreet

comment created time in 5 days

issue closed Grokzen/redis-py-cluster

the unit of socket_timeout and socket_connect_timeout in RedisCluster

Hi, what is the unit of socket_timeout and socket_connect_timeout: seconds or milliseconds? Who can help me?

closed time in 6 days

zhaoyi2

issue comment Grokzen/redis-py-cluster

the unit of socket_timeout and socket_connect_timeout in RedisCluster

@zhaoyi2 The issues section is for issues; if you have a general question, please ask it in the Discussions tab here on GitHub.

zhaoyi2

comment created time in 6 days

issue opened Grokzen/redis-py-cluster

the unit of socket_timeout and socket_connect_timeout in RedisCluster

Hi, what is the unit of socket_timeout and socket_connect_timeout: seconds or milliseconds? Who can help me?

created time in 6 days

issue comment Grokzen/redis-py-cluster

some keys set fail

@windyStreet This is not enough information. This basic code works across the board for most people. I also need the stack trace that you get when keys fail as you mention. Most likely it is a configuration error of some kind on your end.

windyStreet

comment created time in 7 days

issue comment Grokzen/redis-py-cluster

some keys set fail

The code is like this:

rc = RedisCluster(startup_nodes=startup_nodes, decode_responses=True, password='xxx')

if __name__ == '__main__':
    # print(rc.connection_pool.nodes.nodes)
    rc.set("foo", "bar")
    print(rc.get("foo"))    # success
    rc.set("foox", "barx")
    print(rc.get("foox"))   # success
    rc.set("x3", "bar12")
    print(rc.get("x3"))     # fail for keys like f1, foo1, x3, ...

windyStreet

comment created time in 7 days

issue opened Grokzen/redis-py-cluster

some keys set fail

python 3.8.2 env

[screenshot]

When setting keys such as f1, x3, foo1, etc., the command raises an error.

created time in 7 days

PR opened kitspace/awesome-electronics

Update README.md

Added Louis Rossmann to Videos section

+1 -0

0 comment

1 changed file

pr created time in 9 days

issue closed Grokzen/redis-py-cluster

close

closed time in 13 days

wangpanfeng

issue opened Grokzen/redis-py-cluster

close

created time in 13 days

push event sqlalchemy/dogpile.cache

Mike Bayer

commit sha 505b35a3b7792a2c3a9d8e06c5f3915d5fe2fbaf

happy new year
Change-Id: Ia8e4138f8c869704addfd0ffcc01b95886db3dd6

view details

push time in 16 days

pull request comment Grokzen/redis-py-cluster

fix: possible connection leak in pipeline when max_connections is reached

@alivx Hi, thanks for the comment.

> use timeout option and retry in X time for any available connection

Sure. I will update the code.

Neon4o4

comment created time in 21 days

issue closed Grokzen/redis-py-cluster

UnboundLocalError: local variable 'connection' referenced before assignment

Hello guys, I've been playing with the library a little, but it seems we have an unbound variable on edge-cases:

  File "/nix/store/fklw44xkyjdw60g7yxyq8baswc6m6p3s-integrates-back-async/site-packages/redis/client.py", line 3050, in hset
    return self.execute_command('HSET', name, *items)
  File "/nix/store/fklw44xkyjdw60g7yxyq8baswc6m6p3s-integrates-back-async/site-packages/rediscluster/client.py", line 551, in execute_command
    return self._execute_command(*args, **kwargs)
  File "/nix/store/fklw44xkyjdw60g7yxyq8baswc6m6p3s-integrates-back-async/site-packages/rediscluster/client.py", line 702, in _execute_command
    self.connection_pool.release(connection)
UnboundLocalError: local variable 'connection' referenced before assignment

It seems that under error conditions the variable is never assigned (depending on the try/except flow):

[screenshot]

closed time in 22 days

kamadorueda

issue comment Grokzen/redis-py-cluster

UnboundLocalError: local variable 'connection' referenced before assignment

I hit the bug in a very extreme corner case (latest version of the library on PyPI).

I think it was because I had a local redis-server instead of a redis-cluster; that is supposed to fail, and it failed, so I'm OK with that.

In the end I used a local redis-cluster and it worked very well. In fact, this library is the one that allowed me to solve this issue: https://gitlab.com/fluidattacks/product/-/issues/3874

I don't think it's worth keeping the issue open, as the behaviour in real use cases is correct.

Thanks!

kamadorueda

comment created time in 22 days

issue comment Grokzen/redis-py-cluster

UnboundLocalError: local variable 'connection' referenced before assignment

@kamadorueda From a quick look, you are using an older version of this lib and should upgrade. If you look at the master branch here https://github.com/Grokzen/redis-py-cluster/blob/master/rediscluster/client.py#L703 you will see that a fix was added that checks whether the connection variable is set, which should help avoid this issue. Also, if you look here https://github.com/Grokzen/redis-py-cluster/blob/master/rediscluster/client.py#L590 you will see that the variable should no longer be unbound, as it is now set to None by default, which avoids the case you encountered.
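
For readers landing on this issue, the shape of that fix is the usual guard pattern. A generic sketch, not the library's exact code:

def run_command(pool, node, do_work):
    # Bind the name before the try block and only release it if it was
    # actually acquired; get_connection_by_node() can raise before the
    # assignment ever happens.
    connection = None
    try:
        connection = pool.get_connection_by_node(node)
        return do_work(connection)
    finally:
        if connection is not None:
            pool.release(connection)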

kamadorueda

comment created time in 22 days

issue comment Grokzen/redis-py-cluster

UnboundLocalError: local variable 'connection' referenced before assignment

@kamadorueda I need to know what version of the library you are running, what redis-server version, and what redis-py version you used. If possible, also share the piece of code that you ran and whether you can reproduce this reliably in some way.

kamadorueda

comment created time in 22 days

issue opened Grokzen/redis-py-cluster

UnboundLocalError: local variable 'connection' referenced before assignment

Hello guys, I've been playing with the library a little, but it seems we have an unbound variable on edge-cases:

  File "/nix/store/fklw44xkyjdw60g7yxyq8baswc6m6p3s-integrates-back-async/site-packages/redis/client.py", line 3050, in hset
    return self.execute_command('HSET', name, *items)
  File "/nix/store/fklw44xkyjdw60g7yxyq8baswc6m6p3s-integrates-back-async/site-packages/rediscluster/client.py", line 551, in execute_command
    return self._execute_command(*args, **kwargs)
  File "/nix/store/fklw44xkyjdw60g7yxyq8baswc6m6p3s-integrates-back-async/site-packages/rediscluster/client.py", line 702, in _execute_command
    self.connection_pool.release(connection)
UnboundLocalError: local variable 'connection' referenced before assignment

It seems that under error conditions the variable is never assigned (depending on the try/except flow):

[screenshot]

created time in 22 days

pull request comment Grokzen/redis-py-cluster

Update connection.py

@noqcks Thank you for this contribution

noqcks

comment created time in 25 days

push event Grokzen/redis-py-cluster

Benji Visser

commit sha 570d7f23faa9bde8031f5e2622e4c01aef3c4f7a

Update connection.py

view details

push time in 25 days

PR opened Grokzen/redis-py-cluster

Update connection.py
+1 -1

0 comment

1 changed file

pr created time in a month

issue opened Grokzen/redis-py-cluster

Logging error message in RedisClusterException

Hi. Today I got a RedisClusterException:

[screenshot]

The problem is that the error handling for ResponseError loses the original error message:

[screenshot]

It would be easier to debug my problem if the original error message (e.__str__()) were logged.
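
What the report asks for, roughly, is to carry the original message along when wrapping the error. An illustrative sketch with a stand-in exception class, not the library's current code:

class RedisClusterException(Exception):
    # Stand-in for rediscluster's exception type, for illustration only.
    pass


def wrap_response_error(e):
    # Keep the original ResponseError text instead of discarding it, so it
    # shows up in logs and tracebacks.
    raise RedisClusterException(
        "cluster request failed; original error: {0}".format(str(e))
    ) from e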

created time in a month

pull request comment Grokzen/redis-py-cluster

fix: possible connection leak in pipeline when max_connections is reached

@Grokzen Thanks for the comment

> Is this the same behavior that redis-py is using?

I don't think they are the same. redis-py's pipeline keeps the used connection in self.connection and releases it in reset() (called in a finally block): https://github.com/andymccurdy/redis-py/blob/master/redis/client.py#L4150

For redis-py-cluster, connections are released after reads/writes. If an error occurs while making new connections (e.g. "Too many connections"), the connections acquired so far are considered used and will never be released: https://github.com/Grokzen/redis-py-cluster/blob/master/rediscluster/pipeline.py#L233 I noticed the comments after the reads/writes about why connections are not released in a finally block, and I agree with that. However, if the error occurs before any operation is performed on these connections, it should be safe to release them.

> it might not be that good to release connections to other server/nodes in your cluster

Actually, I am trying to release those connections. I think they should be released because we ran into an error while making new connections, so they have not been used and will not be used in this pipeline execution. If they are not released, connections to other nodes are treated as "used", and we would never be able to use them again. And if max_connections is set and enough connections go un-released, we would not be able to make any new connections with this cluster object at all. For some micro-service setups, there is only one cluster object per process.

Neon4o4

comment created time in a month

pull request comment Grokzen/redis-py-cluster

fix: possible connection leak in pipeline when max_connections is reached

@Neon4o4 Is this the same behavior that redis-py uses? Also, have you considered that it might not be a good idea to release connections to other servers/nodes in your cluster? It seems like you are doing that as well, not only releasing connections to a specific node.

Neon4o4

comment created time in a month

PR opened Grokzen/redis-py-cluster

fix: possible connection leak in pipeline when max_connections is reached

When max_connections is reached, connection_pool.get_connection_by_node throws an error ("Too many connections"). However, the other connections allocated before the error are clean and should be returned to the connection pool.
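
A generic sketch of that idea, not the project's actual pipeline code: keep track of the connections acquired so far and hand them back to the pool if acquiring the next one fails:

def acquire_node_connections(pool, nodes):
    # If get_connection_by_node() fails partway through (e.g. with
    # "Too many connections"), release the already-acquired, still-clean
    # connections instead of leaking them as "used".
    acquired = []
    try:
        for node in nodes:
            acquired.append(pool.get_connection_by_node(node))
        return acquired
    except Exception:
        for connection in acquired:
            pool.release(connection)
        raise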

+7 -1

0 comment

1 changed file

pr created time in a month
