Wednesday, January 29, 2014

Yet another post on Redis

While working on a project, we used Redis as a queue via python-rq. Running redis-cli, I used the following commands -

  • KEYS *
  • TYPE <key name>
  • and then, depending on the type (hash, list, etc.), I would query the data accordingly
Some keys were quite easy to understand -
  • rq:workers
  • rq:queue:failed
  • rq:queue:default
  • and a similar one for successful jobs as well
But apart from these, there were several entries named rq:job:<job_id>. After much reading, I found the internal workings documented at http://python-rq.org/contrib/.

It says that whenever a function call gets enqueued, python-rq -
  • pushes the job's id onto the queue (in my case, the default queue)
  • stores the job instance as a hash object (the rq:job:<job_id> entries)
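The enqueue steps above can be sketched in plain Python, with a dict standing in for Redis (no live server involved). The key names mirror the ones seen in redis-cli; the uuid-based job id and the function/args field names are assumptions for illustration, not python-rq's exact schema.

```python
import uuid

# stand-in for Redis: a list for the queue, nested dicts for hashes
store = {"rq:queue:default": []}

def enqueue(func_name, args):
    job_id = str(uuid.uuid4())
    # 1. store the job instance as a hash object: rq:job:<job_id>
    store["rq:job:" + job_id] = {
        "func": func_name,
        "args": repr(args),
        "status": "queued",
    }
    # 2. push the job's id onto the queue list: rq:queue:default
    store["rq:queue:default"].append(job_id)
    return job_id

job_id = enqueue("count_words", ("http://nvie.com",))
print(store["rq:queue:default"])
print(store["rq:job:" + job_id]["status"])
```

This is why KEYS * shows one rq:job:<job_id> hash per enqueued call alongside the queue list itself.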
So, when a dequeue happens, it -
  • pops a job id from the queue
  • fetches the job data
  • executes the function and, on success, saves the result on the job's hash key
  • else saves the job in the failed queue along with the stack trace
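The dequeue side can be sketched the same way, again with plain Python structures instead of a live Redis server. The function registry and the field names here are illustrative assumptions, not python-rq's actual internals.

```python
import traceback

# stand-in for Redis, pre-loaded with one queued job
store = {
    "rq:queue:default": ["job-1"],
    "rq:job:job-1": {"func": "count_words", "args": ("hello world",)},
    "rq:queue:failed": [],
}

# hypothetical registry mapping stored function names to callables
FUNCS = {"count_words": lambda text: len(text.split())}

def dequeue():
    # 1. pop a job id from the queue
    job_id = store["rq:queue:default"].pop(0)
    # 2. fetch the job data from its hash
    job = store["rq:job:" + job_id]
    try:
        # 3. execute the function and save the result on the hash
        job["result"] = FUNCS[job["func"]](*job["args"])
        job["status"] = "finished"
    except Exception:
        # 4. on failure, record the stack trace and move to the failed queue
        job["exc_info"] = traceback.format_exc()
        job["status"] = "failed"
        store["rq:queue:failed"].append(job_id)
    return job

job = dequeue()
print(job["status"], job["result"])  # finished 2
```

The failed branch is what populates rq:queue:failed, which is why that key shows up in redis-cli once any job raises.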
All of this is documented on the python-rq site.

There were two kinds of errors I saw -
  • RuntimeError: maximum recursion depth exceeded while calling a Python object - this happened in queue.py of the python-rq module. I think it was caused when control exceeded the maximum recursion limit after it didn't find the job hashes discussed above during dequeue.
  • Socket closed on remote end - the server closes idle client connections after 300 seconds. In my case I didn't want that, so I let connections stay open forever by changing the timeout value to 0 in /etc/redis/redis.conf.
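For reference, the timeout change above is a single directive in /etc/redis/redis.conf (a value of 0 disables the idle-client timeout entirely):

```
# /etc/redis/redis.conf
# close a connection after a client is idle for N seconds (0 = never)
timeout 0
```

Restarting the Redis server (or using CONFIG SET timeout 0 at runtime) applies the change.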
Go Redis!!
