
[QUESTION] Redis is not using the right redis database number from the broker url #8842

Open · 6 of 18 tasks

rvillablanca opened this issue Feb 12, 2024 · 1 comment
rvillablanca commented Feb 12, 2024

Checklist

  • I have verified that the issue exists against the main branch of Celery.
  • This has already been asked to the discussions forum first.
  • I have read the relevant section in the
    contribution guide
    on reporting bugs.
  • I have checked the issues list
    for similar or identical bug reports.
  • I have checked the pull requests list
    for existing proposed fixes.
  • I have checked the commit log
    to find out if the bug was already fixed in the main branch.
  • I have included all related issues and possible duplicate issues
    in this issue (If there are none, check this box anyway).

Mandatory Debugging Information

  • I have included the output of celery -A proj report in the issue.
    (if you are not able to do this, then at least specify the Celery
    version affected).
  • I have verified that the issue exists against the main branch of Celery.
  • I have included the contents of pip freeze in the issue.
  • I have included all the versions of all the external dependencies required
    to reproduce this bug.

Optional Debugging Information

  • I have tried reproducing the issue on more than one Python version
    and/or implementation.
  • I have tried reproducing the issue on more than one message broker and/or
    result backend.
  • I have tried reproducing the issue on more than one version of the message
    broker and/or result backend.
  • I have tried reproducing the issue on more than one operating system.
  • I have tried reproducing the issue on more than one workers pool.
  • I have tried reproducing the issue with autoscaling, retries,
    ETA/Countdown & rate limits disabled.
  • I have tried reproducing the issue after downgrading
    and/or upgrading Celery and its dependencies.

Related Issues and Possible Duplicates

Related Issues

Possible Duplicates

  • None

Environment & Settings

Celery version: 5.3.6

celery report Output:

software -> celery:5.3.6 (emerald-rush) kombu:5.3.5 py:3.10.13
            billiard:4.2.0 redis:4.6.0
platform -> system:Linux arch:64bit, ELF
            kernel version:6.5.0-17-generic imp:CPython
loader   -> celery.loaders.app.AppLoader
settings -> transport:redis results:redis

broker_url: 'redis://redis:6379/5'
result_backend: 'redis'
deprecated_settings: None
broker_transport_options: {
 'global_keyprefix': '********'}

Steps to Reproduce

Required Dependencies

  • Minimal Python Version: N/A or Unknown
  • Minimal Celery Version: N/A or Unknown
  • Minimal Kombu Version: N/A or Unknown
  • Minimal Broker Version: N/A or Unknown
  • Minimal Result Backend Version: N/A or Unknown
  • Minimal OS and/or Kernel Version: N/A or Unknown
  • Minimal Broker Client Version: N/A or Unknown
  • Minimal Result Backend Client Version: N/A or Unknown

Python Packages

pip freeze Output:

amqp==5.2.0
asgiref==3.6.0
asttokens==2.2.1
async-timeout==4.0.3
attrs==22.2.0
backcall==0.2.0
billiard==4.2.0
black==22.12.0
celery==5.3.6
click==8.1.3
click-didyoumean==0.3.0
click-plugins==1.1.1
click-repl==0.3.0
coverage==7.2.1
decorator==5.1.1
Django==4.1.5
django-filter==22.1
django-model-utils==4.3.1
django-redis==5.4.0
djangorestframework==3.14.0
exceptiongroup==1.1.0
executing==1.2.0
factory-boy==3.2.1
Faker==18.4.0
gunicorn==20.1.0
iniconfig==2.0.0
ipdb==0.13.11
ipython==8.9.0
jedi==0.17.2
kombu==5.3.5
Markdown==3.4.1
matplotlib-inline==0.1.6
mypy-extensions==0.4.3
mysqlclient==2.1.1
packaging==23.0
parso==0.7.1
pathspec==0.11.0
pexpect==4.8.0
pickleshare==0.7.5
pip-autoremove==0.10.0
platformdirs==2.6.2
pluggy==1.0.0
prompt-toolkit==3.0.36
ptyprocess==0.7.0
pure-eval==0.2.2
Pygments==2.14.0
PyJWT==2.6.0
pytest==7.2.2
pytest-cov==4.0.0
pytest-django==4.5.2
python-dateutil==2.8.2
python-dotenv==1.0.0
pytz==2022.7.1
redis==4.6.0
ruff==0.0.237
six==1.16.0
sqlparse==0.4.3
stack-data==0.6.2
tomli==2.0.1
traitlets==5.9.0
tzdata==2023.4
vine==5.1.0
wcwidth==0.2.6

Other Dependencies

N/A

Minimally Reproducible Test Case

I don't have a minimal reproducible test case because I haven't been able to reproduce the issue outside of my production environment. Basically, I have a typical, regular Django project to which I added Celery for scheduled tasks. I followed the suggested guide for adding Celery support to a Django project (https://docs.celeryq.dev/en/stable/django/first-steps-with-django.html), using Redis as the broker.

Everything works as expected in terms of scheduling tasks; however, when I try to use a different Redis database number, it does not seem to take effect.

Part of my current configuration is:

import logging
import os

from celery import Celery
from django.conf import settings

logger = logging.getLogger(__name__)

# Set the default Django settings module for the 'celery' program.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "api.settings")

# Compose the broker URL from the REDIS_URL env var plus the DB number.
redis_url = f"{os.getenv('REDIS_URL')}/{settings.REDIS_DB_CELERY}"
print(f"Using redis url: {redis_url}")
app = Celery(
    "api",
    backend="redis",
    broker=redis_url,
)
# Prefix all broker keys (several apps share the same Redis instance).
app.conf.broker_transport_options = {
    "global_keyprefix": f"celery.{settings.COMPANY}"
}

# Load task modules from all registered Django apps.
app.autodiscover_tasks()

That's run using the command:

celery -A api.celery worker --log-level=debug -E -B

Ignore the warnings about running the beat together with the worker; I'm fine with that. Also, removing the -B makes no difference.

I'm running Redis using Nomad and Consul, so the REDIS_URL var is redis://redis.service.consul:6379 and REDIS_DB_CELERY is 5.

However, when I deploy this using Nomad (I don't think this makes a difference, but I haven't been able to reproduce the issue elsewhere), I get the following output:

Using redis url: redis://redis.service.consul:6379/5
 
 -------------- celery@16b53adade1e v5.3.1 (emerald-rush)
--- ***** ----- 
-- ******* ---- Linux-5.15.0-91-generic-x86_64-with-glibc2.36 2024-02-12 03:11:09
- *** --- * --- 
- ** ---------- [config]
- ** ---------- .> app:         api:0x7fb04eebd240
- ** ---------- .> transport:   redis://redis.service.consul:6379//
- ** ---------- .> results:     
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: ON
--- ***** ----- 
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery
                

[tasks]
...

The log above was taken with the original version of Celery, but after upgrading to the latest, the issue remains the same.

I was debugging a bit, and I found that app.conf is an instance of the Settings class (like a dictionary of dictionaries), and it holds the correct value for the broker_url property, which is redis://redis.service.consul:6379/5.

It is as if it somehow can't determine the virtual_host property from that URL. However, when I run exactly the same code on my local machine, it works (funny), so I tried the same thing again using docker-compose to run it in the same environment as in prod, but the result was the same as locally: the transport is assigned correctly (displaying the 5 for the selected Redis database number).
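For reference, this is roughly the check I was doing; a minimal sketch assuming the app object from the config above (app.conf.broker_url, connection_for_write(), as_uri() and virtual_host are existing Celery/kombu APIs):

from api.celery import app  # the app defined in the config above

# The stored setting is correct...
print(app.conf.broker_url)   # redis://redis.service.consul:6379/5

# ...but the resolved connection is what the worker banner reports.
conn = app.connection_for_write()
print(conn.as_uri())         # expected .../5; the failing deployment shows //
print(conn.virtual_host)     # expected '5'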

The Dockerfile I'm using does not contain anything interesting:

FROM python:3.10

RUN apt-get update && apt-get -y install default-libmysqlclient-dev

WORKDIR /usr/src/app

ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .

I tested some of the functions that parse that URL, for example parse_url from kombu, and it parses it correctly (see the check below), so basically I ran out of ideas.
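For example (parse_url here is kombu's kombu.utils.url.parse_url, run against the exact URL from the logs):

from kombu.utils.url import parse_url

parsed = parse_url("redis://redis.service.consul:6379/5")
print(parsed["virtual_host"])  # prints '5', as expected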

Let me know if you need more information. I'm not sure this is actually a bug; it could be that I'm missing something, but locally, both in a pyenv environment and with docker-compose, it runs as expected.

@aayushostwal

#8938 (comment)
