I don't have a reproducible test case because I haven't been able to reproduce it outside my deployment. Basically, I have a typical Django project to which I added Celery for scheduled tasks. I followed the suggested guide for adding Celery support to a Django project (https://docs.celeryq.dev/en/stable/django/first-steps-with-django.html), using Redis as the broker.
Everything works as expected in terms of scheduling tasks; however, when I tried to use a different Redis database number, it seems the setting does not take effect.
Part of my current configuration is:
```python
import os
import logging

from celery import Celery
from django.conf import settings

logger = logging.getLogger(__name__)

# Set the default Django settings module for the 'celery' program.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "api.settings")

redis_url = f"{os.getenv('REDIS_URL')}/{settings.REDIS_DB_CELERY}"
print(f"Using redis url: {redis_url}")

app = Celery(
    "api",
    backend="redis",
    broker=redis_url,
)

app.conf.broker_transport_options = {
    "global_keyprefix": f"celery.{settings.COMPANY}"
}

# Load task modules from all registered Django apps.
app.autodiscover_tasks()
```
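One thing worth flagging about the composition above (this is just a sketch I used while debugging; `compose_redis_url` is my own hypothetical helper, not Celery or project API): the f-string silently swallows problems in `REDIS_URL` that only show up downstream.

```python
import os

# Sketch of a pitfall in how the broker URL is composed above: if
# REDIS_URL is unset, os.getenv() returns None and the f-string silently
# yields "None/5"; a trailing slash in the variable yields a double "//".
# compose_redis_url is a hypothetical helper, not part of Celery.
def compose_redis_url(db: int) -> str:
    base = os.getenv("REDIS_URL")
    if not base:
        raise RuntimeError("REDIS_URL is not set")
    # Normalize a trailing slash so the db number isn't parsed as empty.
    return f"{base.rstrip('/')}/{db}"

os.environ["REDIS_URL"] = "redis://redis.service.consul:6379/"
print(compose_redis_url(5))  # redis://redis.service.consul:6379/5
```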
That's run using the command

```shell
celery -A api.celery worker --log-level=debug -E -B
```

Ignore the warnings about running beat together with the worker; I'm fine with that. Also, removing `-B` makes no difference.
I'm running Redis using Nomad and Consul, so the REDIS_URL variable is `redis://redis.service.consul:6379` and REDIS_DB_CELERY is 5.
However, when I deploy this using Nomad (I don't think Nomad itself makes a difference, but it's the only environment where I can reproduce the issue), I get the following output:
The log above was taken with the original version of Celery, but the issue remains the same after upgrading to the latest.
I debugged a bit and found that `app.conf` is an instance of a Settings class (like a dictionary of dictionaries), and it holds the correct value for the `broker_url` property, which is `redis://redis.service.consul:6379/5`.
It is as if it somehow can't determine the `virtual_host` property from that URL. However, when I run exactly the same code on my local machine (funny), it works. I then tried the same thing using docker-compose to mimic the production environment, but got the same (correct) result: the transport is assigned correctly, displaying the 5 for the selected Redis database number.
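For context, as far as I understand the virtual host is just the path component of the broker URL. A minimal stdlib sketch of that extraction (my own helper, `redis_db_from_url`, not kombu's actual code):

```python
from urllib.parse import urlparse

# Minimal sketch (not kombu's implementation): the Redis database number,
# i.e. the "virtual host", is the URL path with the leading slash removed,
# defaulting to db 0 when the path is empty.
def redis_db_from_url(url: str) -> int:
    path = urlparse(url).path
    return int(path.lstrip("/") or 0)

print(redis_db_from_url("redis://redis.service.consul:6379/5"))  # 5
print(redis_db_from_url("redis://redis.service.consul:6379"))    # 0 (default db)
```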
The Dockerfile I'm using is nothing special:
```dockerfile
FROM python:3.10

RUN apt-get update && apt-get -y install default-libmysqlclient-dev

WORKDIR /usr/src/app

ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt

COPY . .
```
I tested some functions that parse that URL, for example `parse_url` from kombu, and it parses it correctly, so basically I've run out of ideas.
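One more check I ran, in case it helps anyone hitting something similar (this simulates a hypothetical bad injection; it is not confirmed to be the cause here): characters that a Nomad/Consul template can inject invisibly survive the f-string but break URL parsing downstream.

```python
import os

# Hypothetical check for environment-specific failures: quotes, '\r', or
# trailing spaces injected into the variable are invisible in normal logs
# but corrupt the composed broker URL. repr() makes them visible.
os.environ["REDIS_URL"] = "redis://redis.service.consul:6379\r"  # simulated bad injection
raw = os.environ["REDIS_URL"]
print(repr(raw))  # 'redis://redis.service.consul:6379\r'

# Strip whitespace and stray quoting before composing the URL.
cleaned = raw.strip().strip('"').strip("'")
print(repr(cleaned))  # 'redis://redis.service.consul:6379'
```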
Let me know if you need more information. It may not even be a bug; I could be missing something, but I'm not sure what, because locally it runs as expected both in a pyenv environment and with docker-compose.
Checklist

- I have verified that the issue exists against the `main` branch of Celery.
- I have read the relevant section in the contribution guide on reporting bugs.
- I have checked the issues list for similar or identical bug reports.
- I have checked the pull requests list for existing proposed fixes.
- I have checked the commit log to find out if the bug was already fixed in the `main` branch.
- I have included all related issues and possible duplicate issues in this issue (If there are none, check this box anyway).

Mandatory Debugging Information

- I have included the output of `celery -A proj report` in the issue (if you are not able to do this, then at least specify the Celery version affected).
- I have verified that the issue exists against the `main` branch of Celery.
- I have included the contents of `pip freeze` in the issue.
- I have included all the versions of all the external dependencies required to reproduce this bug.

Optional Debugging Information

- I have tried reproducing the issue on more than one Python version and/or implementation.
- I have tried reproducing the issue on more than one message broker and/or result backend.
- I have tried reproducing the issue on more than one version of the message broker and/or result backend.
- I have tried reproducing the issue with retries, ETA/Countdown & rate limits disabled.
- I have tried reproducing the issue after downgrading and/or upgrading Celery and its dependencies.
Related Issues and Possible Duplicates
Related Issues
Possible Duplicates
Environment & Settings
Celery version: 5.3.6
`celery report` Output:

Steps to Reproduce
Required Dependencies
Python Packages
`pip freeze` Output:

Other Dependencies
N/A
Minimally Reproducible Test Case
See the full description at the top of this issue; I have not been able to produce a minimal test case.