When building the containers with docker-compose, the Dockerfile produces no errors and copies the files successfully, but after the images are built, docker-compose runs the command ls -la /app
and the file doesn't exist.
Here is the setup:
docker-compose.yml
version: '3.8'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: ls -la /app
    volumes:
      - ./ecommerce:/app
    ports:
      - "8000:8000"
    depends_on:
      - redis
      - rabbitmq
    environment:
      - DEBUG=${DEBUG}
      - DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY}
      - DJANGO_ALLOWED_HOSTS=${DJANGO_ALLOWED_HOSTS}
      - DEVELOPMENT_MODE=${DEVELOPMENT_MODE}
      - STRIPE_PUBLISHABLE_KEY=${STRIPE_PUBLISHABLE_KEY}
      - STRIPE_SECRET_KEY=${STRIPE_SECRET_KEY}
      - STRIPE_WEBHOOK_SECRET=${STRIPE_WEBHOOK_SECRET}
      - REDIS_HOST=redis
      - CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//
  redis:
    image: redis:alpine
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
Dockerfile
# Use the official Python image from the Docker Hub
FROM python:3.9-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Set the working directory
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libpq-dev \
    libpango1.0-0 \
    libcairo2 \
    libgdk-pixbuf2.0-0 \
    libffi-dev \
    shared-mime-info \
    && rm -rf /var/lib/apt/lists/*
# Install Python dependencies
COPY requirements.txt /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# Copy the entrypoint script into the container
COPY ./entry/entrypoint.sh /app  <--- this runs OK, but the file is not listed in the container
RUN chmod +x /app/entrypoint.sh
# Copy the Django project code into the container
COPY ./ecommerce /app
# Collect static files
RUN python manage.py collectstatic --noinput
1 Answer
Your Compose file has
volumes:
  - ./ecommerce:/app
You should delete this block.
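With the mount removed, the web service would look like this (a sketch; the environment block and the other services are unchanged from the file above):

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: ls -la /app
    ports:
      - "8000:8000"
    depends_on:
      - redis
      - rabbitmq
    environment:
      # ... same as before ...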
The volumes: block overwrites the /app directory in the image with whatever content exists on the host system. In particular, the entrypoint.sh script isn't in the ecommerce directory on your host, so the bind mount hides it inside the container.
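If you want to confirm that the file really is in the image and is only being hidden by the mount, something like the following should show the difference. Note the image tag is an assumption: Compose derives it from your project directory name, so check docker images for the actual tag.

docker compose build web

# List /app directly from the built image, bypassing Compose and the bind mount.
# "myproject-web" is a placeholder tag; substitute whatever `docker images` shows.
docker run --rm myproject-web ls -la /app      # entrypoint.sh appears here

# The same listing through Compose, with the bind mount still in place,
# shows only what exists in ./ecommerce on the host.
docker compose run --rm --no-deps web ls -la /app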
That is: mounting local content over your image's code reintroduces the "works on my machine" problem that Docker usually tries to avoid. If you happen to have the source code checked out locally this will work, but you are then never running what's actually built into the image when you go to deploy it.
If you delete this volume mount then you will be running what's actually built into the image, which is generally a more reproducible setup. You can do day-to-day development with host-based tools (your macOS or Linux host likely has Python preinstalled) and use the full container setup for integration testing and deployments.
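For the host-based day-to-day loop, a minimal sketch, assuming a standard Django layout with manage.py at the root of ./ecommerce (as the Dockerfile implies). Two caveats: REDIS_HOST and CELERY_BROKER_URL would need to point at localhost rather than the Compose service names, and the redis service would need a published port (the current file doesn't expose one).

# One-time setup: a virtualenv with the same requirements the image installs.
python3 -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt

# Backing services can still come from Compose (add e.g. "6379:6379" to the
# redis service's ports so the host can reach it).
docker compose up -d redis rabbitmq

# Run the dev server against the host checkout.
cd ecommerce
python manage.py runserver 0.0.0.0:8000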