A story on how to set up Django as cheaply as possible.
NB: Still work in progress.
This story will show how to create a Django application and deploy it to a hosting provider as cheaply as I could possibly do it.
Prerequisites
It will not be free, but my aim is to have a working solution that costs less than 10 USD per month.
This will not be a complete zero-to-hero tutorial; rather, I'll assume that you know enough basic Django to be able to apply what is shown yourself.
Overview
Local setup
- Create Django
- Dockerize Django
- Postgres
- NGINX and gunicorn
Github and github actions
- Github repo
- Github actions to test
Hosting provider
- Dokku
- Push to deploy
- Local and dev/prod specific files
Database
- ElephantSQL
- Update settings
Workers (RabbitMQ)
- RabbitMQ instance locally
- Worker locally
- RabbitMQ hosted
- Workers hosted
Local Setup
Create local Django application
The first step is to set up a working local Django application. We will set up a virtual environment to hold our Python and Django dependencies.
$ virtualenv venv -p python3
created virtual environment CPython3.10.4.final.0-64 in 257ms
creator CPython3Posix(dest=/django_on_the_cheap/venv, clear=False, no_vcs_ignore=False, global=False)
seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/username/.local/share/virtualenv)
added seed packages: pip==22.0.4, setuptools==62.1.0, wheel==0.37.1
activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator
$ source venv/bin/activate
(venv) /django_on_the_cheap$ python --version
Python 3.10.4
(venv) /django_on_the_cheap$
In this case we are on Python 3.10, as I'm using Ubuntu 22.04. Let's also install Django while we are at it:
$ pip install django
Collecting django
Using cached Django-4.0.5-py3-none-any.whl (8.0 MB)
Collecting sqlparse>=0.2.2
Using cached sqlparse-0.4.2-py3-none-any.whl (42 kB)
Collecting asgiref<4,>=3.4.1
Using cached asgiref-3.5.2-py3-none-any.whl (22 kB)
Installing collected packages: sqlparse, asgiref, django
Successfully installed asgiref-3.5.2 django-4.0.5 sqlparse-0.4.2
WARNING: You are using pip version 22.0.4; however, version 22.1.2 is available.
You should consider upgrading via the '/home/username/django_on_the_cheap/venv/bin/python -m pip install --upgrade pip' command.
We will ignore the pip upgrade for now as it is not super important.
We initialize a Django project:
$ django-admin startproject conf .
I call the project "conf" because django-admin will create a folder with this name holding files like settings.py. Calling the project "conf" or "config" therefore makes sense and avoids some naming confusion down the line. The final "." means that we skip the extra layer of folders that Django creates by default, which I find makes smaller projects a bit easier to manage.
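For reference, the project layout should now look roughly like this (the venv folder left out):
conf/
    __init__.py
    asgi.py
    settings.py
    urls.py
    wsgi.py
manage.py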
Let's migrate the database and spin up the development server:
$ python manage.py migrate
Operations to perform:
Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
...
...
...
$ python manage.py runserver
Watching for file changes with StatReloader
Performing system checks...
System check identified no issues (0 silenced).
June 05, 2022 - 11:20:55
Django version 4.0.5, using settings 'conf.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
If you go to the localhost URL, you should see the Django index page.
We will finally do a pip freeze in order to generate a requirements file with all our dependencies:
$ pip freeze > requirements.txt
We also need to prepare for using static files. Add this to the settings.py file in the conf folder:
STATICFILES_DIRS = [
    BASE_DIR / "static",
]
STATIC_ROOT = BASE_DIR / "staticfiles"
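If the static folder does not exist yet, create an empty one so the STATICFILES_DIRS entry points at a real directory (otherwise Django's system check will warn about it):
$ mkdir static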
Dockerize the application
Now we want to run our Django app in a Docker container, and we also want to use docker-compose to manage it, since we will add more containers later, like Nginx and a database. Using docker-compose just makes this easier to manage.
Make sure you have docker and docker-compose installed:
$ docker --version
Docker version 20.10.16, build aa7e414
$ docker-compose --version
docker-compose version 1.29.2, build 5becea4c
Create a docker-compose.yml file in the root of your project and add this:
version: '3.7'
services:
  django:
    build:
      dockerfile: ./docker/local/Dockerfile
      context: .
    environment:
      - DEBUG=1
      - PYTHONDONTWRITEBYTECODE=1
      - PYTHONUNBUFFERED=1
      - PORT=8000
    ports:
      - 8000:8000
    volumes:
      - .:/app
    command: python manage.py runserver 0.0.0.0:8000
Then create a folder called docker and, within that folder, another folder called local. In there (in docker/local) create a file called Dockerfile and put this in it:
FROM python:3.10
EXPOSE 8000
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
ENV DEBUG=0
RUN apt-get update && apt-get upgrade -y && apt-get clean
RUN pip install -U pip
# Install pip requirements
COPY . /app/
RUN python -m pip install -r /app/requirements.txt
# collect static files
WORKDIR /app
RUN python manage.py collectstatic --noinput
CMD python manage.py runserver 0.0.0.0:8000
run
$ docker-compose up
and your application should be available on localhost:8000
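If you prefer checking from the terminal, a request like this should come back with a 200 response (the exact headers will differ):
$ curl -I http://localhost:8000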
Postgres database
We don't want to keep using the development SQLite database; instead we'll use a Postgres database. This will also make things easier once we deploy to a production environment in the cloud. We do this in the docker-compose.yml file.
Update the docker-compose.yml file to this:
version: '3.7'
services:
  db:
    image: postgres:12
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=12345
      - POSTGRES_DB=local_db
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  django:
    build:
      dockerfile: ./docker/local/Dockerfile
      context: .
    environment:
      - DEBUG=1
      - PYTHONDONTWRITEBYTECODE=1
      - PYTHONUNBUFFERED=1
      - PORT=8000
    ports:
      - 8000:8000
    volumes:
      - .:/app
volumes:
  postgres_data:
We are adding a postgres service and a volume to persist the data. Rebuild and restart:
$ docker-compose up --build
We now have a running Postgres database instance that we can make Django connect to. Update the DATABASES setting in settings.py to:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'local_db',      # set in docker-compose.yml as POSTGRES_DB
        'USER': 'postgres',      # set in docker-compose.yml as POSTGRES_USER
        'PASSWORD': '12345',     # set in docker-compose.yml as POSTGRES_PASSWORD
        'HOST': 'db',            # the service name in docker-compose.yml
        'PORT': 5432,            # the default postgres port
    }
}
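Hardcoding credentials is fine locally, but it will get in our way later. As a sketch (not part of the original setup), the same settings could be read from environment variables instead, assuming you also pass the POSTGRES_* variables to the django service in docker-compose.yml:
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        # defaults mirror the values set in docker-compose.yml
        'NAME': os.environ.get('POSTGRES_DB', 'local_db'),
        'USER': os.environ.get('POSTGRES_USER', 'postgres'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD', '12345'),
        'HOST': os.environ.get('POSTGRES_HOST', 'db'),    # POSTGRES_HOST/PORT are my own variable names
        'PORT': os.environ.get('POSTGRES_PORT', '5432'),
    }
}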
We also need to add the Postgres driver to our dependencies. Do this by adding psycopg2-binary to the requirements.txt file and run docker-compose up --build again.
Now, however, we are getting this output:
django_1 | You have 18 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
django_1 | Run 'python manage.py migrate' to apply them.
Since we are using a new database, it hasn't been migrated yet. For now, we do this manually by logging in to the django container and running migrate.
$ docker-compose run django bash
Creating django_on_the_cheap_django_run ... done
root@49efabfd6d54:/app# python manage.py migrate
Operations to perform:
Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
...
...
...
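Logging in to run migrate gets old quickly. One option, shown here only as a sketch and not part of the original setup, is to run migrations just before the server starts by overriding the command for the django service in docker-compose.yml (this takes precedence over the Dockerfile CMD):
    command: bash -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"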
If you try to go to localhost:8000 now, you may first get a browser warning relating to missing SSL (https), and second, if you bypass this warning, an error like:
Invalid HTTP_HOST header: '0.0.0.0:8000'. You may need to add '0.0.0.0' to ALLOWED_HOSTS.
So we will update ALLOWED_HOSTS in settings.py to include localhost and 0.0.0.0:
ALLOWED_HOSTS = ["localhost", "0.0.0.0"]
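Later, when we deploy, this list will also need the server's address, so it can be handy to make it configurable. A small sketch (the DJANGO_ALLOWED_HOSTS variable name is my own, not something Django defines):
import os
# space-separated list, e.g. "localhost 0.0.0.0 example.com"
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS", "localhost 0.0.0.0").split(" ")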
Now you should be able to successfully load the Django index page.
Gunicorn
So far we have used the development server that comes with Django. We can't use this in production, so we want something else, in this case Gunicorn. Let's first add it to our requirements and rebuild our images.
requirements.txt
asgiref==3.5.2
Django==4.0.5
sqlparse==0.4.2
psycopg2-binary
gunicorn
and run docker-compose up --build
Update the Dockerfile to use gunicorn instead of the development server:
WORKDIR /app
RUN python manage.py collectstatic --noinput
CMD gunicorn conf.wsgi --bind 0.0.0.0:8000
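Gunicorn starts a single worker process by default, which is fine for now. Once there is real traffic you will probably want a few more, for example (the worker count here is just a rough starting point, not something this setup requires):
CMD gunicorn conf.wsgi --bind 0.0.0.0:8000 --workers 3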
Nginx
Now, if you try to go to http://0.0.0.0:8000/admin, you will notice that the static assets (CSS, JS) are missing. We need nginx set up to serve these. Let's add nginx as a service to our docker-compose.yml file.
  nginx:
    build:
      dockerfile: ./docker/local/nginx/Dockerfile
      context: .
    volumes:
      - static_volume:/app/staticfiles
    ports:
      - 8000:80
    depends_on:
      - django
    networks:
      - nginx_network
We are going to make a few changes to both our Dockerfile and docker-compose.yml in order to setup nginx. The full docker-compose.yml should look like:
version: '3.7'
services:
  db:
    image: postgres:12
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=12345
      - POSTGRES_DB=local_db
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    networks:
      - nginx_network
  nginx:
    build:
      dockerfile: ./docker/local/nginx/Dockerfile
      context: .
    volumes:
      - static_volume:/app/staticfiles
    ports:
      - 8000:80
    depends_on:
      - django
    networks:
      - nginx_network
  django:
    build:
      dockerfile: ./docker/local/Dockerfile
      context: .
    environment:
      - DEBUG=1
      - PYTHONDONTWRITEBYTECODE=1
      - PYTHONUNBUFFERED=1
      - PORT=8000
    expose:
      - 8000
    volumes:
      - .:/app
      - static_volume:/app/staticfiles
    depends_on:
      - db
    networks:
      - nginx_network
volumes:
  postgres_data:
  static_volume:
networks:
  nginx_network:
    driver: bridge
First: We are adding the nginx service.
Second: We are creating a static_volume volume to hold static assets that should be served by nginx.
Third: We are explicitly adding a network, nginx_network (this is not strictly necessary, but once a service lists explicit networks it is no longer on the default network, so all three services join nginx_network so they can still reach each other).
We will also add an nginx config file, which we copy into the nginx Docker container. In the docker/local folder, make a subfolder called "nginx"; in here we will have two files: Dockerfile and nginx.conf.
The Dockerfile should look like:
FROM nginx:1.21-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY ./docker/local/nginx/nginx.conf /etc/nginx/conf.d
and the nginx.conf:
upstream django_on_the_cheap {
    server django:8000;
}

server {
    listen 80 default_server;
    server_name _;

    location / {
        proxy_pass http://django_on_the_cheap;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /app/staticfiles/;
    }
}
We will make a minor update to the docker/local/Dockerfile as well, just to clean up a bit:
FROM python:3.10
ENV HOME=/
ENV APP_HOME=/app
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
ENV DEBUG=0
RUN apt-get update && apt-get upgrade -y && apt-get clean
RUN pip install -U pip
# Install pip requirements
COPY . $APP_HOME/
RUN python -m pip install -r $APP_HOME/requirements.txt
WORKDIR $APP_HOME/
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "conf.wsgi:application"]
Now, do a docker-compose up --build
and your website should load, including the static assets.
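If the admin styles are still missing, note that this version of the Dockerfile no longer runs collectstatic at build time, so the static_volume may still be empty. You can populate it manually from the django container, for example:
$ docker-compose run django python manage.py collectstatic --noinput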
Local setup conclusions
We now have a fully working Django setup running on production-like infrastructure.
Github and github actions
Github repo
Let's make our folder into a git repository and make a commit.
First, we create a .gitignore file in our root folder to ensure we don't commit stuff we don't want to:
venv/
staticfiles/
*.pyc
__pycache__/
db.sqlite3
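While we are at it, the Dockerfile copies the whole project into the image, so a .dockerignore file along the same lines (my own suggestion, not part of the original setup) keeps the virtualenv and git history out of the build context:
venv/
.git/
__pycache__/
*.pyc
db.sqlite3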
now do:
$ git init
$ git status
On branch master
No commits yet
Untracked files:
(use "git add <file>..." to include in what will be committed)
.gitignore
conf/
docker-compose.yml
docker/
manage.py
requirements.txt
nothing added to commit but untracked files present (use "git add" to track)
All looks well, so let's add and commit:
$ git add .
$ git commit -m "Initial commit with working local docker setup"
[master (root-commit) 0989c45] Initial commit with working local docker setup
12 files changed, 308 insertions(+)
create mode 100644 .gitignore
create mode 100644 conf/__init__.py
create mode 100644 conf/asgi.py
create mode 100644 conf/settings.py
create mode 100644 conf/urls.py
create mode 100644 conf/wsgi.py
create mode 100644 docker-compose.yml
create mode 100644 docker/local/Dockerfile
create mode 100644 docker/local/nginx/Dockerfile
create mode 100644 docker/local/nginx/nginx.conf
create mode 100755 manage.py
create mode 100644 requirements.txt
Make sure you have created an empty Github repository, then add it as a remote and push:
$ git remote add origin git@github.com:perwagner/django_on_a_dime.git
$ git push -u origin master
Enumerating objects: 18, done.
Counting objects: 100% (18/18), done.
Delta compression using up to 8 threads
Compressing objects: 100% (16/16), done.
Writing objects: 100% (18/18), 4.24 KiB | 4.24 MiB/s, done.
Total 18 (delta 1), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (1/1), done.
To github.com:perwagner/django_on_a_dime.git
* [new branch] master -> master
Branch 'master' set up to track remote branch 'master' from 'origin'.
Github action and tests
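This section is still work in progress, but to give an idea of where it is heading, a workflow that runs the Django tests on every push could look roughly like the sketch below. It assumes a file at .github/workflows/tests.yml and that the database settings read POSTGRES_HOST from the environment (as sketched earlier), since the Postgres service is reachable on localhost inside the runner:
name: tests
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:12
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: "12345"
          POSTGRES_DB: local_db
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - run: pip install -r requirements.txt
      - run: python manage.py test
        env:
          POSTGRES_HOST: localhost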
Hosting Provider
Vultr
Create a new user on the VPS:
$ sudo adduser username
$ sudo usermod -aG sudo username
On your local machine, do this to set up your SSH key on the server:
ssh-copy-id username@123.321.123.32
If you get tired of remembering the IP address, you can add this to your hosts file (/etc/hosts):
$ sudo nano /etc/hosts
123.321.123.32 dokku
and now you can log in to your server with:
$ ssh username@dokku
Dokku on VPS
Log in to the server and do a:
$ sudo apt-get update
$ sudo apt-get upgrade
$ wget https://raw.githubusercontent.com/dokku/dokku/v0.30.1/bootstrap.sh
$ sudo DOKKU_TAG=v0.30.1 bash bootstrap.sh
Next, we add our public key from the .ssh folder
$ echo "your-public-key-contents-here" | sudo dokku ssh-keys:add admin
$ dokku domains:set-global 123.321.123.32
Now, log in to the Dokku server and create an app:
$ dokku apps:create myapp
and let's also allow HTTP traffic through the firewall:
$ sudo ufw allow http
In your local repository, you can now add your server as a remote:
$ git remote add production dokku@123.321.123.32:myapp
$ git push production
"production" is the name of the remote. It is important that the username is "dokku", otherwise it won't work.
Set the domain for your app so it will serve on port 80 by default:
$ dokku domains:set myapp 123.321.123.32
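Back on the server you can verify that the app and its domain are wired up, for example with:
$ dokku apps:list
$ dokku domains:report myapp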