How To Use Docker To Make Local Development A Breeze

Docker is a very powerful tool for developing applications that run in the cloud. To get the most out of it, you need to make sure that the way you run your code locally matches how it runs in the cloud as closely as possible.

Today I’m going to show you how to do this, using a simple API server in Python as an example. The code for this episode is available on GitHub:

💡 Get my FREE 7-step guide to help you consistently design great software:


🎓 Sign up to Brilliant now and get 20% off the annual subscription:

Check out the channels of @codeSTACKr and @JackHerrington that I mentioned in the video here:

👍 If you enjoyed this content, give this video a like. If you want to watch more of my upcoming videos, consider subscribing to my channel!

💬 Discord:

👀 Code reviewers:
– Yoriz
– Ryan Laursen
– James Dooley
– Dale Hagglund

🎥 Video edited by Mark Bacskai:

🔖 Chapters:
0:00 Intro
1:04 Explaining the code example
3:22 Running the server locally (without Docker)
5:05 Docker & cloud deployment
6:45 Issues with running code in your local development environment directly
8:01 Building and running a Docker container locally
14:40 Docker-compose introduction
15:54 Docker-compose YAML file example
19:32 Dealing with changes in the data

#arjancodes #softwaredesign #python

DISCLAIMER – The links in this description might be affiliate links. If you purchase a product or service through one of those links, I may receive a small commission. There is no additional charge to you. Thanks for supporting my channel so I can continue to provide you with free content each week!


45 thoughts on “How To Use Docker To Make Local Development A Breeze”
  1. Running the code from your GitHub using the docker-compose command gives me the message: "Will watch for changes in these directories: ['/app/__pycache__']" and it does not update the content until I manually restart the server, just like without the docker-compose file and command. What gives?

  2. Great video! Thank you a lot.
    Perhaps, you could do a video talking about CI/CD. I think it is the next level from here.

  3. 14:25 Many thanks for this detailed explanation. But how does Docker know that it may take the pip installation from cache? It is entirely possible that some of the dependencies have changed and would deserve a fresh installation. I guess you have to explicitly tell Docker to redo the step?
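    A short sketch of the answer: Docker reuses a cached layer only while the instruction and every file it copies are byte-for-byte unchanged, so the usual pattern is to COPY the dependency list by itself before running pip install. The file names below are assumptions for illustration, not necessarily what the video uses.

```dockerfile
# Hypothetical layer-caching sketch; file names are placeholders.
FROM python:3.10-slim

WORKDIR /app

# Copy only the dependency list first. Docker hashes the copied file:
# if requirements.txt is unchanged, this layer AND the pip install
# layer below it come straight from cache.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Source edits invalidate only this layer, so code changes never
# force a reinstall of the dependencies.
COPY . .
```

    So you don't have to tell Docker anything explicitly: editing requirements.txt changes the checksum of that COPY layer, which invalidates it and every later layer, including the pip install. You can also force a full rebuild with `docker build --no-cache`.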

  4. Please give feedback on the downsides of what I'm proposing. How about this:

    1) Use the python image versions and install ssh server.
    2) Create a user, with sudo privileges. Also install other tools you may use: vim, git, zsh, procps, etc.
    3) Upon startup of the new image, mount your host home folder (or dev environment) into the container user's home folder.
    Also, reassign the UID and GID of the container user to match your host user, so we don't have to worry about permissions, and any new files created will also belong to your host machine's user.
    When the container starts up, you run the sshd in foreground.
    This all can be coordinated in a startup script.
    Also, expose all ports necessary. I map ssh 22 of container to 2222 of my host machine.
    4) Use VSCode’s remote ssh extension to actually use the container environment. If your ssh keys are setup correctly, you can ssh without password to the container as the container user, which is equivalent to you since you re-assigned the uid/gid of the container user to your host user.
    5) Your VSCode now can do what you normally do on your host machine such as debugging inside the container. For VSCode, it’s as if you’re running completely inside the container.
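    The steps above could be sketched roughly like this; a hypothetical Dockerfile, with the user name, UID/GID, and package list as placeholders you would match to your own host account.

```dockerfile
# Hypothetical sketch of the ssh-based dev container described above.
FROM python:3.10

# Step 2: ssh server plus the everyday tools.
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
       openssh-server sudo vim git zsh procps \
    && rm -rf /var/lib/apt/lists/*

# Step 3: a user whose UID/GID mirror the host user, so files created
# in mounted folders keep the host user's ownership.
ARG UID=1000
ARG GID=1000
RUN groupadd -g ${GID} dev \
    && useradd -m -u ${UID} -g ${GID} -G sudo -s /bin/bash dev

RUN mkdir -p /run/sshd
EXPOSE 22

# Run sshd in the foreground so it keeps the container alive.
CMD ["/usr/sbin/sshd", "-D"]
```

    You would then run it with something like `docker run -p 2222:22 -v $HOME/dev:/home/dev/dev ...`, copy your public key into the container user's ~/.ssh/authorized_keys, and point VSCode's Remote-SSH extension at localhost:2222 (step 4).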

  5. Will we have another video that deploys this local image to some common cloud service, or better, a VPS? That would be the cherry on the cake.

  6. I've got some questions (for Arjan and/or viewers). Just starting to get into containerization:

    • Does this mean I could skip creating a `venv` for my projects, build it as a Docker "app", and run the built image from a container (which contains all the needed dependencies)? I feel like the building step would be a potential bottleneck, but the caching Arjan showed may help with that

    • On the `docker compose` part of the vid, "volumes" were used, which overrides the copying of files into the `/app` directory. What if I wanted to run the Docker image/container on another machine? Would the files updated since the initial building be available there? Or do I also "package" the volume with the image/container?

    • How much space would Docker images/containers/volumes generally take up on my machine? I'm running a little low on storage now 😄
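    On the volumes question above: a bind mount exists only on the machine where you run the container; it is not packaged into the image. Another machine that pulls the image gets only the files that were COPY'd in at build time. A hypothetical illustration, reusing the video's channel-api image name but with placeholder paths:

```
# With a bind mount, the container sees your live working directory.
# The mount is local to this machine and is NOT part of the image.
docker run -p 8080:80 -v "$(pwd)":/app channel-api

# Without the mount, the container runs whatever was baked into the
# image at build time; that is what every other machine gets, too.
docker run -p 8080:80 channel-api
```

    So to get updated files onto another machine, you rebuild and push the image; for persistent data (as opposed to code), you use a named volume and transfer its contents separately, since volumes are not included in `docker push`.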

  7. Yes! Definitely would want to see a video on CI/CD and deployments in future. Always looking forward to learning from you

  8. So far one of the most detailed videos about Docker. I've seen more than 15 videos about Docker, but none of them mentioned the point you made at 11:00. Please keep posting; for someone like me who likes to learn this stuff, these videos are very informative.

  9. I noticed you're using a Mac; hitting the URL gives me "cannot be reached"… curious how you made it work.

  10. I need to restart the container from Docker Desktop to see file changes reflected in the output. I am using the Docker setup on Windows.

  11. I have a few minor comments on this one, mainly because I work with Docker day-to-day:
    – Use multi-stage builds. This example may not show it, but image size really blows up when you create many layers. Here you could, e.g., use a venv in the first stage of the build (or Poetry, which honestly, as a package manager, blows pip out of the water) and then copy the venv into a second, much lighter image (e.g. Python Alpine).
    – I'd recommend not using the latest tag unless you're just testing how images build. I've run into situations where something changed between, say, Python 3.6 and 3.9 and the image stopped working.
    – docker-compose just to append a volume was overkill, but I agree on the need for docker-compose files; they unify the work within a team. When using Python with Poetry you can also use PoeThePoet to manage simple jobs such as building the Docker image and running docker-compose up when you're making crucial changes to the code itself.
    – I'd look into docker-slim after the two-stage build. Your image is probably ~800-900 MB; a two-stage build with alpine-python or python:slim would take it to ~100-150 MB, and after docker-slim you could be left with at most 50 MB. I know developers don't really care about that, but imagine having 10-15 images of that size; that's 15 GB of disk wasted.
    – And the last thing: I'd recommend building with Kaniko (a tool from Google) instead of docker build. It's faster, produces slightly lighter images, and has better caching, at least in my experience.
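    For reference, a two-stage build along the lines of the first point might look like this. A hypothetical sketch: file names and tags are placeholders, and both stages deliberately use glibc-based images, since a venv built on a regular Debian-based Python image will not run on Alpine (musl).

```dockerfile
# Stage 1: build the virtualenv with full build tooling available.
FROM python:3.10 AS builder
WORKDIR /app
COPY requirements.txt .
RUN python -m venv /opt/venv \
    && /opt/venv/bin/pip install --no-cache-dir -r requirements.txt

# Stage 2: copy only the finished venv into a much smaller runtime image.
FROM python:3.10-slim
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
WORKDIR /app
COPY . .
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
```

    The build tools, pip cache, and intermediate layers stay behind in the builder stage; only the venv and the application code end up in the final image.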

  12. It would be great if you could make a series about best practices for managing dev teams or some tips on splitting large software projects into manageable chunks among a dev team.

  13. Arjan, you made me a better python software developer, I love your content! Good job, keep it going! 🚀

  14. Maybe it is just because I am primarily an embedded developer, but to me this seems, practical as it is, like a huge waste of memory and computing power. All the stuff that needs to go on just in order to run something, all the libraries that need to be installed, all the cached steps when building the image – that just doesn't look anywhere near efficient to me.
    Of course I understand that this is not targeted at embedded systems, where "using a library" means compiling the code into the executable image, but even on PCs, which do have the computing power and storage, I think this approach is the primary reason we tend to need ever more memory and CPU power. And I honestly don't like this wastefulness. (Which is the primary reason I kicked Windows out after sticking with Microsoft since DOS 3.3.)

  15. Love this. Very easy to follow, and a simple yet viable example which does not look like everyone else's copy/paste examples that are out there. Someone else mentioned a video on deployment – would like to put in a vote for that as well. I've done Docker and docker-compose for a long time, but the deployment aspect still eludes me. Obviously, there are nuances between cloud providers (Azure/AWS/GCP), but a simple example using one of them would be awesome if you could put it together. Thanks

  16. OK, but when you deploy to the cloud you don't use docker-compose, right? How do you make sure multiple apps stay in sync in the cloud? Say you have a Docker image for the API server and one for the web server providing the client page: how do you do it?

  17. Hey Arjan,
    Your videos are the best. It's well explained and precise. Keep up the good work. Just one request – If possible, please make a video series on MLOps !!!!!

  18. Nice, simple clear instructions. Would be nice to have seen how to debug (with breakpoints) in the container.

  19. Really nice Docker introduction, thank you Arjan! 🙏😀
    I love that you included how to handle automatic reload for changes in the data during development and the caching thingy
    Also I really like your code readability, for example that when you load the data you name it channels_raw, and that you name them channel when they become dataclass Channel Objects.

    More tutorials about Docker and deploying them would be very much appreciated!
    Also hoping for best practices on how to update code or container versions after deployment, especially when there is live database data involved.
    (In my case I've setup Docker containers for Django, PostgreSQL, Celery + Beat, Redis and pgadmin.)

  20. The main benefits of compose appear when you have more than one subsystem, e.g. a DB or cache (like redis) that is also involved in your project. (i.e. very often) Then, your containers can depend upon the DB and refer to 1) volumes, 2) networks, and 3) one another by name, along with some well-known ports, which are possible to expose and map (as desired) via the host node, no matter which port(s) the process in the container thinks it has bound.

    I would suggest a second, follow-up video 🙂 that could go a little deeper into the basics of using Docker in development and beyond. For example, a Dockerfile should ideally include a USER directive so that the service doesn't "run as root" even inside the container, and expose 8080 to map to another port on the host. User management in a container might seem kinda "belt-and-suspenders" before you read about Docker CVEs, but it's really a good practice for peace of mind if nothing else!

    My personal style is also to keep docker-compose for dev only, and my Dockerfile (with build args where relevant, plus targets like `make dev` vs. `make image` in a Makefile) for dev/prod envs.

  21. All these years and a convoluted docker-sync setup is still the closest thing to acceptable performance you can possibly squeeze out of Docker. Throw it in the trash and everything gets a lot easier.

  22. Nice video! But you don't need docker-compose for the reload functionality; you can do the volume mapping from the docker run command and also override uvicorn with --reload.
    Here is the command to run: docker run -p 8080:80 -v $(pwd):/app --rm channel-api uvicorn main:app --host 0.0.0.0 --port 80 --reload

  23. Really great video! The only thing I would like to ask is how you manage the Dockerfile used in the docker-compose config when you have different dependencies for prod, testing, local, etc.
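    One common way to handle this is a single Dockerfile with multiple build targets, selected per environment. A hypothetical sketch; the requirements file names and stage names are assumptions:

```dockerfile
FROM python:3.10-slim AS base
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Extra tooling only for local and testing images.
FROM base AS dev
COPY requirements-dev.txt .
RUN pip install --no-cache-dir -r requirements-dev.txt

# Production image carries only the runtime dependencies.
FROM base AS prod
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
```

    In docker-compose.yml you would then point the build at a stage, e.g. `build: { context: ., target: dev }`, and use `docker build --target prod` for the deployable image.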

  24. Although Docker is powerful and I use it at work, I like it less and less. But one thing I do like is your style of presenting and explaining things!

  25. That was a nice, simple explanation of Docker. I'm fairly new to it myself, so I'm gradually getting up to speed as we use it heavily at work on Cloud Run in GCP. Thanks!

  26. Over in the PHP world, our bosses have a terrible obsession with running totally obsolete versions of PHP. For this, Docker is a godsend!

  27. Thanks for the amazing video 🙂 I am curious what theme you are using in VSCode? It is elegant!

  28. Great video, thank you! The only issue is that when I now switch from my local Python venv development environment to the local Docker env, VS Code asks me for every project to install the development dependencies like black, etc. Could you explain how you integrate your VS Code Python extensions with your Docker workflow? Thanks!

  29. Thanks for your video. It is very interesting, as always. How about a step forward to CI/CD with GitHub Actions? I'm currently working with it, but some details remain unclear to me. Anyway, thanks a lot for your great work, Arjan.

  30. That is not really local development, more like testing whether the image works 🙂 You can create a Docker container with only the Python interpreter you need and use it to run your code.

  31. I've tried using Docker Desktop for development in the past; everything is more difficult. I use PyCharm Pro, and the debug feature and go-to-definition for libraries don't work: you have to set up a remote development environment. I'm on macOS, and Docker volume performance is very poor; when running a big project it takes more than double the time to run the unit tests. Now I just make sure the tests pass in my CI pipeline and accept the risk of incompatibility.

  32. I was avoiding Docker for practically years, but your great example here made me try it out. Thank you! Not only did it work within a day, it forced me to set up a much cleaner base for my (old and "naturally grown") project. This injected a whole lot of new joy, and a lot of possibilities.
