These are my recommendations and practices for Docker deployment and operations. If you think something is wrong, please argue against it. In any case, this is how I take advantage of Docker's features and work around its operational drawbacks.
You don’t necessarily have to play with Dockerfiles. You can also run e.g. the official ubuntu image, tweak the container from its shell, and then commit it to an image. It’s actually easier. You just need to keep track of the command lines somewhere, so you’ll be able to rebuild the image in the future.
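As a minimal sketch of that tweak-and-commit workflow (the container, image, and package names here are just illustrative):

```
# Start an interactive container from the official ubuntu image
docker run -it --name mybuild ubuntu:latest bash

# ...inside the container, tweak as needed, e.g.:
#   apt-get update && apt-get install -y nginx
#   exit

# Commit the tweaked container to a new image
docker commit mybuild myrepo/myimage:v1

# Remove the throwaway build container
docker rm mybuild
```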
Also, when I do build a Docker image using a Dockerfile, I just put all the command lines into a shell script. Why bother with hundreds of `RUN` commands?
```
FROM ubuntu:latest
MAINTAINER user@example.com
COPY upload/ /root/upload/
RUN /root/upload/install.bash
CMD /root/upload/fakeinit.sh
```
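The install script referenced by that `RUN` line isn’t shown in the original; a hypothetical sketch of what it could contain might look like this (the packages and steps are assumptions, not the author’s actual script):

```
#!/bin/bash
# /root/upload/install.bash -- illustrative install script;
# the real packages and setup steps depend on your application.
set -e

export DEBIAN_FRONTEND=noninteractive
apt-get update
apt-get install -y apache2 php libapache2-mod-php

# ...application-specific setup would go here...

# Trim the image a bit before the layer is committed
apt-get clean
rm -rf /var/lib/apt/lists/*
```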
First, you need to think of Docker images and containers in terms of application deployments across several nodes (ideally the applications are cluster-ready). The nicest thing in that situation would be to deploy an entire app with a single image/container, right? That is one way to go. However, it would create quite a few redundant processes: for example, MariaDB and Elasticsearch instances (which I prefer on bare hardware anyway) would be running several times. So if you’re asking which parts of an application you should split into containers, here’s how I decide:
So for example, if you have an application doing LAMP + a specific Java service + ES, I would split it into: one container for the Apache/PHP frontend, one for the Java service, and mutualized MariaDB and Elasticsearch containers shared across deployments.
Therefore, when you deploy the application, you need to make sure the DB and indexes are ready for it on the mutualized MariaDB and Elasticsearch containers.
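A rough sketch of such a deployment, assuming one shared MariaDB and one shared Elasticsearch container per node (all names, passwords, ports, and images below are illustrative, not the author’s actual setup):

```
# Shared (mutualized) services -- started once per node
docker run -d --name mariadb -e MYSQL_ROOT_PASSWORD=secret mariadb:latest
docker run -d --name elasticsearch -p 9200:9200 elasticsearch:latest

# Prepare the DB and index before deploying the application
docker exec mariadb mysql -psecret -e "CREATE DATABASE IF NOT EXISTS myapp;"
curl -XPUT http://localhost:9200/myapp-index

# Application containers, linked to the shared services
docker run -d --name myapp-web --link mariadb --link elasticsearch myrepo/myapp-web:v1
docker run -d --name myapp-java --link mariadb --link elasticsearch myrepo/myapp-java:v1
```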