I recently had a chance to explore what tools my teams could use for documentation. I considered Confluence, which I've used in the past and which my teams currently use for some technical documentation, and I've also come across Read-the-Docs. I like the idea of keeping the documentation under git version control and using pull requests with code reviews to control the edits – a flow my team members are already very familiar with.
One constraint I have is that the documentation must stay internal and can't be public, so I looked for static-file or locally hosted options. After some searching, I settled on MkDocs for now. Read-the-Docs seemed promising, but local hosting is not officially supported since they are focusing on their own cloud hosting offering, and some of the other tools I looked at lacked search functionality.
Creating a Docker Image for MkDocs
I wanted to keep my local dev machine clean, which means not installing MkDocs and its tooling directly (MkDocs is written in Python), so let's use a Docker image. This will also make it easier for other developers on my team to run it on their machines. Here's the Dockerfile:
FROM python:3.8.1-alpine3.11
RUN pip install mkdocs
Yes, it's simple. It will be expanded in Part 2, but it's good enough for now. I'm using a specific tag – 3.8.1-alpine3.11 – for the base image because I already have the alpine 3.11 image locally, so it won't take up extra disk space the way a different base image would.
Let's build the image:
C:\data\my-docs\docker-image>docker build -t dusklight/mkdocs:0.1 .
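As a quick sanity check, you can list the image to confirm the build worked (dusklight/mkdocs and the 0.1 tag are just the name and tag I chose in the build command above):

C:\data\my-docs\docker-image>docker images dusklight/mkdocs

This should show the repository with the 0.1 tag and the image size.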
Running MkDocs through Docker Container
Let's assume that I have a C:\data\my-docs folder which will store the documentation. I want this folder to be accessible from the Docker container that I'll be running from the image created above, and I want MkDocs to serve the documentation from inside the container so that I can access it from the host.
Share the Host Drive for Docker Containers
To make a folder on the host available to containers, open Settings in Docker Desktop for Windows and share the drive:
Start the Docker Container
C:\>docker run -it --rm -p 8888:8000 --mount type=bind,source="C:\data\my-docs",target=/mnt/my-docs dusklight/mkdocs:0.1 /bin/sh
I won't go into the details of the command in this post, but basically it starts the container in interactive mode, publishes the container's port 8000 as port 8888 on the host, and makes the folder available inside the container at /mnt/my-docs.
Note that if you haven't shared the C drive first, you may see the following error:

docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: Drive has not been shared"}.
Creating a Sample Project with MkDocs
After the container starts, let's create a sample MkDocs project:
/ # cd /mnt/my-docs
/ # mkdocs new .
INFO - Writing config file: ./mkdocs.yml
INFO - Writing initial docs: ./docs/index.md
The files are now created in C:\data\my-docs.
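For reference, the new project has a minimal layout – just the config file and a starter page:

my-docs/
    mkdocs.yml
    docs/
        index.md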
Serving Documents with MkDocs Dev Server
MkDocs comes with a dev server that has a live-reloading feature, so we'll use that for now. When the documentation is ready to be published to others, we'll build and publish it with Azure DevOps in Part 2 of this series.
By default, MkDocs binds to 127.0.0.1, so when it runs in the container it's not accessible from the outside even with the right ports published on the docker command line. Open mkdocs.yml and add the following so that it binds to all interfaces:
dev_addr: '0.0.0.0:8000'
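For context, the whole config file is still tiny at this point; assuming the default site_name that mkdocs new generates, it looks roughly like this:

site_name: My Docs
dev_addr: '0.0.0.0:8000'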
Now let's start the server:
mkdocs serve
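Alternatively, if you'd rather leave mkdocs.yml untouched, the dev server also accepts the address on the command line with the -a (--dev-addr) option:

mkdocs serve -a 0.0.0.0:8000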
From the host machine, browse to http://localhost:8888/ and MkDocs should come up.
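To try the live reload, append a line to the starter page from the host (the text is arbitrary):

C:\data\my-docs>echo Hello from the MkDocs dev server >> docs\index.md

The dev server should detect the change and rebuild; refresh the browser if the page doesn't update on its own, since file change detection over a shared Windows drive can be slow.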
In Part 2, we'll look at how to use the Docker image in a CI/CD process utilizing Azure DevOps and Azure Container Registry.