I just wanted to record a simple method I used to run the Grav skeletons in Docker using (mostly) the Dockerfile found in the official Docker repo for Grav (GitHub - getgrav/docker-grav: Official Docker Image for Grav).
I am by NO means a Docker expert, nor all that literate with Grav (I've only ever done the bare minimum to get anything working), but I figured this may help someone else who just wants what I believe is the easiest but still fairly "professional" (if such a word can describe my home server) setup to start with: A) spin up a server via Docker and B) start from a skeleton. It should work with most skeletons.
I made only three changes to the Dockerfile in the official repo:
- Installed wget, as I'm more familiar with it than curl.
- Replaced the curl command with a wget command that retrieves the .zip file normally found on the skeleton's GitHub repo (it can also live elsewhere).
- Modified the unzip command to output to the expected directory, since the zips I found tended to contain the Grav install directly (i.e. unzipping put assets/, bin/, etc. in the current directory).
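The three changes above look roughly like this in the Dockerfile (a sketch only, not my exact file; the skeleton URL is a placeholder, and the paths assume the Debian-based official image):

```dockerfile
# Install wget alongside unzip (I prefer wget over the curl the original uses)
RUN apt-get update && apt-get install -y wget unzip

# Fetch the skeleton zip instead of the plain Grav core
# (placeholder URL -- point this at the skeleton release you actually want)
RUN wget -O /tmp/skeleton.zip https://example.com/path/to/skeleton.zip

# The skeleton zips I found put assets/, bin/, etc. at the top level,
# so unzip straight into the web root instead of letting a subfolder be created
RUN unzip /tmp/skeleton.zip -d /var/www/html && rm /tmp/skeleton.zip
```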
Boom. Now all it needed to run was:
docker-compose up -d
A copy of my Dockerfile (using the “Sora Article” skeleton) can be found here:
Tested this on my Raspberry Pi and my x64 desktop with no noticeable issues.
Yes, I started with the same Docker repo and still use it in all my Grav website projects today. Running Grav in a Docker container is beautiful in many ways: it isolates the dependencies from other projects on the local machine, and at the same time your collaborators can run the website on their machines in exactly the same environment.
I'm actually developing Grav websites in a multistage environment (localhost, review, staging, production) with continuous delivery to a cloud service. Feel free to reach out if you ever plan to take the next step in DevOps. I'll gladly help you get on board.
Just a short note: You should be able to run your environment with just:
docker-compose up -d
docker-compose.yml is the default file docker-compose looks for, so you shouldn't have to force it with -f.
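For reference, a minimal docker-compose.yml for a setup like this might look as follows (the service name and port mapping are illustrative, not anyone's actual file):

```yaml
services:
  grav:
    build: .            # uses the modified Dockerfile in the current directory
    ports:
      - "8080:80"       # Grav reachable at http://localhost:8080
    restart: unless-stopped
```

With that file sitting next to the Dockerfile, a plain `docker-compose up -d` picks it up automatically.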
Have a great day and enjoy your Docker experience!
Quite true! Thanks for the tidbit; I'll update accordingly.
I'm more of an embedded systems guy, the opposite end of the spectrum from Docker/Grav, so I just stumble along on anything with too much OS on top of it (and sometimes in embedded too!), even though I try to self-host almost everything I can, haha.
I just noticed your post here and I'm reaching out to hear more about your setup. I also use Docker in a three-stage environment setup and find myself automating more and more each time, although I know very little about formal DevOps.
Reply when you can (no rush) wherever you prefer or in a PM.
I can confirm Docker is a very useful platform for developing Grav sites; I use it all the time.
For anyone interested, I have modified the official Grav Dockerfile to also include Xdebug, which is useful for debugging, especially when developing plugins.
It can be found on my GitHub site.
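I haven't seen the exact file, but the usual way to add Xdebug to a PHP-based image like the official Grav one is only a couple of lines (a sketch, assuming the PECL tooling shipped with the official php base images; `host.docker.internal` works on Docker Desktop and may need adjusting on plain Linux):

```dockerfile
# Build and enable the Xdebug extension via PECL
RUN pecl install xdebug && docker-php-ext-enable xdebug

# Minimal Xdebug 3 config: enable step debugging and point it at the host
RUN { \
      echo "xdebug.mode=debug"; \
      echo "xdebug.client_host=host.docker.internal"; \
    } >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
```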
Nice one @hoernerfranz. I use my own Caddy image but I might see if I can add a tag containing xdebug inspired by your code.
A week or so back, I added some content to the official Grav docs about using Docker. It also contains a link to the nginx image I’ve used in the past.
sure, I’m always happy to share, collaborate and help.
I run my Grav projects from GitLab template repositories with a multi-stage CI/CD pipeline deploying to a Hetzner Cloud VM on Linux. I use Traefik as a reverse proxy, which automates the entire process of exposing all environments (served by Apache) to the web.
Since the last post, we have added Feature Flags as a Grav plugin, IP protection for the dev environment, and password protection for the staging environment. It makes for a pretty cool workflow and rolls out the entire multi-stage environment for new Grav projects in only 10 to 15 minutes, which speeds up the development workflow tons.
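To give a flavour of how Traefik handles this, exposing an environment and restricting who can reach it is mostly a matter of container labels (an illustrative sketch, not this setup's actual config; the hostname, middleware name, image name, and IP range are made up, and the label syntax assumes Traefik v2):

```yaml
services:
  grav-staging:
    image: my-grav-image        # hypothetical image name
    labels:
      - "traefik.enable=true"
      # Route staging.example.com to this container
      - "traefik.http.routers.grav-staging.rule=Host(`staging.example.com`)"
      # Restrict the environment to a trusted IP range
      - "traefik.http.middlewares.office-only.ipwhitelist.sourcerange=203.0.113.0/24"
      - "traefik.http.routers.grav-staging.middlewares=office-only"
```

The nice part is that Traefik watches the Docker socket, so a new environment shows up on the web as soon as its container starts, with no proxy config files to edit.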
In case you want to see things live, I can show you the setup and walk you through the features. I’m preparing to make this an Open Source project anyway. Always looking for future contributors.
At the moment we are connecting the same setup to a Google Cloud Platform VM, which means that by changing just one variable you'll be able to deploy to a different cloud platform. We're also trying to perform the same stunt with AWS, Azure, IBM Cloud, etc. The days of vendor lock-in are over.