
Sweet and simple CI/CD with bare git repositories

Implementing continuous deployment is often a massive undertaking, requiring developers to write hard-to-debug YAML files full of security traps, courtesy of GitHub Actions and complex Kubernetes clusters.

But... it doesn't have to be that complex! With just Git and SSH, we can have fast, easy-to-debug deployments that feel like magic.

Git supports pushing changes to a remote server over SSH. On that server, we can configure a shell script hook which runs when new changes are received. That script can do anything we need: deploy our website, run our tests, reject a push on lint failures, and more.

Others have done this before; there's bt's post on the topic (which inspired me), Dave Ceddia's post from 2020, and even a 2014 post on the Hidden Blog!

A Hilbert curve superimposed on top of another with different styles. Not related to the article, but it looks cool, aye?

Prerequisites

To set up Git over SSH with CI/CD, we'll need:

  1. A machine we can SSH into, with Git installed on it,
  2. A local Git repository containing the project we want to deploy, and
  3. A shell script that builds or deploys the project (we'll write one below).

Ready? Let's begin:

Setting up the bare repository

By default, we can't push commits to ordinary Git repositories—the kind that git clone or git init create. Instead, we need a "bare" repository, which we can create with git init --bare or git clone --bare.

For this step, we want to SSH to the machine, then initialize a bare repository like so:

# On the server:
mkdir -p ~/path/to/git/repo.git
cd ~/path/to/git/repo.git
git init --bare  # Alternatively, git clone --bare our-git-repo's-clone-url.git

At this point, we should have a repo.git folder, which contains HEAD, refs, hooks, and a few other folders—the same files stored in the hidden .git directory of any regular repository. This is a bare repository: these files sit out in the open instead of inside a .git folder.
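If we want to double-check, git rev-parse can confirm that the repository is bare. A quick sketch, using a throwaway directory so it can be run anywhere:

```shell
# Create a throwaway bare repository and ask Git whether it is bare
tmp=$(mktemp -d)
git init --bare --quiet "$tmp/repo.git"
git -C "$tmp/repo.git" rev-parse --is-bare-repository  # prints "true"
```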

Updating the local repository to point to the remote

Now that we have an SSH-accessible Git repository, we want to push our commits to it.

If we already have a local clone, the git remote command is our friend:

# On our own machine
cd repo

# If `ssh user@machine` is how we connect to the machine, then:
git remote add deploy user@machine:path/to/git/repo.git
# Now push the changes:
git push deploy

Setting up a checkout of the repository

After we've got the Git history on the remote machine, we want to get the source files "checked-out" somewhere. For this, we can use a regular clone of the repository.

# On the server:
mkdir -p ~/path/to/checkout/
cd ~/path/to/checkout/

git clone ~/path/to/git/repo.git # Yes, we can clone a bare repository locally!
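As an aside, cloning a local bare repository sets the clone's origin remote to the bare repository's path, which is exactly what lets the post-receive hook run a plain git pull later on. A small sketch with throwaway paths:

```shell
# Cloning a local bare repository points the clone's "origin" back at it
tmp=$(mktemp -d)
git init --bare --quiet "$tmp/repo.git"
git clone --quiet "$tmp/repo.git" "$tmp/checkout"  # warns that we cloned an empty repo
git -C "$tmp/checkout" remote get-url origin       # prints the bare repository's path
```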

To avoid storing the history twice, we can instead use git worktree, as will be explored in the Using Git worktrees instead of Git clones section.

Making a deployment

Now that we have all the files of our project somewhere on the SSH-accessible machine, we want to create a shell script for deploying our website/project/etc. This shell script is what we will be running later, and will vary a lot between different projects.

Here are two examples from projects where I used a similar setup:

The first example is for a website built with Eleventy and npm.

#!/bin/bash
# ~/deploy-sitename.sh

# Make sure we exit on error and echo all commands executed
set -euxo pipefail 

cd ~/sitename.build/

# Then, we update all dependencies
npm ci

# Remove stale files from _site, as 11ty doesn't clean those up
[ -e ./_site/ ] && rm -r ./_site/

# Run the Eleventy build
npm run build

# Finally, some magic: use exch to atomically swap the built folder and the served folder with no downtime
exch ./_site/ ~/sitename.dist/
rm -r ./_site/

The second example is from my own website's Git/Docker Compose setup:

#!/bin/bash
# ~/hooks/post-merge

# Update all submodules
git submodule update --init --recursive

# Regenerate missing parts of the .env file (to reduce manual upkeep needed)
(
  source .env
  [ -z "$ZULIP_POSTGRESS_PASS" ] && echo "ZULIP_POSTGRESS_PASS=$(openssl rand -base64 15)" >> .env
  # ...
)

# Finally, trigger docker compose
docker compose up -d --remove-orphans

There are only two requirements for the deployment script we use:

  1. The script should exit once it's done deploying—it should not hang while the project runs.
  2. The script should be able to run multiple times in a row with no bad effects. Ideally, we should be able to interrupt the script at any step and it should still leave the system in a predictable state.
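As a sketch of requirement 2, the deployment script can also guard against overlapping runs with flock (the lock path and the "deploying" step below are placeholders, not part of the original setup):

```shell
#!/bin/bash
# Sketch: a deploy script that refuses to run twice at once (paths are placeholders)
set -euo pipefail

# Hold an exclusive lock for the lifetime of the script; with -n, a second
# concurrent run exits cleanly instead of deploying at the same time.
exec 9>"${TMPDIR:-/tmp}/deploy.lock"
flock -n 9 || { echo "another deployment is already running"; exit 0; }

# Each step below should be safe to repeat after an interruption;
# commands like `npm ci` and `docker compose up -d` already are.
echo "deploying"
```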

If we need to start a long-running service after the script is done, there are some ideas later on in the Restarting long-running services section.

Automating the deployment

By now, we should have:

  1. A bare Git repository we can push to over SSH,
  2. A regular clone of it on the same machine, and
  3. A deployment script.

The final piece of automating the deployment is linking the three together!

We want pushes to the bare git repository to result in the script running and redeploying the project.

To do this, we will add a Git hook to the bare repository which updates the clone. Then, we will add another hook, this time to the clone, which will handle the newly-updated code and deploy it.

The first hook is called a post-receive hook, since it runs after new commits have been received.
To add it, we need to make a script in ~/path/to/bare/repo.git/hooks/post-receive (without an extension).
It should contain something like this:

#!/bin/bash
# ~/path/to/bare/repo.git/hooks/post-receive

set -euxo pipefail # Exit on errors

unset GIT_DIR # Important, otherwise git pull will get confused
(cd ~/path/to/checkout/ && git pull)

# Alternatives to using unset:
# (GIT_DIR=~/path/to/checkout/.git/ GIT_WORK_TREE=~/path/to/checkout/ git pull)
# (cd ~/path/to/checkout/ && env -u GIT_DIR git pull)

Make sure the hook script is executable, e.g. with chmod u+x ~/path/to/bare/repo.git/hooks/post-receive.

Then, we need a hook that will run after the git pull / git merge has completed.
This is the post-merge hook, in the cloned repository (~/path/to/checkout/hooks/post-merge), which will be invoking the deployment script from before:

#!/bin/bash
# ~/path/to/checkout/.git/hooks/post-merge

set -euxo pipefail

# A trick: don't process commits that include e.g. [skip-ci] anywhere in the message
if git show --no-patch | grep -E '\[(skip|no)-(ci|update|build)\]'; then
  exit
fi

# Run the deployment script!
~/path/to/deployment/script.sh

As before, we should make sure the hook is executable with chmod u+x ~/path/to/checkout/.git/hooks/post-merge.

Using the automation

Finally, we need to test the system. For this, we should go back to the local repository, make a change, and push to the new remote.
That's all, just a single git push.
If everything works, the redeployed version will be live once the push completes.

# On our own machine

# Make changes to the repository as normal
git add changed_file.md
git commit

# Now, trigger the system...
git push deploy

Here is how it looks when I push to my git-with-docker-compose VM:

$ git push deploy

Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 8 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 337 bytes | 337.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0), pack-reused 0 (from 0)
# This is from post-receive
remote: Pulling changes...
remote: From /var/local/gitrepo
remote:  * branch            master     -> FETCH_HEAD
remote:    f47ed22..6b9c13f  master     -> origin/master
<snip>
remote: Fast-forward
remote:  config.env | 4 ++--
remote:  docker-gen | 2 +-
remote:  2 files changed, 3 insertions(+), 3 deletions(-)
# This is from post-merge
remote: Updating...
<snip>
remote:  e3e719a953e5 Downloading [==========================================>        ]  12.56MB/14.63MB
<snip>
remote:  dockergen-nginx  Built
remote:  zulip  Built
<snip>
remote:  Container zulip-zulip  Running
remote:  Container dockergen-nginx  Recreated
remote:  Container jitsi-whiteboard  Running
remote:  Container dockergen-nginx  Starting
<snip>
To ssh://bojidar-bg.dev/var/local/gitrepo
   f47ed22..6b9c13f  master -> master

There's a high chance our script doesn't work right away. If we need to tweak it, we can manually trigger the hooks inside the repositories like so:

# On the server:
cd ~/path/to/checkout/
.git/hooks/post-merge

# Or:
cd ~/path/to/git/repo.git
hooks/post-receive  # Note: bare repositories have no .git/ folder

(Note that the hooks must be named exactly post-merge and post-receive; the filenames can't have an extension like post-merge.sh for example.)
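One more wrinkle when triggering post-receive by hand: Git normally feeds the hook one "<old-sha> <new-sha> <ref>" line per updated ref on standard input. The post-receive hook above ignores its stdin, so running it directly is fine, but a hook that parses those lines needs to be fed one. A toy demonstration (the paths and SHAs are made up):

```shell
# A toy post-receive that reads the lines Git would provide on stdin
tmp=$(mktemp -d)
mkdir -p "$tmp/hooks"
cat > "$tmp/hooks/post-receive" <<'EOF'
#!/bin/bash
while read -r oldrev newrev ref; do
  echo "updated $ref"
done
EOF
chmod +x "$tmp/hooks/post-receive"

# When invoking the hook manually, feed it a line like Git would:
echo "0000000 1234567 refs/heads/master" | "$tmp/hooks/post-receive"
# prints: updated refs/heads/master
```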

Once it works, though... Congratulations! 🎉 We now have a very simple, very powerful continuous deployment system! And, to top that off, we have a brand new tool in our IT toolbox! Unix-y tools are awesome!

Variations on the theme

Here are some ideas for how we can improve the setup:

Creating a dedicated SSH user for git

If there are others working on the project, we might want to create a special user for Git access to the machine. Assuming a typical SSH configuration and a relatively-trusted environment (since we are still giving direct SSH access to the machine), we can achieve this as follows:

# On the server, as root
useradd -m git -p '*' # -m creates a home directory; -p '*' allows logging in via SSH, but only with SSH keys (no password)
su - git
# On the server, as the new user
mkdir ~/.ssh
$EDITOR ~/.ssh/authorized_keys # Edit the authorized_keys file, add our local ~/.ssh/id_*.pub
chmod 600 ~/.ssh/authorized_keys
chmod 700 ~/.ssh/

# Back on our own machine, point the remote at the new user:
# git remote set-url deploy git@machine:path/to/git/repo.git

Then, we can run all the steps from before, this time as the new user called git!

If we want, we can check the output of whoami in the hook scripts to confirm we are not pushing to the repository as root by accident:

# Check the current user (sanity-check so the script doesn't enable privilege escalation)
[ "$(whoami)" == "git" ] || exit 1

Further securing the setup (Added: 2025-10-26)

With the SSH-based system we set up so far, any user with access to Git also has shell access to the rest of the server, even if it's just as an unprivileged user. If we trust people we give Git access to, this is not an issue, but otherwise we want to restrict access to the command-line shell.

I asked on Mastodon about ways to secure Git-over-SSH and Richard Levitte suggested the following:

  1. Using git-shell as explained in the Git manual. This is probably the best solution if you have a small team, but want to restrict shell access to the server.
  2. Using gitolite to manage fine-grained access control. This is an enhanced solution for larger teams, when controlling access to individual repositories becomes a problem.

To secure the server from authorized personnel pushing malicious code, our Git CI/CD hooks need to never execute code from the repositories themselves. Instead, we could execute code inside containers.

I will update this article with details if I end up trying either solution.

Hosting the repository on the server

If we want to host the repository on our own SSH-accessible server, without using a Git forge like GitHub, we can do so by modifying our local repository to pull from and push to the SSH machine. For that, we want to switch the remote called origin to point at our server.

git remote set-url origin user@machine:path/to/git/repo.git
git push # no need to mention deploy here

If we want a web interface for the repository, we can use something like cgit.

Using Git worktrees instead of Git clones

A repository created by git clone stores the whole history of the project next to the latest version. But we already have all that history in the bare Git repository, so we don't need to duplicate it!

To avoid this duplication, we can use git worktree to manage a checkout of the repository.

For this, at the Setting up a checkout of the repository step we would use git worktree instead of git clone like so:

# On the server:
cd ~/path/to/git/repo.git
git worktree add ~/path/to/checkout/

Then, for the post-receive hook, we should use git merge instead of git pull (there is no remote to pull from, it's all the same repository):

# Rest of ~/path/to/git/repo.git/hooks/post-receive...

unset GIT_DIR
(cd ~/path/to/checkout/ && git merge main --ff-only)

Finally, for the post-merge hook, we need to configure Git, since git worktree does not create a .git/hooks folder. As explored in this StackOverflow question:

# On the server:
cd ~/path/to/checkout/

git config set extensions.worktreeconfig true # Enable per-worktree configuration
git config set --worktree core.bare false # Don't inherit the bare repository status

hooks="$(git rev-parse --git-dir)/hooks" # Get a unique folder for the hooks
git config --worktree core.hookspath "$hooks" # Update the hooks path to point to it

$EDITOR "$hooks/post-merge" # Finally, create the post-merge hook in $hooks/post-merge

Rebuilding only when source files have changed

If building a part of the project takes a long time, we might want to monitor particular files for changes before redeploying things that depend on them.

The best option for this would be to use a build system, like Make, Tup, or even Turbo to keep track of dependencies for us.

However, Git can detect changed files, with git diff-tree, as explored in this other StackOverflow question. For example, to re-download packages only if package-lock.json has changed, we can modify the post-merge hook:

# Somewhere in our post-merge hook:

if git diff-tree -r --name-only HEAD@{1} HEAD -- package-lock.json | grep ''; then
  npm ci
fi

Here, HEAD@{1} is the commit that the repository was at before the merge, and HEAD is the current commit. We use grep '' to detect non-empty output, which implies that there were changes to the listed files.
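If piping to grep feels fragile, an equivalent check is to let Git report through its exit status with --quiet (which implies --exit-code). Sketched below in a throwaway repository so it runs anywhere; in the real hook, only the if line matters:

```shell
# Demonstrate the exit-status form of the check in a throwaway repository
tmp=$(mktemp -d) && cd "$tmp"
git init --quiet .
git -c user.email=ci@example.com -c user.name=ci commit --quiet --allow-empty -m "one"
echo '{}' > package-lock.json
git add package-lock.json
git -c user.email=ci@example.com -c user.name=ci commit --quiet -m "two"

# --quiet implies --exit-code: exit status 1 when the listed paths differ
if ! git diff --quiet HEAD~1 HEAD -- package-lock.json; then
  echo "package-lock.json changed"  # the real hook would run `npm ci` here
fi
```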

Restarting long-running services

As mentioned, Git hook scripts need to exit once the deployment is done. But sometimes, we have long-running services (web servers, API servers, etc.) that depend on the code in the repository and need to be restarted.

If we are using Podman/Docker/containers to manage services, we can finish our post-merge by recreating or restarting the container with the newly-built image. Docker Compose and similar specifications simplify this by taking care of the build process too!

If we are using systemd to manage services, we can use systemctl restart after installing the latest version. However, if we are running with an unprivileged user that doesn't have access to systemctl, we can again use a file that gets modified together with a Path unit:

# At the end of our post-merge hook
touch /home/git/myservice.restartfile
# In a new unit file, myservice-restart.path
# (a path unit named foo.path activates foo.service by default)
[Path]
PathChanged=/home/git/myservice.restartfile
[Install]
WantedBy=multi-user.target
# In the matching unit file, myservice-restart.service
[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl restart myservice.service

(Alternatively, we can use user services with e.g. systemctl --user restart myservice, which would be simpler.)

Likewise, if we are using monit to manage services, we can use the monit restart command after building a new version. However, if we are running with an unprivileged user, we can instead use a file that gets modified once the build completes like so:

# At the end of our post-merge hook

touch /home/git/myservice.restartfile
# In our monitrc
check file myservice-restart with path /home/git/myservice.restartfile
    if changed timestamp then exec "/bin/env service myservice restart"

Conclusion

Pushing to bare Git repositories and using Git hooks has been one of my favorite recent additions to my programming/IT toolbox. Compared to other CI/CD systems, Git hooks are really fast, since we aren't waiting for workers to become available, containers to spin up, and packages to be redownloaded. And I can always SSH in to the machine and fix any problems as they arise. The developer experience is incredible!

When I use complex post-receive hooks, I'm still surprised by the lack of time limits on how long a hook is allowed to run for. Apart from the practical limits of TCP connections, we can run our build scripts for as long as we need to, and the user will keep receiving the results in the console they triggered git push from. A breath of fresh air compared to the harsh time limits of other CI/CD systems!

Yet, the best part of Git hooks is how they integrate with shell scripting. I already use a ton of shell scripts: for small in-project tasks, for transferring files from Android, for analyzing/transforming copied text, and now: for managing the full deployment process of websites.
The strength of shell scripts, it seems to me, is not just in managing pipelines: it's that shell scripts can be used virtually anywhere to customize how programs work.


This has been my 28th article for #100DaysToOffload.
