<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
		<title>Mike Street's Blog</title>
		<link>https://www.mikestreety.co.uk</link>
		<description>Blog posts from Mike Street (mikestreety.co.uk)</description>
		<language>en-gb</language>
		<pubDate>Wed, 08 Apr 2026 07:32:59 GMT</pubDate>
		<lastBuildDate>Wed, 08 Apr 2026 07:32:59 GMT</lastBuildDate>
		<atom:link href="https://www.mikestreety.co.uk/rss.xml" rel="self" type="application/rss+xml" />
		<image>
			<url>https://www.mikestreety.co.uk/assets/img/favicon-512.png</url>
			<title>Mike Street's Blog</title>
			<link>https://www.mikestreety.co.uk</link>
			<width>144</width>
			<height>144</height>
			<description>Lead Developer and CTO</description>
		</image>
		
		
		<item>
			<title>Checking your websites with the BLAT test</title>
			<link>https://www.mikestreety.co.uk/blog/checking-your-websites-with-the-blat-test/</link>
			<pubDate>Thu, 26 Mar 2026 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/checking-your-websites-with-the-blat-test/</guid>
			<description><![CDATA[
You've been staring at the same project for weeks. Your design eye is shot and the finish line is in sight. You can't see the wood
for the trees.

This is exactly when you need fresh eyes. At Liquid Light, that's what the BLAT test is for.

BLAT is a timeboxed, no-holds-barred review where everyone gets a go at one of our nearly finished websites. Each person gets an
hour to click around, prod things, and use the site as a real person would. The goal is to find holes.

It might be a personal preference. It might be a niggle. It might be the tiniest of nitpicks. Doesn't matter. It goes on the list.

The project manager then reviews the list and decides what to action, postpone, or drop. It's not personal. It's priorities.

Things that tend to come up:

 * Spacing between two particular elements
 * Accessibility of a link in a specific context
 * Unexpected (or missing) behaviour from an interaction
 * Image sizes affecting performance
 * Print styles nobody tested
 * Odd flows between pages
 * Future website improvements or additions

Anything goes, as long as the note includes:

 * A link
 * A description
 * A screenshot where possible

Side note: No, BLAT doesn't actually stand for anything. We all know what it means (although of all the recommendations from
Claude, my favourite was Brutally Look At Things).
<p><strong>Read time:</strong> 1 min</p>
<p><strong>Tags:</strong> General, Testing</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Completely remove DDEV from your computer</title>
			<link>https://www.mikestreety.co.uk/blog/completely-remove-ddev-from-your-computer/</link>
			<pubDate>Fri, 06 Feb 2026 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/completely-remove-ddev-from-your-computer/</guid>
			<description><![CDATA[
I recently ran into an issue with DDEV 1.25 and was upgrading and downgrading between versions to test, check and verify.

Eventually, my DDEV got confused and started producing 404s. With mixed version images & config, I wanted to remove everything and
start again.

With the help of Claude, I created a bash script which will run through and delete every DDEV related configuration.

Completely remove DDEV [https://gist.github.com/mikestreety/07d531b346ab8ce9c62ec655dd4274a4]


STEPS

 1. Copy the contents or download the zip
 2. Make the file executable - cd path/to/file and chmod +x ./remove-ddev.sh
 3. Run the script ./remove-ddev.sh - there are --help and --dry-run flags available
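The steps above can be sketched end-to-end. This uses a stand-in script written to a temporary directory (the real one comes from the gist linked above) purely to show the make-executable-and-run pattern:

```shell
#!/bin/sh
# Stand-in for remove-ddev.sh - the real script comes from the gist above.
# This only demonstrates the chmod-and-run steps on a throwaway file.
dir=$(mktemp -d)
cat > "$dir/remove-ddev.sh" <<'EOF'
#!/bin/sh
if [ "$1" = "--dry-run" ]; then echo "dry run: nothing deleted"; fi
EOF
cd "$dir"
chmod +x ./remove-ddev.sh       # step 2: make the file executable
./remove-ddev.sh --dry-run      # step 3: preview before running for real
```

Running with --dry-run first is a sensible habit for any script that deletes things.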
<p><strong>Read time:</strong> 1 min</p>
<p><strong>Tags:</strong> CLI</p>
			]]></description>
		</item>
		
		
		<item>
			<title>2025 In Review</title>
			<link>https://www.mikestreety.co.uk/blog/2025-in-review/</link>
			<pubDate>Wed, 31 Dec 2025 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/2025-in-review/</guid>
			<description><![CDATA[
Like 2024, 2025 passed without major incident or upheaval. All in all, it was an enjoyable year - seeing some firsts for us all.


LIFE

Plenty of goings-on with the Street family this year. Ruby, our youngest, started school which meant another shift in routines and
schedules. Alfie moved up to Beavers and Ruby started Squirrels, which now means I'm the only one of our family not currently
involved in a scouting section.

Alfie has become a classic "kid" and discovered Minecraft. He'd been talking about it at school and we finally gave in and bought
a copy for the Xbox. I've also enjoyed playing it, trying to actually build things and thinking about layout (although it's still
enjoyable to build a tower of TNT and blow it up).

There were some small home improvements - I redecorated the garden office and painted the "TV corner" of the lounge with matching
bookcases. The biggest change was the demolition of the back garden patio and building of a deck - carried out by my dad and me.

To finish off the year, our car decided to give us a Christmas present of breaking down. We couldn't get it booked in until the
5th Jan, however my mother-in-law lent us her car for the festive period, which meant Christmas was saved.


TRIP AND HOLIDAYS

For our main holiday this year we took the kids to Disneyland Paris. It was a first for me - going on the Channel Tunnel and
driving on foreign soil in my own car. Disneyland was hectic and expensive and fun and chaos. We went with my wife's family which
did mean we had the opportunity to leave the kids with each other and head off to the big rides.

Little trips included taking Ruby to London for the first time, a boat trip out to the Rampion Wind Farm
[https://en.wikipedia.org/wiki/Rampion_Wind_Farm] and a visit to Brooklands Museum in Weybridge.

We also spent a week in a static caravan in the New Forest - it was a classic "caravan park" style holiday, with on-site swimming,
golf and evening entertainment. We sprinkled in some day trips to Paultons Park and a miniature steam railway. We also found a pub
round the corner with an incredible outdoor play area for the kids (while Chilly and I kicked back with a book and a beer).

My attendance at gigs sky-rocketed this year, as I got the taste for it last year. This year saw a few shows with the kids along
with seeing OneRepublic (my first trip to the O2 since it was the Millennium Dome), Nizlopi (a birthday present) and Self Esteem,
and it seems I can't go through a year without seeing Bastille. The last gig of the year was with my mum and family to see
Stereophonics at the O2 again.


STATS ANALYSIS

Visit the stats page [https://www.mikestreety.co.uk/stats/].

Cycling was a big part of this year - I recorded the highest number of miles in a year since I started keeping track. A big part
of this was the turbo trainer I purchased at the end of last year, with just 85 miles separating virtual and real-world bike rides
(and I rode more virtual miles than I did on an eBike)!

I also hit a few big rides this year, doing the London to Brighton bike ride (for the third time), a 65-mile bike ride in August
and 70 miles around the Isle of Wight in October. I was pretty happy that I was able to pull these out of the bag without much
conscious training. It seems cycling to and from work, combined with the turbo, proved to be a pretty effective training plan. I'd
like to hit 4000 miles again in 2026.

My Geocaching suffered this year - only finding 2 at the beginning of the year. I need to get some caching days booked in in 2026
to get those numbers back up. I feel like 75 Geocaches is a good target.

Everything else, such as blog posts, steps and music streams stayed steady.
<p><strong>Read time:</strong> 3 mins</p>
<p><strong>Tags:</strong> General, Ramblings, Annual Review</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Makefile includes and other Makefile tips and tricks</title>
			<link>https://www.mikestreety.co.uk/blog/makefile-includes-and-other-makefile-tips-and-tricks/</link>
			<pubDate>Sun, 28 Dec 2025 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/makefile-includes-and-other-makefile-tips-and-tricks/</guid>
			<description><![CDATA[
We use Makefiles in our repositories to wrangle commands for setting up, pulling and linting our code. Nothing revolutionary, but
they've become pretty essential to our workflow.


MAKEFILE INCLUDES

The thing that initially had me scratching my head was wanting to share commands between sites. A lot of our sites follow the same
patterns and need identical Makefile commands - seemed daft to copy-paste the same stuff everywhere when we could have one central
file doing the heavy lifting.

I spent ages trawling through Stack Overflow and various forums, but here's the thing - because our Makefiles essentially just
contain bash commands (rather than actually "making" anything in the traditional sense), there was loads of conflicting advice
that didn't quite fit our use case. Eventually, I gave up being stubborn and asked AI, which promptly gave me exactly what I
needed:

-include ./path/to/file.mk

Word of warning: That hyphen - before include is doing important work - it prevents Make from throwing a tantrum if the file
happens to be missing (which can happen if it's installed via a dependency that hasn't been pulled yet).

.mk is the recognised file extension for Makefiles that aren't called Makefile - bit of trivia for you there.

When you're including a file with commands, you can overwrite them in your local Makefile if you need project-specific tweaks.
Make will give you a gentle warning about this, but it's just letting you know something's been overridden - nothing to worry
about.
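As a sketch of that override behaviour (a hypothetical hello target written to a temporary directory; requires GNU make):

```shell
#!/bin/sh
# Build a shared .mk and a local Makefile that overrides one of its targets.
# "hello" is a made-up target name, purely for demonstration.
dir=$(mktemp -d)
printf 'hello:\n\t@echo "shared hello"\n' > "$dir/shared.mk"
printf -- '-include ./shared.mk\n\nhello:\n\t@echo "local hello"\n' > "$dir/Makefile"
# Make warns that the recipe is being overridden, then runs the local one
make --no-print-directory -C "$dir" hello 2>&1
```

The last definition wins, and the warning on stderr is the "gentle warning" mentioned above.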


VARIABLES

Your central Makefile might need some paths or other bits that vary between projects. You can handle this much like bash - define
variables without a prefix, use them with one:

SITE_PATH_PRODUCTION := ~/www/current

-include ./path/to/file.mk

Then reference that variable in your shared Makefile:

## Pull the full database from production
db-full-pull-production:
	ssh $(SSH_HOST) \
		"$(SITE_PATH_PRODUCTION)/vendor/bin/typo3 database:export ...

As a bit of a safety net, you can set defaults at the top of your shared makefile too:

SITE_PATH_PRODUCTION ?= ~/www/current

That way things won't explode if someone forgets to set a variable.
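A quick sketch of how ?= behaves, using a throwaway Makefile (GNU make assumed):

```shell
#!/bin/sh
# ?= only assigns when the variable has no value yet, so callers can override it
dir=$(mktemp -d)
printf 'SITE_PATH_PRODUCTION ?= ~/www/current\nshow:\n\t@echo $(SITE_PATH_PRODUCTION)\n' > "$dir/Makefile"
make --no-print-directory -C "$dir" show                                 # default value
make --no-print-directory -C "$dir" show SITE_PATH_PRODUCTION=/custom    # override wins
```

Command-line variables take priority over ?= (and := for that matter), which is exactly what makes it a safe default.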


.PHONY COMMANDS

All our commands are basically bash scripts masquerading as Make targets. This works fine until you accidentally create a folder
that matches one of your command names.

For example, if you've got make config but also have a config/ folder hanging about, running make config will target the folder
instead of your command. Bit annoying when you're expecting it to do something entirely different.

You can tell Make which commands are .PHONY (i.e., they don't correspond to actual files), but since all of ours are just bash
commands in disguise, we take the sledgehammer approach and mark everything as phony:

.PHONY: *
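A minimal sketch of the folder-shadowing problem and the .PHONY fix (made-up config target, throwaway directory, GNU make):

```shell
#!/bin/sh
# A directory named like a target shadows it until the target is marked phony
dir=$(mktemp -d)
mkdir "$dir/config"
printf 'config:\n\t@echo "running config target"\n' > "$dir/Makefile"
make --no-print-directory -C "$dir" config    # reports 'config' is up to date
printf '.PHONY: config\nconfig:\n\t@echo "running config target"\n' > "$dir/Makefile"
make --no-print-directory -C "$dir" config    # now the recipe actually runs
```

Without .PHONY, Make sees the config/ directory, finds no prerequisites newer than it, and decides there's nothing to do.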


HELP BLOCK

Here's something that'll save you from accidentally running the wrong command: by default, running make with no target executes
whatever's first in the file. This could be anything - a rebuild command, a deployment script, something that downloads half the
internet. Not ideal.

We stick a help generator at the top of our Makefiles that serves double duty - it shows available commands and acts as a safety
net:

help:
	@echo "\033[0;33mAvailable targets\033[0m"
	@echo "\033[0;33m-----------------\033[0m"
	@awk '/^[[:alnum:]_-]+:/ { \
		helpMessage = match(lastLine, /^## (.*)/); \
		if (helpMessage) { \
			helpCommand = substr($$1, 0, index($$1, ":")-1); \
			helpMessage = substr(lastLine, RSTART + 3, RLENGTH); \
			printf "%-25s %s\n", helpCommand, helpMessage; \
		} \
	} \
	{ lastLine = $$0 }' $(MAKEFILE_LIST)

Any command with a double hash comment (##) above it gets picked up and displayed in the help. Simple but effective - and it means
accidentally running make just shows you what's available rather than doing something potentially destructive.

Certainly not the most groundbreaking setup, but it's made managing our various projects much less of a faff. If you've got a
different approach or improvements to suggest, I'd love to hear about them.
<p><strong>Read time:</strong> 3 mins</p>
<p><strong>Tags:</strong> CLI</p>
			]]></description>
		</item>
		
		
		<item>
			<title>2025 Quiz of the Year</title>
			<link>https://www.mikestreety.co.uk/blog/2025-quiz-of-the-year/</link>
			<pubDate>Sat, 27 Dec 2025 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/2025-quiz-of-the-year/</guid>
			<description><![CDATA[
This year's quiz of the year [category/quiz/] needs the following:

 * The slides (linked below)
 * The info and notes below
 * Pens and paper for your teams

The quiz can be played in teams or individually - I'll leave it to you to work it out. There doesn't need to be a "quiz master"
per se, just someone who can click "next slide please".


SLIDES

Get the quiz slides [https://docs.google.com/presentation/d/1iHsGoROP03uZz5BUKbMiLn9i4Tk7WISxWbR-ZgkYClY/edit?usp=sharing]

The slides are on Google, however if you need them in a different format, let me know [https://www.mikestreety.co.uk/contact/].


QUIZ INFORMATION

This quiz is 5 rounds with 7 questions in most rounds (except the picture round, which has 11).

Most questions are worth 2 points per correct answer (allowing a single point to be awarded where deserved).

When running the quiz I ask that phones are put away - more for politeness than fear of cheating. I also make it clear that the
answers in the quiz are always right - even if they are not. This way it is fair and should hopefully avoid arguments.


ROUND EXPLANATIONS

1. BATTLENIPS (AND OTHER PARTS)

Identify the grid reference where the specified body part or object is.

2. MUSIC

Fill in the missing lines of the song. Single points can be awarded if the teams are nearly right.

3. FILM

Identify the films featuring my face.

4. PICTURE

I got my 7 & 4 year-olds to draw things from the garden and park. What are they?

5. 2025

7 questions about what happened in 2025.


THE END

Let me know if you use this quiz and how you get on - was it too easy? Too hard? Too complicated?
<p><strong>Read time:</strong> 1 min</p>
<p><strong>Tags:</strong> General, Quiz</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Keeping RustFS clear of old assets</title>
			<link>https://www.mikestreety.co.uk/blog/keeping-rustfs-clear-of-old-assets/</link>
			<pubDate>Thu, 06 Nov 2025 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/keeping-rustfs-clear-of-old-assets/</guid>
			<description><![CDATA[
After having RustFS [https://www.mikestreety.co.uk/blog/setting-up-rustfs-as-an-amazon-s3-replacement/] running as our Gitlab CI
cache [https://www.mikestreety.co.uk/blog/use-minio-to-cache-gitlab-containers-and-runners/] for a few weeks, the server (as
expected) filled up.

Since we're only using RustFS to cache build assets, we can safely bin the old ones without worry. We settled on a 14-day cut-off
- bit arbitrary really, but it works. The worst that can happen is the application won't deploy without them, which means you'll
have to re-run the entire pipeline if you're trying to deploy something that hasn't been built in a fortnight. Not ideal, but
hardly the end of the world.

RustFS is compatible with mc - the MinIO client - so I started looking there, but ended up with a standard Linux command:

find /data/rustfs0/XXX -mindepth 1 -type f -mtime +14 -print0 | xargs -0 -r rm

Note: Make sure you specify the path (XXX in the example above) to your bucket, as RustFS stores configuration in
/data/rustfs0/.rustfs.sys - I ended up deleting our user access by removing files older than 14 days in this folder.
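Before pointing it at a real bucket, it's worth sanity-checking the age filter on a throwaway directory (a sketch, not a real RustFS path; touch -d needs GNU coreutils):

```shell
#!/bin/sh
# Simulate old and new cache files, then delete only those older than 14 days
dir=$(mktemp -d)
touch -d "30 days ago" "$dir/old.cache"
touch "$dir/new.cache"
# -print0/-0 handles odd filenames; -r skips rm entirely when nothing matches
find "$dir" -mindepth 1 -type f -mtime +14 -print0 | xargs -0 -r rm
ls "$dir"
```

Only new.cache should survive - if both files vanish, check your -mtime value before letting cron loose on production.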

Once you have the command and are happy with it, add it to a crontab to run once a night.

To edit the crontab, run crontab -e and place the following at the bottom (this will run at 10pm every evening):

0 22 * * * find /data/rustfs0/gitlab-ci -mindepth 1 -type f -mtime +14 -print0 | xargs -0 -r rm
<p><strong>Read time:</strong> 1 min</p>
<p><strong>Tags:</strong> CLI</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Email authentication records to improve deliverability</title>
			<link>https://www.mikestreety.co.uk/blog/email-authentication-records-to-improve-deliverability/</link>
			<pubDate>Mon, 13 Oct 2025 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/email-authentication-records-to-improve-deliverability/</guid>
			<description><![CDATA[
Sending emails in this mad world of spam is a tricky business. Spoofing and phishing are all too common, and email providers try
to be smart about it, although sometimes to the detriment of honest and "real" emails.

If your website is sending emails at all (even to you for contact form responses), it is worth considering spending time to verify
that you own the domain and you are allowed to send emails from it. Word of warning: this isn't going to be the most thrilling
post, but it's one of those things that'll save you a proper headache down the line when your carefully crafted emails end up in
spam folders.

SPF, DKIM and DMARC records all help with this, and each one is explained below - what it does and how you set it up. I'll be
honest - I found these records a bit bewildering at first, but once you've set them up a few times they become second nature.
Think of them as your email's passport - proving you are who you say you are.


TESTING TOOLS

Before we dive in, bookmark this:

 * mail-tester [https://www.mail-tester.com/] - Send an email to get real-world data (proper useful, this one - gives you a score
   out of 10 and tells you exactly what's wrong)


SPF RECORD

 * SPF Records explained [https://mailtrap.io/blog/spf-records-explained/]
 * SPF Tester [https://mxtoolbox.com/spf.aspx]

SPF (Sender Policy Framework) is basically your domain saying "these are the mail servers allowed to send email on my behalf".
Without it, anyone could pretend to send emails from your domain - which is as dodgy as it sounds.

This is how the most common SPF record looks:

v=spf1 a mx -all


Breaking this down: v=spf1 is the version, a and mx mean your domain's A record and MX records are allowed to send mail, and -all
means "reject anything else". That last bit is important - it's like saying "if it's not on the list, it's not coming in".

As an example, if your client uses Google Workspace, you'll need to add include:_spf.google.com. Same goes for services like
Mailchimp:

v=spf1 a mx include:_spf.google.com include:mailchimpapp.net -all


Most services which send emails on your behalf will have some documentation detailing what SPF Record you need.


DMARC RECORD

 * DMARC wizard [https://dmarcian.com/dmarc-record-wizard/] (weirdly satisfying to use, this one)

DMARC (Domain-based Message Authentication, Reporting and Conformance) tells receiving mail servers what to do if your SPF or DKIM
checks fail. It also sends you reports so you can see if someone's trying to spoof your domain - certainly interesting to see
what's being attempted in the wild.

A good standard is something like the following:

 * Target: _dmarc.@
 * Type: TXT
 * Record:

v=DMARC1; p=quarantine; rua=mailto:email@example.com; aspf=r;


Or if you want to be a bit stricter:

v=DMARC1; p=quarantine; pct=100; aspf=s;


The p=quarantine means "if this looks dodgy, put it in spam rather than rejecting it outright". You can use p=reject if you're
feeling confident, but I'd recommend starting with quarantine until you're sure everything's configured properly.


DKIM RECORD

This can only be configured if the service you are using emits a DKIM signature or similar. Some CMSs, like TYPO3, do not include
a DKIM header, so make sure you know before you start.

DKIM (DomainKeys Identified Mail) adds a digital signature to your emails - like a wax seal on a letter proving it hasn't been
tampered with. Your email service provider will generate the keys for you.

Check with the system for instructions - each provider does it slightly differently and they'll give you the specific DNS records
to add.


BIMI

Right, this one's a bit fancy and optional, but if you want your logo to appear next to your emails in supported clients (Gmail,
Yahoo, etc.), BIMI is what you need.

 * BIMI Generator & Inspector [https://bimigroup.org/bimi-generator/]
 * Use a 512px square SVG for the image (the Favicon SVG is perfect for this)
 * We don't generally have a VMC (Verified Mark Certificate) available - these cost proper money and are only really worth it for
   big brands

Example BIMI:

 * Target: default._bimi.@
 * Type: TXT
 * Record: v=BIMI1; l=https://link/to/svg;


FINAL THOUGHTS

I should really test email deliverability more systematically on projects, but these DNS records are a good foundation. Set them
up early and you'll avoid that awkward conversation later where the client asks why their contact form emails keep ending up in
spam.

If anyone's got experience with VMC certificates for BIMI or has tips on DKIM implementation in TYPO3, I'd love to hear your
thoughts. There's still a lot of nuance to email deliverability that catches me out occasionally.
<p><strong>Read time:</strong> 3 mins</p>
<p><strong>Tags:</strong> DNS, Email</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Setting up RustFS as an Amazon S3 replacement</title>
			<link>https://www.mikestreety.co.uk/blog/setting-up-rustfs-as-an-amazon-s3-replacement/</link>
			<pubDate>Sun, 12 Oct 2025 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/setting-up-rustfs-as-an-amazon-s3-replacement/</guid>
			<description><![CDATA[
I was at TYPO3 Camp London [https://t3cl25.typo3.com/] recently when Martin Helmich casually dropped that RustFS
[https://rustfs.com/en/] was a solid MinIO replacement. That got my attention.

I've written about MinIO before - speeding up Gitlab CI
[https://www.mikestreety.co.uk/blog/how-i-improved-the-speed-of-docker-builds-in-gitlab-ci/] and caching Gitlab assets
[https://www.mikestreety.co.uk/blog/use-minio-to-cache-gitlab-containers-and-runners/] - and it's been great as a self-hosted S3
alternative. But I'm always up for trying new toys, especially when they promise improvements.

Turns out RustFS has been benchmarked against MinIO and is faster across the board
[https://github.com/orgs/rustfs/discussions/598#discussion-8952907]. That was enough to convince me to give it a go.


SERVER SETUP

We're running RustFS on a dedicated Ubuntu server with Hetzner [https://www.hetzner.com/] (our go-to VPS provider). I went with an
Intel CX32:

 * 4 vCPU
 * 8 GB RAM
 * 80 GB Disk

Note: You'll need an external IPv4 address - rustfs.com only supports IPv4 for setup.


INSTALLATION

Once your VPS is up, installation is pleasantly straightforward. First, make sure you've got unzip:

apt update && apt upgrade
apt install zip unzip

Then grab the install script [https://rustfs.com/en/download/?platform=linux]:

curl -O https://rustfs.com/install_rustfs.sh && bash install_rustfs.sh

Follow the CLI prompts and you're sorted.


SETUP

After installation, hit up the web interface at http://[server-ip]:9000. Default credentials are rustfsadmin for both username and
password.

Change that immediately by editing:

/etc/default/rustfs


Once logged in, you can create buckets and extra users or access keys - which is what I've been using for Gitlab CI.
<p><strong>Read time:</strong> 1 min</p>
<p><strong>Tags:</strong> CLI</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Shower thoughts that live in my head rent free</title>
			<link>https://www.mikestreety.co.uk/blog/shower-thoughts-that-live-in-my-head-rent-free/</link>
			<pubDate>Sat, 11 Oct 2025 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/shower-thoughts-that-live-in-my-head-rent-free/</guid>
			<description><![CDATA[
The idea of shower thoughts is to give you things to ponder or wonder about. These are ones I have read on the internet (I take
no credit) and often think about for no reason at all.

> Sleep is one of the few things we pretend to do to actually do it

Close your eyes, lie down and pretend until you actually drift off

> Cleaning your teeth is the only time we clean our skeleton

You don't see a skull with a beard, do you?

> Why do we have round lenses on cameras, but rectangular sensors and photos?

Surely the lens is capturing more of the photo than we ever see?

> Have you ever walked in a "space" of the earth that no human has before?

Doesn't matter what is actually under your feet, but has a human ever been to that lat/long before?
<p><strong>Read time:</strong> 1 min</p>
<p><strong>Tags:</strong> General</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Migrate your GitLab instance to a new domain</title>
			<link>https://www.mikestreety.co.uk/blog/migrate-your-gitlab-instance-to-a-new-domain/</link>
			<pubDate>Fri, 10 Oct 2025 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/migrate-your-gitlab-instance-to-a-new-domain/</guid>
			<description><![CDATA[
We were sunsetting a domain and wanted to migrate our GitLab instance to a different URL.

GitLab doesn't make this easy. There's no big "Change Domain" button, and if you're using package registries, you're in for a
proper adventure and need to get your team on board.


STEPS WE'LL COVER

 1. Add the new domain alongside the old one (30 mins)
 2. Switch the primary domain (30 mins + testing time)
 3. Set up redirects (1 hour)
 4. Clean up (10 mins)
 5. Deal with package registry authentication (varies, but budget a day if you're unlucky)

In the examples below, I'm using:

 * old.gitlab-company.org - the old domain
 * new.gitlab-instance.com - where we're migrating to

Word of warning: if you're using GitLab as a package registry (NPM, Docker, whatever), prepare yourself. This is where most of the
pain lives.


ADD THE SECOND DOMAIN

The first step is to allow GitLab to accept connections from the new domain. This lets you test everything works before you commit
to the switch - you'll be able to access GitLab on both domains simultaneously, which is brilliant for testing.

Once you point the domain record to your GitLab instance, you'll find you can navigate to it and click around - GitLab doesn't try
to redirect you back to the primary domain. You may, however, encounter an SSL error. This can be resolved by adding the secondary
domain to Let's Encrypt and allowing GitLab to generate an SSL certificate for it.

Edit the GitLab config file /etc/gitlab/gitlab.rb and add the following:

letsencrypt['alt_names'] = ['new.gitlab-instance.com', 'registry.new.gitlab-instance.com']

Note: If you have a container registry or any other subdomains, these will need to be added too.

Reconfigure the instance:

gitlab-ctl reconfigure

This will generate the SSL certificates while reconfiguring and will error if there are any subdomains it can't generate
certificates for.

I'd encourage using this new domain for a day or two (we left it longer, because paranoia, but two days is probably fine if you're
braver than us). Navigate around, clone some projects, generally kick the tyres to make sure there are no basic issues.


SWITCH THE INSTANCE DOMAIN

The next step is to change the instance URL. This won't force a redirect but will mean GitLab responds with the new URL for API
and internal requests. You'll still be able to navigate on the old URL, clone projects, and so on.

Edit the GitLab config file /etc/gitlab/gitlab.rb and update the external_url and letsencrypt['alt_names']:

external_url 'https://new.gitlab-instance.com'
registry_external_url 'https://registry.new.gitlab-instance.com'
letsencrypt['alt_names'] = ['old.gitlab-company.org', 'registry.old.gitlab-company.org']

Reconfigure the GitLab instance:

gitlab-ctl reconfigure

I'd advise using this new domain with GitLab for a week or two. It won't redirect you from the old domain, but clone URLs and
other requests will use the new one.


PACKAGE REGISTRIES

This is where you'll begin to see issues. If you use your GitLab instance as a Docker or package registry, you'll need to ensure
you've authenticated with all your package managers using the new domain.

If you use GitLab as your NPM registry, this will be the biggest pain. Every project needs re-authenticating with the new domain,
and npm will absolutely refuse to install packages until you do. It'll update your package-lock.json happily enough, then leave
you staring at authentication errors wondering what you've done to deserve this.
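For reference, the per-project re-authentication largely boils down to pointing .npmrc at the new domain. A sketch using GitLab's documented npm registry paths - @your-scope and YOUR_TOKEN are placeholders for your own values:

```ini
# .npmrc - scope and token are placeholders, domain matches the examples above
@your-scope:registry=https://new.gitlab-instance.com/api/v4/packages/npm/
//new.gitlab-instance.com/api/v4/packages/npm/:_authToken=YOUR_TOKEN
```

Multiply that by every project and every developer machine, and you can see where the time goes.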

If you use something like Renovate [https://docs.renovatebot.com/], this can help with migration, but it takes a lot of planning
(and a lot of head scratching). Weirdly, dealing with this across multiple projects was more time-consuming than the actual GitLab
migration.

Get in touch [https://www.mikestreety.co.uk/contact/] if you need advice with your specific setup - I spent enough time Googling
obscure package registry authentication that I might actually be able to help.


REDIRECT TO THE NEW DOMAIN

With the new instance battle-tested and working, it's time to set up a redirect. You may choose to do this when you switch the
instance domain above, but I left it a week or two in case we needed to fall back to the old domain (which, to be fair, we didn't,
but better safe than sorry).

Edit the GitLab config file /etc/gitlab/gitlab.rb and add the option to allow custom nginx configuration:

nginx['custom_nginx_config'] = 'include /etc/gitlab/nginx-extra.conf;'

Next, create a config file - /etc/gitlab/nginx-extra.conf. I chose not to redirect the registry, but if you need to, there's an
example in Running GitLab simultaneously on two domains [https://robinopletal.com/posts/gitlab-on-two-domains].

Before you reconfigure the GitLab instance, ensure the SSL certificates are in the location specified below:

# web
server {
  listen 443 ssl http2;
  server_name old.gitlab-company.org;
  server_tokens off;

  ssl_certificate /etc/gitlab/ssl/old.gitlab-company.org.crt;
  ssl_certificate_key /etc/gitlab/ssl/old.gitlab-company.org.key;
  ssl_ciphers 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256';
  ssl_protocols TLSv1.2;
  ssl_prefer_server_ciphers on;
  ssl_session_cache builtin:1000 shared:SSL:10m;
  ssl_session_timeout 5m;

  return 301 https://new.gitlab-instance.com$request_uri;
}


Reconfigure the GitLab instance:

gitlab-ctl reconfigure


CLEAN-UP

After some time (we left it a month, but we're cautious like that), you can tidy things up:

 * Delete /etc/gitlab/nginx-extra.conf
 * Remove nginx['custom_nginx_config'] from /etc/gitlab/gitlab.rb
 * Remove any references to the old domain in /etc/gitlab/gitlab.rb
 * Delete any references from your browser history

The whole migration took us about three weeks from start to finish, mostly because we were being cautious and dealing with the
package registry nightmare. If you're not using registries heavily, you could probably knock this out in a few days.

There's still a lot I don't know about GitLab's internals (it's a proper beast of a system), but this process worked well for us.
If you hit any snags or your setup is a bit different, get in touch [https://www.mikestreety.co.uk/contact/] - I might be able to
point you in the right direction.
<p><strong>Read time:</strong> 4 mins</p>
<p><strong>Tags:</strong> Gitlab</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Add a space to your OSX dock for organisation</title>
			<link>https://www.mikestreety.co.uk/blog/add-a-space-to-your-osx-doc-for-organisation/</link>
			<pubDate>Fri, 19 Sep 2025 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/add-a-space-to-your-osx-doc-for-organisation/</guid>
			<description><![CDATA[
With apps in my dock, I like to have them separated by "category" - e.g. browsers, web dev and productivity.

To add the spaces, you can run the following in terminal:

defaults write com.apple.dock persistent-apps -array-add '{"tile-type"="spacer-tile";}'; killall Dock

This adds an "empty" app icon that can be dragged around (and removed) as you see fit.

Screenshot of my dock [/assets/img/content/add-space-to-dock/dock.png]
<p><strong>Read time:</strong> 1 mins</p>
<p><strong>Tags:</strong> General, Apple</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Setting up a new Apple computer for web development</title>
			<link>https://www.mikestreety.co.uk/blog/setting-up-a-new-apple-computer-for-web-development/</link>
			<pubDate>Thu, 18 Sep 2025 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/setting-up-a-new-apple-computer-for-web-development/</guid>
			<description><![CDATA[
I've been fortunate enough to get a new computer for work. Rather than migrate my old one, I always take it as an opportunity to
start afresh. I've honed the apps I work with, so I know what I want, but I use it as an excuse to really question if everything
is the right thing. I also use it as a test-case for running through our documentation for new starters, in case I've missed
anything.

This isn't a "must follow", but I thought I'd share the apps and settings I use, should anyone wish to take inspiration. This
isn't the first time I've done this [https://www.mikestreety.co.uk/blog/setting-up-a-new-apple-computer/], but this post is more
comprehensive than the one from 2022.


SETTINGS

🍎 -> System Settings


LOCK SCREEN

The first thing is to increase the screensaver and screen timeouts - a 2 minute default is far too short.

I set the following:

 * Start Screen Saver when inactive: Never
 * Turn display off on battery when inactive: 30 minutes
 * Turn display off on power adapter when inactive: 30 minutes

I tend to shut/lock my computer out of habit anyway, so if I want it open and on, I want it open and on.


DESKTOP & DOCK

 * Minimised window animation: Scale Effect
 * Automatically hide and show the Dock ✅
 * Show suggested and recent apps in Dock: ❌
 * Show desktop: Only in Stage Manager on Click
 * Hot Corners
   * Bottom left: Show desktop


KEYBOARD -> FUNCTION KEYS

 * Use F1, F2, etc. keys as standard function keys: ✅


ACCESSIBILITY -> ZOOM

Next I enable zooming - it's handy when screen sharing or showing someone in the office. It allows you to hold Control (by
default) and "scroll" up and down to zoom in and out.

 * Use scroll gesture with modifier keys to zoom: ✅
 * Advanced
   * Zoomed image moves: Continuously with pointer
   * Show zoomed image while screen sharing: ✅


UPDATES

Before I do anything else, I make sure the computer is up-to-date with any system updates.

🍎 -> System Settings -> General -> Software Updates


TOUCHID FOR SUDO COMMANDS

Having to enter your password while setting up your computer can be tiresome. Following Nick Taylor's One Tip a Week: TouchID for
sudo commands [https://one-tip-a-week.beehiiv.com/p/one-tip-a-week-touchid-for-sudo-commands], you can use your finger for any
admin commands & settings (if you have TouchID).

Edit the file (you'll need to enter your password one last time):

sudo vi /etc/pam.d/sudo_local


Then add the following (you can always copy /etc/pam.d/sudo_local.template first):

# sudo_local: local config file which survives system update and is included for sudo
# uncomment the following line to enable Touch ID for sudo
auth       sufficient     pam_tid.so



HOMEBREW

Homebrew is the package manager that makes everything better: a central place to install and update your applications.

Install Homebrew [https://brew.sh/].

As an optional extra, I use Cork [https://corkmac.app/] as a GUI for Homebrew - it helps with the maintenance and visualisation of
your installed apps.

If I need to install an app, I tend to lean towards using Homebrew to help with the updates and to keep track of everything
installed.
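If you want the whole install to be repeatable, everything marked 🍺 below can also be listed in a Brewfile and installed in one go. A sketch using a handful of the packages from this post:

```
# Brewfile - a subset of the apps listed below
brew "git"
brew "node"
cask "google-chrome"
cask "firefox"
cask "iterm2"
cask "visual-studio-code"
```

Run brew bundle --file=Brewfile in the same directory to install the lot.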


APPS

I've tried to break apps down by category, but these are the ones I tend to have. Apps which can be installed with Homebrew have a
🍺 emoji next to them & their package name.


PRODUCTIVITY

 * Chrome (🍺 google-chrome)
 * Firefox (🍺 firefox)
 * 1Password (🍺 1password)
 * BitWarden (🍺 bitwarden)
 * ClickUp (🍺 clickup)
 * TickTick (🍺 ticktick)
 * Slack (🍺 slack)
 * Raycast (🍺 raycast)
 * Hyperkey (🍺 hyperkey)
 * Viscosity (🍺 viscosity)
 * Spark (Classic) [https://apps.apple.com/us/app/spark-classic-email-app/id1176895641?mt=12]


DEVELOPMENT

 * Visual Studio Code (VSCode) (🍺 visual-studio-code)
 * Sequel Ace (🍺 sequel-ace)
 * Iterm2 (🍺 iterm2)
 * Git (🍺 git)
 * Composer (🍺 composer)
 * Node (🍺 node)
 * NVM (🍺 nvm)
 * Orbstack (🍺 orbstack docker)
 * DDEV (🍺 ddev/ddev/ddev)
   * mkcert -install


OTHER

 * AppCleaner (🍺 appcleaner)
 * Spotify (🍺 spotify)
 * Claude & Claude code (🍺 claude claude-code)
 * Oh My ZSH [https://ohmyz.sh/]
   * powerlevel10k (🍺 powerlevel10k)
     * Run echo "source $(brew --prefix)/share/powerlevel10k/powerlevel10k.zsh-theme" >>~/.zshrc
     * Then add the following plugins: plugins=(git ssh-agent)


UNINSTALL APPS

Once AppCleaner is installed, I then uninstall:

 * Garageband
 * iMovie


RESTART

Make sure you restart your computer during the setup, as some apps and settings need a clear cache to work effectively.


CONFIGURATION

With the applications set up, it's time to start configuring them.


SSH ACCESS

If you have a previous computer (and access to it), you can copy the ~/.ssh folder across to give you access to all your SSH
destinations (such as GitHub, servers, etc.). If not, you'll need to generate a new key:

ssh-keygen -t ed25519 -C "your_email@example.com"
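If you want to script it, the sketch below generates a key non-interactively into a scratch directory and prints the public half (for the real thing, run the command above as-is and accept the default path):

```shell
tmp="$(mktemp -d)"
# -f sets the output path, -N "" means no passphrase (fine for a demo; use one for real keys)
ssh-keygen -t ed25519 -C "your_email@example.com" -f "$tmp/id_ed25519" -N "" -q
cat "$tmp/id_ed25519.pub"   # paste this public key into GitHub/your servers
```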

Git

We have some sensible Git config options we enable globally:

git config --global init.defaultBranch main
git config --global merge.ff false
git config --global pull.ff true
git config --global pull.rebase true
git config --global fetch.prune true

And then you can configure your user config:

git config --global user.name "Your Name"
git config --global user.email "name@domain.example"
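To see exactly what these commands write, you can run them against a throwaway HOME first (a sketch; drop the export to configure your real account):

```shell
export HOME="$(mktemp -d)"          # throwaway home so nothing real is touched
git config --global init.defaultBranch main
git config --global pull.rebase true
git config --global pull.rebase     # reads the stored value back
cat "$HOME/.gitconfig"              # all of the options land in this one file
```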

Raycast

I use Raycast as a Spotlight replacement. To do so, I disable Spotlight:

🍎 -> System Settings

 * Spotlight
   * Disable everything
   * Search Privacy
     * Click +
     * Select the root hard drive
 * Keyboard
   * Keyboard Shortcuts
     * Spotlight
       * Uncheck everything

Open Raycast and it will run through an onboarding:

 * Set the Hotkey as ⌘ + Space (what Spotlight was)
 * Grant access to
   * Calendar
   * Files
   * Accessibility

Once completed, open the settings & go to extensions. This is where it really pays off to have Hyperkey installed as you can set
shortcuts for apps.

For example, I have Caps Lock + E to open iTerm and Caps Lock + C to open the clipboard history. Things to look at:

 * Clipboard history
 * Window Management
 * Auto-join Meetings [https://one-tip-a-week.beehiiv.com/p/one-tip-a-week-raycast-s-auto-join-for-meetings]

iTerm

First, we need to set up our Vim config [https://www.mikestreety.co.uk/blog/syntax-highlighting-and-other-enhancements-for-vim/]

Then edit the iTerm preferences:

 * General
   * Startup
     * Window restoration policy: Only Restore Hotkey Window Selection
   * ❌ Clicking on a command selects it to restrict Find and Filter.
 * Appearance
   * General
     * Theme: Minimal
 * Profiles
   * Colours
     * Modes: ❌ Use separate colours for light and dark mode
     * Color Preset: Tango Dark
   * Text
     * Cursor: |
     * Font
       * MesloLGSNF
       * Weight: Regular
       * Size: 14
       * Letter spacing: 100
       * Line-height: 120
   * Window
     * New Windows: 235 columns by 40 rows
     * ❌ Use transparency
   * Terminal
     * Scrollback lines: ✅ Unlimited scrollback
     * Bell: ✅ Silence bell
<p><strong>Read time:</strong> 4 mins</p>
<p><strong>Tags:</strong> General, Apple</p>
			]]></description>
		</item>
		
		
		<item>
			<title>GitlabForm for Gitlab repository automation</title>
			<link>https://www.mikestreety.co.uk/blog/gitlabform-for-gitlab-repository-automation/</link>
			<pubDate>Mon, 04 Aug 2025 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/gitlabform-for-gitlab-repository-automation/</guid>
			<description><![CDATA[
While researching something else, I stumbled upon GitlabForm [https://gitlabform.github.io/], a nifty tool for synchronising
settings and files across your GitLab repositories and groups. It's super flexible: you can apply granular control over each repo
or use it as a one-and-done tool for everything. Thanks to its hierarchical configuration, it can handle pretty much anything in
between.

Enough with the jibber-jabber, let's dive into some examples.


GET A TOKEN

First things first, you need an access token. You can use a personal one, but for better security and separation of concerns, it's
best to create a dedicated "bot" user. If you're on GitLab Premium or higher, you can create a service account
[https://docs.gitlab.com/user/profile/service_accounts/]. This is the ideal approach, as it lets you clearly identify commits made
by GitlabForm and restrict its access to only the groups and projects it needs.

I went with a new service account and created an Impersonation Token with the api scope. I purposefully didn't make this user an
admin. GitlabForm will flag this with a warning, but it works just fine for most use cases without admin rights.

Once you have your token, create a .env file in your project root and place your token in there:

GITLAB_TOKEN=your_token_here


You'll also need a config.yaml file in the same directory.


FILE STRUCTURE

Once you get used to the file layout, it becomes clear that the structure is based on the Groups
[https://docs.gitlab.com/api/groups/] and Projects [https://docs.gitlab.com/api/projects/] APIs.

This means you can add any setting from the GitLab API documentation for groups and projects (like
only_allow_merge_if_pipeline_succeeds or merge_method) directly into your configuration, without it needing to be explicitly
documented by GitlabForm.

Start off with a file like this, which would add settings to any group or project you specify. The example below sets main as the
default branch.

config_version: 3

gitlab:
  url: http://your-gitlab-instance.com

projects_and_groups:
  ###
  # All repositories
  ###
  "*":
    project_settings:
      # General
      default_branch: main


EXAMPLE SCENARIO

Once you've got the basics down, you can use GitlabForm to solve some of GitLab's more annoying limitations, especially where
settings are locked to a repository instead of being configurable at the group level.

A perfect example of this for me was rolling out Code Owners [https://docs.gitlab.com/user/project/codeowners/]. I was keen to use
this feature after upgrading to GitLab Premium, but was pretty deflated to find out:

 * You need to commit a CODEOWNERS file to every single repository.
 * Users (or Groups) must be direct members of the repository; inherited permissions from a parent group don't count.

The thought of doing all that manual work (and having to undo it if something went wrong) made me hesitant to even start. This is
where GitlabForm makes it a breeze, using its files functionality.

First, create a files directory next to your config.yaml and put your master CODEOWNERS file inside (e.g., files/CODEOWNERS).

Then, in your config.yaml, you can tell GitlabForm to push this file to all repositories:

projects_and_groups:
  "*":
    files:
      ".gitlab/CODEOWNERS":
        commit_message: "build(gitlab): Add CODEOWNERS file"
        file: "./files/CODEOWNERS"
        overwrite: false
        skip_ci: true
        branches:
          - main

Using overwrite: false prevents the tool from overwriting a file that already exists in the repository. This allows individual
projects to customise the file without those changes being erased the next time you run GitlabForm.

Next, you can add people or groups as members of the repository. In this example, we add our three teams as direct members.

    members:
      groups:
        team/feds:
          group_access: maintainer
        team/beds:
          group_access: maintainer
        team/devops:
          group_access: maintainer
      enforce: true
      keep_bots: true

Setting enforce: true removes any other users or groups, ensuring only specified members have access to the project.

Lastly, you can use the Merge Requests configuration [https://gitlabform.github.io/gitlabform/reference/merge_requests/] to enforce
merge request approvals and remove existing approvals when the Code Owners file is modified.

    merge_requests_approvals:
      disable_overriding_approvers_per_merge_request: true
      reset_approvals_on_push: false
      selective_code_owner_removals: true
    merge_requests_approval_rules:
      any: # this is just a label
        approvals_required: 1
        name: "Any member"
        rule_type: any_approver
      enforce: true

All put together, it looks something like:

config_version: 3

gitlab:
  url: http://your-gitlab-instance.com

projects_and_groups:
  ###
  # All repositories
  ###
  "*":
    project_settings:
      # General
      default_branch: main

    files:
      ".gitlab/CODEOWNERS":
        commit_message: "build(gitlab): Update CODEOWNERS file"
        file: "./files/CODEOWNERS"
        overwrite: false
        skip_ci: true
        branches:
          - main

    members:
      groups:
        team/feds:
          group_access: maintainer
        team/beds:
          group_access: maintainer
        team/devops:
          group_access: maintainer
      enforce: true
      keep_bots: true

    merge_requests_approvals:
      disable_overriding_approvers_per_merge_request: true
      reset_approvals_on_push: false
      selective_code_owner_removals: true
    merge_requests_approval_rules:
      any: # this is just a label
        approvals_required: 1
        name: "Any member"
        rule_type: any_approver
      enforce: true

To change a setting for a specific sub-group or repository, place its configuration after the wildcard configuration. This
leverages the Configuration hierarchy [https://gitlabform.github.io/gitlabform/reference/#configuration-hierarchy] to apply the
overrides.


RUNNING

You can run the configuration using either Python or Docker. I prefer using Docker as it avoids installing additional software on
my machine.

To run it, use the following command, ensuring you pass in the .env file so the tool has access to your token:

docker run --env-file .env -it -v $(pwd):/config ghcr.io/gitlabform/gitlabform:latest gitlabform ALL

Although the configuration is set up for all projects, you can test it on a single project or group. The ALL argument instructs
GitlabForm to process all projects and groups it finds. For testing, you can replace ALL with a specific group or repository slug,
such as your-group/your-project.

When debugging, I find it helpful to add the following flags to see exactly what changes will be applied:

--verbose --diff-only-changed
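Put together, a cautious first run against a single project might look like this (the group/project slug is a placeholder):

```shell
docker run --env-file .env -it -v $(pwd):/config \
  ghcr.io/gitlabform/gitlabform:latest \
  gitlabform --verbose --diff-only-changed your-group/your-project
```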

<p><strong>Read time:</strong> 5 mins</p>
<p><strong>Tags:</strong> Gitlab</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Syntax Highlighting and other enhancements for Vim</title>
			<link>https://www.mikestreety.co.uk/blog/syntax-highlighting-and-other-enhancements-for-vim/</link>
			<pubDate>Fri, 11 Jul 2025 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/syntax-highlighting-and-other-enhancements-for-vim/</guid>
			<description><![CDATA[
Vim is my go-to command-line editor, so it's always useful to have some enhancements, including syntax highlighting and sensible
indenting. The config below also enables tab behaviour that matches the context.

To do this, create a .vimrc file in your home directory (e.g. vim ~/.vimrc) and paste in the following:

syntax on
filetype plugin on

set modeline

set ts=2
set sw=2
set smarttab
set expandtab
set autoindent
set smartindent

" enable showing of matching braces
set showmatch
set mat=5
set list

"Solarized colors!
set background=dark
if ! has('gui_running')
  let g:solarized_termcolors=256
  set t_Co=256
endif


This is a file I've copied and pasted between a few machines. So I could fully understand what each line was doing, I used AI to
explain:

 * syntax on: Enables syntax highlighting for improved readability of code by color-coding different elements based on their
   type.
 * filetype plugin on: Activates file type detection and loads relevant plugins, enhancing functionality based on the type of
   file being edited.
 * set modeline: Allows Vim to read modelines (special comments) within files to automatically set options when the file is
   opened.
 * set ts=2: Sets the tab size to 2 spaces, determining how many spaces a tab character represents.
 * set sw=2: Configures the shift width to 2 spaces, defining the number of spaces used for each step of (auto)indentation.
 * set smarttab: Enables intelligent behavior for the Tab key, adjusting its function based on the context of the line.
 * set expandtab: Converts tab characters into spaces when the Tab key is pressed, ensuring uniform indentation across different
   environments.
 * set autoindent: Automatically inherits the indentation of the previous line for new lines, maintaining consistent formatting.
 * set smartindent: Provides enhanced automatic indentation for programming languages, particularly useful for C-like languages.
 * set showmatch: Highlights matching parentheses, brackets, and braces when the cursor is placed on one, aiding in code
   structure visualization.
 * set mat=5: Sets the time (in tenths of a second) Vim will wait before highlighting matching braces, with a value of 5 meaning
   half a second.
 * set list: Enables the display of whitespace characters (like tabs and spaces) to help visualize formatting issues.
 * set background=dark: Configures the color scheme to a dark background, improving visibility when using dark themes.
 * if ! has('gui_running'): Checks if Vim is not running in a graphical user interface (GUI) environment to apply
   terminal-specific settings.
 * let g:solarized_termcolors=256: Sets the variable for the Solarized color scheme to use 256 colors when in a terminal.
 * set t_Co=256: Defines the number of colors supported by the terminal as 256, ensuring compatibility with the Solarized color
   scheme.
 * endif: Marks the end of the conditional statement that checks for GUI mode, closing the if block.
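Since set modeline is enabled above, an individual file can override these defaults via a special comment in its first or last few lines. For example, dropping this line into a shell script (using that file's own comment syntax) opens it with tabs shown 4 wide and no tab-to-space expansion:

```
# vim: set ts=4 sw=4 noexpandtab:
```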
<p><strong>Read time:</strong> 2 mins</p>
<p><strong>Tags:</strong> Bash, CLI</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Timeline arc: Coffee</title>
			<link>https://www.mikestreety.co.uk/blog/timeline-arc-coffee/</link>
			<pubDate>Sun, 11 May 2025 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/timeline-arc-coffee/</guid>
			<description><![CDATA[
I've been on my coffee "journey" for a while now and have finally taken the plunge to upgrade my coffee grinder to a Niche Zero.
With that in mind, I thought I would post about where I've been so far with my coffee experience and how I ended up spending so
much money on a grinder.

It's worth noting that I am, by no means, a coffee aficionado. However, I am a coffee hipster and will forgo having one while I'm
out because "I can make one better at home". I struggle to identify the nuances of a coffee, but do know if one is good or bad and
well extracted.

This blog post will be part of a series of related posts, where I talk about my journey/timeline through different hobbies and
interests. Let me know [https://www.mikestreety.co.uk/contact/] if you write one of your own.


MY COFFEE HISTORY

I entered into coffee via the french press (or cafetiere) when I was in my early twenties. Forever a tea boy, I slowly started
reaching for the coffee in the morning.

After getting more and more used to the french press, I started exploring coffee shops and appreciating the world of what was
available. Beginning at a latte, I eventually started favouring a flat white from the legal crack-dens.


MACHINE: GAGGIA BABY CLASS

Slowly, with the egging on from a friend, I started looking into purchasing a coffee machine of my own and, in Jan 2017, purchased
a Gaggia Baby Class.

I picked it up from someone on Gumtree and I don't think they were aware how much they generally went for - some searching when I
got home revealed I'd picked up a bargain.

Gaggia Baby Class [/assets/img/content/timeline-arc-coffee/gaggia-baby.jpg]

The next day I leapt out of bed and made my first "flat white" at home - it was great. Later that day I went to make a second and
the machine stopped - not a single drop of water made it through.

I then spent a month or so hitting my head against the wall and fully taking the machine apart, descaling, replacing parts - I
even got a multi-tool to grind off the limescale from inside the boiler. I learnt a lot about how coffee machines work and even
upgraded the steam-wand which seemed like a common mod for Gaggia machines.

I eventually got it working again and started learning and honing my coffee-making skills.


GRINDER: SAGE SMART GRINDER PRO

After reading copious amounts of coffee forums, I realised I needed a proper coffee grinder. It wasn't long (March 2017) until I
had purchased a Sage Smart Grinder Pro.

Gaggia Baby Class and Sage Smart Grinder Pro [/assets/img/content/timeline-arc-coffee/gaggia-and-sage.jpg]

I was flying now, each coffee I made getting better and I was starting to appreciate the craft and skill required to make a "good"
cup of coffee.


MACHINE & GRINDER: SAGE BARISTA EXPRESS

After getting a book published [blog/how-i-wrote-a-book-the-writing-process-from-one-of-our-developers/] I used some of the money
from that to upgrade my coffee equipment. Selling the Gaggia (after making a good profit, even after the upgrades) and my Smart
Grinder Pro, I purchased a Sage Barista Express in November 2017.

Sage Barista Express [/assets/img/content/timeline-arc-coffee/sage-barista-express.jpg]

I chose it for both the aesthetic and the features. Being able to turn it on and have it warm up in seconds was a dream - not to
mention the pressure gauge and timed water function made it much easier to dial in and get great coffee from it. It was a great
machine and I would recommend it to anyone.


MACHINE AND GRINDER: SAGE BAMBINO PLUS AND SAGE SMART GRINDER PRO

After 4 years of the Sage Barista Express and plenty of great coffees, I wanted to explore different coffee options further. I was
already drinking french press & V60 in the week (saving the flat whites for the weekend) and wanted to freshly grind the coffee
for those mid-week drinks.

As the Barista Express coffee grinder couldn't go coarse enough for a V60, I needed to get a different grinder. I couldn't justify
having the Barista Express and a separate grinder (nor the space), so I needed to do some selling.

In July 2021 I sold the Barista Express and purchased a Sage Bambino Plus with a Sage Smart Grinder Pro (again) to go alongside it
(yes, OK, I'm a Sage fanboy). The idea was that I could grind different varieties. I managed to do it without it costing me too
much, as the prices of the two were comparable.

Sage Bambino and Smart Grinder Pro [/assets/img/content/timeline-arc-coffee/bambino-and-sgp.jpg]

Both of these bits of equipment are excellent and I would still recommend either of them to people looking to up their coffee game
at home and get into espresso-based drinks. I'd (at the time) unknowingly picked up a Bambino Plus, which comes with some extra
features like auto-frothing. This, along with the quick start-up, is a game changer when kids are around, as you can set it
frothing your milk while you deal with making Weetabix or entertaining a child.

The same goes for the grinder - both of these tools allowed me to make pretty good baseline coffees in a pinch while still giving
me the freedom to really geek out and dial in if I needed to.

The different grinding was a fad and quickly subsided when I started learning about retention and what a hassle it is to keep
changing grind sizes between weekdays and weekends - especially on the Smart Grinder Pro. I stuck with pre-ground coffee for the
weekdays and left the grinder for the weekend.

I settled into home barista life, measuring and, generally, getting good cups of coffee out of my equipment.


GRINDER: NICHE ZERO

And this is where we are now: May 2025, having just purchased a Niche Zero.

I have been weighing my output from the grinder for a while, as the Smart Grinder Pro gives different results depending on how full
the hopper is. The coffee I was getting was fine, but now my kids are a bit older and need less attention, I can really focus on
getting it bang-on.

Niche Zero [/assets/img/content/timeline-arc-coffee/niche-zero.jpg]

This grinder has a different workflow, weighing the beans in instead of weighing the output.

I'm still getting used to it and learning how to dial in, but already I've made a couple of spot-on coffees and even drunk a
couple of "neat" espressos - something I've never done before.


SO FAR...

I doubt this will be the end of my coffee journey - I'd like a coffee machine with a bit more customisation but the convenience of
the Bambino still wins.
<p><strong>Read time:</strong> 4 mins</p>
<p><strong>Tags:</strong> General, Timeline</p>
			]]></description>
		</item>
		

	</channel>
</rss>
