<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
		<title>Mike Street's Blog</title>
		<link>https://www.mikestreety.co.uk</link>
		<description>Blog posts from Mike Street (mikestreety.co.uk)</description>
		<language>en-gb</language>
		<pubDate>Tue, 13 Feb 2024 09:40:32 GMT</pubDate>
		<lastBuildDate>Tue, 13 Feb 2024 09:40:32 GMT</lastBuildDate>
		<atom:link href="https://www.mikestreety.co.uk/rss.xml" rel="self" type="application/rss+xml" />
		<image>
			<url>https://www.mikestreety.co.uk/assets/img/favicon-512.png</url>
			<title>Mike Street's Blog</title>
			<link>https://www.mikestreety.co.uk</link>
			<width>144</width>
			<height>144</height>
			<description>Lead Developer and CTO</description>
		</image>
		
		
		<item>
			<title>Testing the frontend of a TYPO3 project</title>
			<link>https://www.mikestreety.co.uk/blog/testing-the-frontend-of-a-typo3-project/</link>
			<pubDate>Mon, 12 Feb 2024 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/testing-the-frontend-of-a-typo3-project/</guid>
			<description><![CDATA[
				Testing is an interesting topic in the web world; everyone you talk to seems to know how valuable it is, but finding concrete
examples on the web is hard. We have started testing our TYPO3 websites with Playwright [https://playwright.dev/] - an end-to-end
testing framework that can simulate a lot of browsers on a lot of devices.

This blog post is going to run through our conventions for testing with TYPO3 and Playwright. To help with onboarding and
consistency throughout all our projects, we created a meta-framework [https://github.com/liquidlight/playwright-framework] which
sets some sensible defaults for us.

Although this post will be TYPO3-centric, it can be applied to other CMSs. How to configure the framework can be found in the
documentation [https://github.com/liquidlight/playwright-framework/tree/main?tab=readme-ov-file#playwright-configuration].


WHY PLAYWRIGHT?


SETUP

Install the framework with NPM - we set the dev flag so it doesn't get installed for production.

npm i @liquidlight/playwright-framework --save-dev

It is also helpful to add some helper scripts to your package.json file to make running and viewing tests easier:

{
  "scripts": {
    "test": "playwright test",
    "test:update": "playwright test --update-snapshots",
    "test:open": "playwright show-report",
    "test:codegen": "playwright codegen"
  }
}

It's also worth adding the following to your .gitignore file so you don't end up committing the test results:

/test-results/
/playwright-report/
/blob-report/
/playwright/.cache/


Lastly, create a playwright.config.ts file in the root of your project. If you're running a TYPO3 site that uses
config/sites/[site]/config.yaml files for your site config, you can use the following as the initial config:

import { defineConfig } from '@playwright/test';
import typo3Sites from '@liquidlight/playwright-framework/typo3';

const config = require('@liquidlight/playwright-framework')(
  typo3Sites()
);

module.exports = defineConfig(config);

This looks in app/sites/* for the same folder name as the site config. From there, it matches any tests with the base URLs, so you
don't have to specify the name. It also means you can specify an environment variable of PLAYWRIGHT_ENV to use Production/Staging
or Dev URLs.
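For reference, the kind of site config the framework parses might look like this - a minimal sketch using standard TYPO3 site config keys, with illustrative values rather than a real project's:

```yaml
# Illustrative config/sites/site1/config.yaml
rootPageId: 1
base: 'https://www.example.com/'
baseVariants:
  -
    base: 'https://site1.ddev.site/'
    condition: 'applicationContext == "Development"'
```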


ADD A TEST

We have a couple of conventions for our tests, but as long as you put them in the corresponding app/sites folder, it doesn't
really matter where you put them!

 * If you are testing a whole front-end flow or bit of functionality, then put it in Resources/Private/Tests with a sensible name
   of name.test.ts
 * If you are testing a specific bit of JavaScript (like a carousel or modal), then place the test file next to the JavaScript
   partial with the same name (replacing .js with .test.ts)
 * If you are testing a function or utility, then place it next to the file called .spec.ts

We can then easily tell if a browser is going to be created (.test.ts) compared to a pure JavaScript test (.spec.ts)


TEST EXAMPLE

We have a specific page-tree of tests/ in the CMS that non-admins cannot see or edit. This way, we can test against these pages,
knowing we are using CMS-driven content, but we know it won't change by accident.

An example test might be like this:

import { test, expect } from '@playwright/test';

/**
* Opens Fancybox when a link is pointing to a content element with a "Dialog Content" frame class
*/
test('"Dialog Content" opens in a Fancybox', async ({ page }) => {
  await page.goto('/tests/dialog/');

  // Open the fancybox
  await page.getByRole('link', { name: 'Link to Fancybox' }).click();

  // Do we see the content?
  await expect(page.getByLabel('You can close this modal').getByRole('heading')).toContainText('This is a fancybox');
});

On our test pages, we have a link of "Link to Fancybox", which opens the modal. We can assert the modal loads when Playwright can
find the text inside.


PLAYWRIGHT TESTING TIPS

Some extra tips and tricks I've picked up along the way.


INSTALL AN IDE EXTENSION

I use VSCode, and having the extension installed [https://marketplace.visualstudio.com/items?itemName=ms-playwright.playwright]
means VSCode picks up the tests in the "test" panel and allows me to pick devices and specific tests to run in the UI.


USE THE CODEGEN COMMAND

If you added the scripts block above, you can run npm run test:codegen. This opens up a special Chromium browser with a Playwright
app that allows you to navigate to URLs, click items, and assert that things are visible or contain the right text. From there, it
generates all the test code for you to copy into a test file.

We tend to run this and then tweak the code as we see fit, but at least it gives us the initial test code to tweak rather than
writing from scratch. Once you have a few tests generated, you tend to get an idea of what is needed.


RUN LIMITED TESTS WHEN DEVELOPING

When you are testing to see if a test works, keep your devices down to a minimum and only run the test you need (by either using
the extension or passing in the test name); this helps speed up debugging (there is also test debugging available, although don't
ask me how that works!).


EXPLORE THE ECOSYSTEM

There are lots of guides, integrations, and utilities [https://mxschmitt.github.io/awesome-playwright/] out there with a little
bit of searching.

----------------------------------------------------------------------------------------------------------------------------------

Once you get into it, the little dopamine hit of writing a successful test that passes (or, even better, finds a bug that you can
fix) is addictive. If you have any questions or want to talk more, get in touch [https://www.mikestreety.co.uk/contact/] and I'll
see if I can help.
				<p><strong>Read time:</strong> 4 mins</p>
				<p><strong>Tags:</strong> TYPO3, Playwright, Testing, NPM</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Tweetback</title>
			<link>https://www.mikestreety.co.uk/blog/tweetback/</link>
			<pubDate>Fri, 09 Feb 2024 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/tweetback/</guid>
			<description><![CDATA[
				I finally downloaded my ~Twitter~ X archive [https://help.twitter.com/en/managing-your-account/how-to-download-your-x-archive] as
a trusty zip file and set up a version of Tweetback [https://github.com/tweetback/tweetback].

You can see my tweets here [https://tweets.mikestreety.co.uk/].

Interestingly, it uses tweetback-canonical [https://github.com/tweetback/tweetback-canonical] which replaces any Twitter links it
finds with other people's instances instead of linking to the bird site, as long as they have added themselves.

This marks the end of an era for me, really. I don't care much for Elon or his businesses, but I've not particularly boycotted
Twitter because of him - more because of what Twitter has become since he took over: a spammy, bug-riddled skeleton of itself.
Twitter has served its purpose but, for me, it has now "finished"
[https://www.mikestreety.co.uk/blog/the-end-of-the-social-network-school-year/].

Creating the Tweetback instance also threw up some stats, like the fact I sent 25,053 tweets with 64.8% of them being replies. I
used emojis in 1,905 tweets and swore in 1.4% (333) of my posts.

I'm not quite sure what I'll do with the Tweetback. I might tweak the styling but, ultimately, I think I'll just leave it there as
a box of memories I can peruse once in a while.
				<p><strong>Read time:</strong> 1 mins</p>
				<p><strong>Tags:</strong> Twitter, General</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Composer Best Practices for TYPO3</title>
			<link>https://www.mikestreety.co.uk/blog/composer-best-practices-for-typo3/</link>
			<pubDate>Sun, 04 Feb 2024 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/composer-best-practices-for-typo3/</guid>
			<description><![CDATA[
I revel in reading about other people's processes, file structures and best practices. I soak up their standards, seeing if I can
optimise, tweak or evolve my own to seek that golden chalice of efficiency.

Following in the same vein of Daniel Siepmann's TYPO3 Composer Best Practices
[https://daniel-siepmann.de/typo3-composer-best-practices.html], I thought I would outline the file structure and conventions for
our TYPO3 composer-based projects. We look after more than 60 TYPO3 sites, so our methodology is based on being able to switch
between projects and have consistency between them. This helps speed up development as we don't have to familiarise ourselves with
different project layouts.


FILE STRUCTURE

In our top-level, we have an app folder for our site packages & custom extensions:

.
├── composer.json
├── app/
│   ├── sites/
│   │   └── site1
│   └── ext/
│       └── custom_extension
└── config/
    └── sites/
        └── site1


The thing you will notice is different is that the sites are in a second sub-folder. This helps differentiate extensions which
are sites from packages which are built purely for this TYPO3 install.

The sites sub-folder in app is required, and the folder/extension name must match that of the config/sites folder - this helps us
marry up the code based extension with the site's YAML config.

If the TYPO3 install is a multi-site, we tend to include an app/sites/site_package folder, which includes code shared between the
sites (e.g. TypoScript, CSS or TCA).

The ext folder is optional, and only used if there is a custom extension for this site. Not only does it separate it from the site
packages, it helps us quickly identify patterns where local extensions are re-used or could be published.

We also allow a couple of other folders in the app directory: composer for local non-TYPO3, Composer-based packages, and npm,
should we require a local Node package.


COMPOSER CONVENTIONS

For local packages, we have a few conventions to help our developers quickly identify and separate functionality:

 * All local packages have "version": "0.0.0" - we then know any packages of this version (if you see it in the Extension list or
   composer output) are local to the site
 * Any package in sites should be in the app/ namespace (e.g. app/liquidlight) - this identifies it as a site package
 * Any package in ext, composer or npm is namespaced with liquidlight (e.g. liquidlight/cool-extension) - this means we can more
   easily port it between installs or publish it globally

All our PHP classes are namespaced with LiquidLight - regardless of whether they are local sites, local extensions or our published
packages [https://www.liquidlight.co.uk/typo3-extensions/].

The top-level composer file should require the site packages and any install-level packages (e.g. for deployment). Any packages for
each site should be in the site's composer file.
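Putting those conventions together, a site package's composer.json might look something like this - a hypothetical example, with made-up package and extension names:

```json
{
    "name": "app/site1",
    "description": "Site package for site1",
    "type": "typo3-cms-extension",
    "version": "0.0.0",
    "require": {
        "liquidlight/cool-extension": "^1.0"
    }
}
```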


PACKAGE LOADING

Our composer/TYPO3 packages can come from 3 different locations: locally, our private Gitlab instance or Packagist
[https://packagist.org/].

As Packagist is assumed by default, it doesn't need to be specified in the repositories section of the Composer file; however,
the other two do. You'll notice the path looks in ./app/*/*, as we want to look in the sites, ext and composer folders.

As we want to prioritise the local packages, these are specified first, followed by the Gitlab private package URL
[https://www.mikestreety.co.uk/blog/build-and-release-composer-packages-using-a-self-hosted-gitlab/]:

{
    "license": "proprietary",
    "type": "project",
    "repositories": [
        {
            "type": "path",
            "url": "./app/*/*"
        },
        {
            "type": "composer",
            "url": "https://url.to.gitlab/api/v4/group/63/-/packages/composer/packages.json"
        }
    ]
}


UPDATING

All our sites are checked and kept up-to-date with Renovate [https://docs.renovatebot.com/] which runs between 8am and 4pm every
weekday. Bug fixes are auto deployed while minor package releases get a merge request raised for a developer to review. During
this process, the Composer files are linted and checked to make sure packages are compatible. Renovate knows to ignore any
packages with the version number of 0.0.0 - which is another reason for having local packages fixed to this.
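The local-package exclusion can be expressed in Renovate config with a package rule - something along these lines (a sketch using Renovate's matchCurrentVersion option; check the Renovate docs against your setup):

```json
{
    "packageRules": [
        {
            "matchCurrentVersion": "0.0.0",
            "enabled": false
        }
    ]
}
```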

----------------------------------------------------------------------------------------------------------------------------------

Thanks to Daniel Siepmann [https://daniel-siepmann.de/] for inspiring this post (and all the work he does for TYPO3)!
				<p><strong>Read time:</strong> 3 mins</p>
				<p><strong>Tags:</strong> TYPO3, Composer</p>
			]]></description>
		</item>
		
		
		<item>
			<title>2023 In Review</title>
			<link>https://www.mikestreety.co.uk/blog/2023-in-review/</link>
			<pubDate>Sun, 31 Dec 2023 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/2023-in-review/</guid>
			<description><![CDATA[
2023 was a year which seemed to fly by - it doesn't seem like 12 months have passed since I was last reflecting on my annual
achievements. Despite this, there were some pretty big events this year, when I actually think about it, but it all seems to be
compressed into what feels like only a couple of months.

As with previous years [https://www.mikestreety.co.uk/category/annual-review/], these posts are written mainly for me, to reflect
on the year and think about the one coming.


LIFE

There was a major milestone this year - Alfie starting school. We picked the school last December and found out in April we'd got
our first choice. My wife and I were quite apprehensive about how he would be but, thankfully, he thoroughly enjoys going. He is
absolutely smashing reading, writing and maths too. With Alfie starting school, all our home routines changed (again), but we've
made it work.

Alfie (and my wife) also started at Squirrels - the precursor to Beavers, Cubs and Scouts. He was wary to start with but is
starting to join in with the activities, without being glued to my wife the whole evening.

Ruby has developed into a sassy second-child. Often following in her big brother's footsteps and happy to go along with things,
she does have a side of her that comes out every now and then which has ultimate attitude. She's hilarious and you can tell she
does a lot of things just to make us laugh.

2023 also saw plenty of trips out on Bob (the cargo bike), along with utilising its carrying capacity - picking up bits on my
lunch breaks for the house and garden. We also went to see the Flying Scotsman at our local steam railway and took the kids to
their first wedding.

Holidays saw us go to Butlins, Alton Towers (staying in the hotel) and Center Parcs. The garden got another face-lift (after
letting it run wild) and it is now in a more manageable state for taking care of - I actually use the opportunity when walking
down to the garden office to do a spot of weeding or pruning.


BIKE

I can't seem to go through a year without buying a new bike. This time, it was the commuter bike's turn to be replaced. I never
really got on with the Genesis [https://www.instagram.com/p/CUf7u6pIatL/]; I didn't enjoy the gear shifter location or the seating
position, and the hub gear made me constantly worry about getting a puncture (as I didn't know how to take the back wheel off).

My new bike is a Triban RC520 [https://www.decathlon.co.uk/p/road-bike-rc-520-disc-brake-prowheel-blue/_/R-p-348230]. I could find
nothing but good reviews for Triban bikes and it had everything on my wish list. It reminds me of the old Planet X London Road
[https://www.instagram.com/p/B0dA_UTn4_Y/] I had. Trusty & nippy without being fragile. Everything on it is replaceable,
repairable and bog-standard.

While on the topic of bikes, we finally managed to convince Alfie to ride one with pedals. For ages he was riding a pedal bike as
a balance bike, but he is now speeding off and getting more and more confident each time. Ruby has progressed onto Alfie's old
balance bike but is still a bit wary of it.


LIQUID LIGHT

2023 saw a lot of new faces start working for Liquid Light, which also means a few people left. With no-one having handed in
their notice since 2019, a lot of the staff had been with us for a fair amount of time, so we were due a bit of a change - I think
we were just surprised at the scale of it. This year we had 5 people hand in their notice and 6 people join (which, for a company
of 14, is a big change). Fortunately, no two people left for the same reason - most going off to pursue different interests - but,
as with any staff churn, it is going to put pressure on the business.

We welcomed 3 developers (2 front-end, 1 backend), 2 new designers and 1 new account manager, meaning every department had at
least one new team member. With new faces, though, come new processes, suggestions, ideas and energy. Most of this year was spent
bedding the new staff in, so I'm looking forward to what we can do in 2024 with all this fresh blood.

On the tech side, the biggest accomplishment was setting up Renovate [https://github.com/renovatebot/renovate]. This is a
bot/script that runs on a scheduled CI job and keeps dependencies up-to-date. This ensures bugs are auto-fixed, merged and
deployed while minor updates to packages are raised in a merge request.

We are also starting to look into testing (yes, I know) and I've done some research into a few popular frameworks - settling on
Backstop for lighter regression testing and Playwright for more in-depth end-to-end testing. I wrote a blog post summarising the
meta-frameworks I've created around them
[https://www.mikestreety.co.uk/blog/frameworks-tools-and-utility-meta-packages-for-quicker-configuration/].

We've continued with the upgrades and updates while still pushing forward with new builds and developments. We opened up a couple
of new roles for our senior account manager and backend developer to give them more responsibility and reduce the pressure and
workload on the directors. I'm excited to see how these roles develop and progress over time.


SIDE PROJECTS

Side projects took a bit of a hit this year as I delved into work-related evening activities. The work on Renovate, beginning of
testing and optimising processes occupied my mind, along with the ongoing upgrades.

Ale House Rock [https://alehouse.rocks/] had a couple of small improvements - brewery pages are now richer (with content pulled
from Untappd) and more of the information is unified (there was a lot of duplicated breweries and a duplicated beer, too). I've
also reduced the reliance on the numbers in the code for identifying the beers, as it became muddled when I posted a duplicate.

This year I've managed to adopt a couple of "real-world" side-projects - puzzles and going back to the gym. The former was
kick-started when building Lego with my son, as I enjoyed the hunting and placing of the pieces rather than the end build. We are
now at the point where my wife and I have an ongoing puzzle that sits under the sofa and comes out on occasion. It's nice to have
a no-screens no-pressure hobby to turn to in the evening. Sometimes we really get into it and smash one out in a week or so
whereas, other times, we let it tick over for weeks.

I've also re-joined a gym, trying my best to go twice a week. I'm not following any strict routine (which I enjoy), but I've been
enjoying the free, included classes and having a big room to get a bit sweaty in - it helps clear my mind. Due to the
aforementioned routine change, I am having to go in the evenings. This isn't my favourite time to go (especially after dinner) but
it works best for all of us.

Talking of evenings, we got "the kids" a fish tank with fish for Christmas; however, my wife and I are far more invested in it
than the kids are. We have picked up 4 fish so far, along with some live plants. I'm fascinated by the snails that have snuck in
there and I'm enjoying tending to the plants and tank in the evenings.

As for digital side-projects, I've been chipping away at an API-based one since October. The yet-to-be-named SaaS is a central
aggregation of your data, allowing you to view everything you've been up to, along with having an API itself. The MVP is powering my
stats page [https://www.mikestreety.co.uk/stats/] as it is currently manually entered by me. I've so far got it connecting to
Strava, Garmin, Goodreads, Last.fm, Spotify, Letterboxd, Untappd and processing a Geocaching export. I've got my sights set on
more services too - it starts to become addictive adding more and more. I'm currently struggling with a name and how to visualise
all the data.


STATS

Talking of stats, the data on my stats page [https://www.mikestreety.co.uk/stats/] has been updated. Side note, I uninstalled the
Instagram app at the beginning of the year, so only uploaded 1 photo the whole year.


BLOG

After a shift in marketing & positioning at work, I stopped writing for Liquid Light in 2022 (which often bolstered my blog
count and felt like cheating, as I was being paid to write). Even without this, 2023 was just 1 post off my biggest year for blog
writing (2021). I wasn't consciously trying to put posts out (and thought I'd had a "bad" year); they just seem to have
"appeared". Looking at the top 10 [https://www.mikestreety.co.uk/stats/#blog-posts], it's great to see plenty of new posts
becoming more popular, but it still surprises me that a post from 2017 is in the top 2 (and has been since it was written).

In 2024 I'd like to continue this cadence of post writing - it felt like a nice spread over the year and I'm hoping the testing
we're introducing will help feed some of that content. I also feel like I could write more posts about TYPO3 (the CMS we use at
work) as I don't tend to blog about that at all.


ACTIVITIES

(Side-note: For the last few years I'd been looking at just e-bike rides when gathering the stats. I've updated the stats page to
include non-electrified rides too!)

All my cycling stats increased in 2023 (despite 2022 ending up being more than I thought - see above), which was great to see.
More regularly cycling to Brighton twice a week helped, along with cycling to and from the gym in the second half of the year.
It's no surprise that my step count increased with the gym being added to my routine, but it is a surprise to see I burnt fewer
calories last year - I guess walking (Geocaching) counts more than a few "Bootcamp" classes at the gym, which is an encouragement
for me to get out and walk more.

I also noticed that my cycling for the last few years is the inversion of my beer reviews - this may just be a Spurious
Correlations [https://www.tylervigen.com/spurious-correlations], but I did consciously choose to drink less beer and join the gym
this year, so I suspect that had an influence.

In 2024 I hope to keep up with the mileage increase (or plateau). I feel the 2500 mile mark is a good target for this stage in my
life. Although, if this drops due to more walks and Geocaches found, I'd be ok with that.


GEOCACHING

Last year [https://www.mikestreety.co.uk/blog/2022-in-review/#geocaching] I said getting my 1000th find would be "easily doable"
as it was a "mere" 85 finds away (and I'd found 174 in 2022). I nearly didn't get it, however, and had to head out a couple of
times before the end of the year to just about clinch it. It was great having this final push of motivation, although I wish I'd
paid more attention sooner. My find rate dropped to just 0.263 per day (a 45% decrease) but I still got out, saw the countryside
and got some fresh air.

In 2024 I aim to find a similar amount to 2023 [https://www.mikestreety.co.uk/stats/#geocaches] (96), if not more. I'd like to
find at least 100, if not ~150.
				<p><strong>Read time:</strong> 7 mins</p>
				<p><strong>Tags:</strong> General, Ramblings, Annual Review</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Frameworks, Tools and Utility meta-packages for quicker configuration</title>
			<link>https://www.mikestreety.co.uk/blog/frameworks-tools-and-utility-meta-packages-for-quicker-configuration/</link>
			<pubDate>Thu, 14 Dec 2023 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/frameworks-tools-and-utility-meta-packages-for-quicker-configuration/</guid>
			<description><![CDATA[
There are a lot of fantastic tools, frameworks and utilities out there for helping with development in various ways. We are so
lucky to be living in a world where people make amazing stuff and then just Open Source it.

The one thing I often struggle with is the configuration of them all. We maintain a lot of sites and try to keep them in sync
with conventions, and having to repeat the config for each site is sometimes tedious - especially if we want to tweak a setting.

With our sites created in similar ways, we tend to configure a tool and then roll it out to all the other sites. We rarely have a
need to change anything between installs but, when we do, we tend to do it on all.

I've got into the habit of making meta packages - small repos which contain the dependencies and initial config that we've settled
on. We can then quickly roll it out (and update) without having to tediously find and replace across a range of git repos.

This blog post is a quick overview of meta-packages I've created for this exact purpose.


QUICK LINKS

 * Docusaurus
   * npm i @liquidlight/docusaurus-framework --save
   * docusaurus-framework on GitHub [https://github.com/liquidlight/docusaurus-framework]
 * BackstopJS
   * npm i @liquidlight/backstopjs-framework --save-dev
   * backstopjs-framework on GitHub [https://github.com/liquidlight/backstopjs-framework]
 * Playwright
   * npm i @liquidlight/playwright-framework --save-dev
   * playwright-framework on GitHub [https://github.com/liquidlight/playwright-framework]


DOCUSAURUS - FOR GENERATING DOCUMENTATION

npm i @liquidlight/docusaurus-framework --save


This package allows for a lightweight Docusaurus [https://docusaurus.io/] site to be set up in an existing repo (e.g. in a docs
folder). It requires a small docusaurus.config.js and a minimal package.json which then generates a nice-looking, clean
documentation site with some sensible defaults.

module.exports = require('@liquidlight/docusaurus-framework/docusaurus.config')({
	title: 'Liquid Light',
});

The framework can be configured further and includes the Mermaid charting library, basic CSS, favicon and image overrides. It
also includes defaults for a blog, should you want that enabled too.

View docusaurus-framework on GitHub [https://github.com/liquidlight/docusaurus-framework].


BACKSTOPJS - FOR VISUAL REGRESSION TESTING

npm i @liquidlight/backstopjs-framework --save-dev


This framework reduces the config needed to get started with BackstopJS [https://github.com/garris/BackstopJS] and introduces a
hierarchy for URLs, allowing you to specify a base domain and add several pages to that site. It allows options and parameters to
be passed at the domain level, instead of on every page tested on the site.

Create a backstop.config.js to begin your visual regression tests.

module.exports = require('@liquidlight/backstopjs-framework')([
	{
		envs: {
			test: {
				domain: 'https://www.liquidlight.co.uk',
			},
		},
		paths: [
			{
				label: 'Homepage',
				path: '/',
			}
		],
	}
]);

Along with masking a lot of the complexity, the framework offers the ability to configure BackstopJS directly should a specific
option be required.

View backstopjs-framework on GitHub [https://github.com/liquidlight/backstopjs-framework].


PLAYWRIGHT FRAMEWORK - FOR CENTRALISED TESTING

npm i @liquidlight/playwright-framework --save-dev


This provides an opinionated meta-framework for Playwright [https://playwright.dev/]. It allows for quick setup of testing on
multiple devices for each test. Playwright will only allow one device per Project, so this loops through Sites (a new concept) and
creates a Playwright Project for each site & device.
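Conceptually, that expansion is a cross product of sites and devices. A simplified sketch of the idea - not the framework's actual code, and the names and object shapes here are illustrative:

```javascript
// Expand each site into one Playwright-style "project" per device.
// Purely an illustration of the concept; the real framework does more.
function expandSites(sites, deviceNames) {
	const projects = [];
	for (const site of sites) {
		for (const device of deviceNames) {
			projects.push({
				name: `${site.label} - ${device}`,
				use: { baseURL: site.envs.local, device },
			});
		}
	}
	return projects;
}

const projects = expandSites(
	[{ label: 'Site name', envs: { local: 'https://ll.ddev.site' } }],
	['Desktop Chrome', 'iPhone 13']
);
console.log(projects.map(p => p.name));
// → [ 'Site name - Desktop Chrome', 'Site name - iPhone 13' ]
```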

For TYPO3 users, there is an included function which will parse your `config.yaml` and create the projects for you.

import { defineConfig } from '@playwright/test';

module.exports = defineConfig(require('@liquidlight/playwright-framework')(
	{
		label: 'Site name',
		envs: {
			local: 'https://ll.ddev.site',
		},
		project: {
			testDir: './tests/'
		}
	}
));

View playwright-framework on GitHub [https://github.com/liquidlight/playwright-framework].
				<p><strong>Read time:</strong> 3 mins</p>
				<p><strong>Tags:</strong> Node, NPM, Docusaurus, BackstopJS, Playwright</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Accessing iframe content and JavaScript variables from Puppeteer</title>
			<link>https://www.mikestreety.co.uk/blog/accessing-iframe-content-and-javascript-variables-from-puppeteer/</link>
			<pubDate>Wed, 29 Nov 2023 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/accessing-iframe-content-and-javascript-variables-from-puppeteer/</guid>
			<description><![CDATA[
Following on from the previous post about logging in and saving cookies with Puppeteer
[https://www.mikestreety.co.uk/blog/login-with-puppeteer-and-re-use-cookies-for-another-window/], I also needed to access content
and, more specifically, a JavaScript variable present within the iframe itself from within Puppeteer, as this contained
information I was hunting down.

A working example of this code can be found in this git repository [https://github.com/liquidlight/puppeteer-typo3-translations].

In this example, we will be loading Wikipedia [https://www.wikipedia.org/] with an iframe tester
[https://iframetester.com/?url=https://www.wikipedia.org/]. Wikipedia has an rtlLangs variable available on the page, which we
will be accessing.


WHAT IS PUPPETEER?

Puppeteer [https://pptr.dev/] is a Node/NPM package which allows you to create & control a headless Chrome instance, letting you
do front-end/UI based tasks programmatically. It is hugely powerful and worth investigating if that is your thing. One of the
most common examples is opening a page and taking a screenshot, or submitting a form for testing.


SETUP

For this, we are going to be working in a single JavaScript file - make a new one called iframe.js in a fresh folder (or one
where you are adding this functionality).


INSTALL THE DEPENDENCIES

The only dependency we need for this is puppeteer.

npm i puppeteer --save


SET UP THE SCRIPT

Inside your iframe.js, add the following skeleton Puppeteer code:

const puppeteer = require('puppeteer');

// Our main function
const run = async () => {
    // Create a new puppeteer browser
    const browser = await puppeteer.launch({
        // Change to `false` if you want to open the window
        headless: 'new',
    });

    // Create a new page in the browser
    const page = await browser.newPage();


    // Close the browser once you have finished
    browser.close();
}

// Run it all
run();


Once saved, you can run the following to start your script

node iframe.js


FIND YOUR IFRAME

Once you have your code set up and running, the next step is to load (goto) the page with the iframe and locate the iframe in the
source. It can be located via an ID or an HTML selector.

Note: When selecting your iframe, be careful of who has control over the HTML and consider if the structure could change or if
more than one iframe could appear on the page. Have a look at the docs [https://pptr.dev/guides/query-selectors] about what kind
of selectors you can use.

const puppeteer = require('puppeteer');

const run = async () => {
    // Create a new puppeteer browser
    const browser = await puppeteer.launch({
        // Change to `false` if you want to open the window
        headless: 'new',
    });

    // Create a new page in the browser
    const page = await browser.newPage();

    // Go to the page and wait for everything to load - this ensures the iframe has loaded
    await page.goto('https://iframetester.com/?url=https://www.wikipedia.org/', {
        waitUntil: ['domcontentloaded', 'networkidle2'],
        timeout: 0
    });

+   // Get the iframe
+   const elementHandle = await page.$('#iframe-window');
+
+   // Get the `src` property to verify we have the iframe
+   const src = await (await elementHandle.getProperty('src')).jsonValue();
+
+   // Output the src
+   console.log(src);

    // Close the browser once you have finished
    await browser.close();
};

run();


ACCESS THE IFRAME CONTENT & VARIABLES

With our iframe loaded and verified, we can now access the content on the iframe. This can be done with the contentFrame()
function on our iframe variable.

const frame = await elementHandle.contentFrame();

Once in our frame, we can run evaluate, which is a function that allows you to evaluate JavaScript
[https://pptr.dev/guides/evaluate-javascript/] on the page (or, in this instance, the frame).

The rtlLangs argument is the name of the JavaScript variable on the page

const rtlLangs = await frame.evaluate('rtlLangs');

With that, the final code looks like:

const puppeteer = require('puppeteer');

const run = async () => {
    // Create a new puppeteer browser
    const browser = await puppeteer.launch({
        // Change to `false` if you want to open the window
        headless: 'new',
    });

    // Create a new page in the browser
    const page = await browser.newPage();

    // Go to the page and wait for everything to load - this ensures the iframe has loaded
    await page.goto('https://iframetester.com/?url=https://www.wikipedia.org/', {
        waitUntil: ['domcontentloaded', 'networkidle2'],
        timeout: 0
    });

    // Get the iframe
    const elementHandle = await page.$('#iframe-window');

    // Access the frame content of the selected iframe
    const frame = await elementHandle.contentFrame();

    // Evaluate the JavaScript variable and store the output
    const rtlLangs = await frame.evaluate('rtlLangs');

    // Log the output of the variable
    console.log(rtlLangs);

    // Close the browser once you have finished
    await browser.close();
};

run();

Once we have access to the frame, we can read JavaScript variables, access the HTML or navigate as we would on a normal page.
				<p><strong>Read time:</strong> 5 mins</p>
				<p><strong>Tags:</strong> Node, NPM, Puppeteer</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Login with Puppeteer and re-use cookies for another window</title>
			<link>https://www.mikestreety.co.uk/blog/login-with-puppeteer-and-re-use-cookies-for-another-window/</link>
			<pubDate>Thu, 23 Nov 2023 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/login-with-puppeteer-and-re-use-cookies-for-another-window/</guid>
			<description><![CDATA[
For a recent project I needed to automate something which was only available in the CMS via a login. To help speed the process up,
I created a script which can log in with supplied credentials and store the cookies in a local file. The main process can then use
these cookies to carry out the task rather than needing to log in each time.

A working example of this code can be found in this git repository [https://github.com/liquidlight/puppeteer-typo3-translations].


WHAT IS PUPPETEER?

Puppeteer [https://pptr.dev/] is a Node/NPM package which lets you create & control a headless Chrome instance, allowing you to
carry out front-end/UI tasks programmatically. It is hugely powerful and worth investigating if that is your thing. One of the
most common examples is opening a page and taking a screenshot, or submitting a form for testing.

In this instance, we are going to log in to the CMS and then store the cookies in a file.


LOGIN TO YOUR SITE

Below is the code to log in to the site and store the cookies - there is some explanation text afterwards with some more details.

To make this work, you need to install puppeteer to carry out the work; writing the file uses fs, Node's built-in filesystem
module. If you are using the cookies later in the same file, writing them out isn't required.


INSTALL THE DEPENDENCIES

npm i puppeteer --save


CREATING A LOGIN.JS

Save this code in a file (e.g. login.js) and then run it via command line (e.g. node login.js).

I would recommend changing the headless value to false while you are testing, as this opens the browser and allows you to watch
the code execute and spot any issues

// Require packages
const puppeteer = require('puppeteer');
const fs = require('fs');

// Login credentials
const url = '',
    username = '',
    password = '';

// Create a login function
const login = async () => {
    // Create a new puppeteer browser
    const browser = await puppeteer.launch({
        // Change to `false` if you want to open the window
        headless: 'new',
    });

    // Create a new browser page
    const page = await browser.newPage();

    // Go to the URL
    await page.goto(url);

    // Input username (selector may need updating)
    await page.type('input[type=text]', username);
    // Input password (selector may need updating)
    await page.type('input[type=password]', password);
    // Click the submit button
    await page.click('button[type=submit]');

    // Wait for a selector to be loaded on the page -
    // this helps make sure the page is fully loaded so you capture all the cookies
    await page.waitForSelector('main');

    const cookies = JSON.stringify(await page.cookies());
    fs.writeFileSync('./cookies.json', cookies);

    // Optional - session & local storage
    // const sessionStorage = await page.evaluate(() => JSON.stringify(sessionStorage));
    // fs.writeFileSync('./sessionStorage.json', sessionStorage);

    // const localStorage = await page.evaluate(() => JSON.stringify(localStorage));
    // fs.writeFileSync('./localStorage.json', localStorage);

    // Close the browser once you have finished
    await browser.close();
};

// Fire the function
login();

Read through the comments as they should help guide you where things may need altering - the main things to watch out for are the
field selectors when entering the username & password, and the selector used to detect when the page has loaded.

The other thing to watch out for (which this does not cater for) is 2FA. It may be that you need to open the browser window and
enter it yourself before proceeding.

You can also choose to store the session and local storage, should your application use this for authentication.


USING THE COOKIES

Once the above script has run, you should have a cookies.json file sitting alongside your login script. If you opted to also
collect the localStorage and sessionStorage, then these files will also exist.

Once again you will need puppeteer as a dependency, along with Node's built-in fs module to load the cookie file.

Create your secondary script which will utilise the cookies with the following code as a base:

// Load dependencies
const puppeteer = require('puppeteer');
const fs = require('fs');

// Load the cookies into the page passed in
const loadCookie = async (page) => {
    // Load the cookie JSON file
    const cookieJson = fs.readFileSync('./cookies.json');

    // Parse the text file as JSON
    const cookies = JSON.parse(cookieJson);

    // Set the cookies on the page
    await page.setCookie(...cookies);
}

// Our main function
const run = async () => {
    // Create a new puppeteer browser
    const browser = await puppeteer.launch({
        // Change to `false` if you want to open the window
        headless: 'new',
    });

    // Create a new page in the browser
    const page = await browser.newPage();

    // Load the cookies
    await loadCookie(page);

    // Load your super secure URL
    // await page.goto('https://super.secure/url');
    // Do more work
    // Profit

    // Close the browser once you have finished
    await browser.close();
}

// Run it all
run();

From there you can navigate through your system as if you were logged in.
				<p><strong>Read time:</strong> 5 mins</p>
				<p><strong>Tags:</strong> Node, NPM, Puppeteer</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Get your Pocket Casts data using the unofficial API and PHP</title>
			<link>https://www.mikestreety.co.uk/blog/get-your-pocket-casts-data-using-the-unofficial-api-and-php/</link>
			<pubDate>Tue, 03 Oct 2023 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/get-your-pocket-casts-data-using-the-unofficial-api-and-php/</guid>
			<description><![CDATA[
Pocket Casts is an Android podcast player I've been using for some time. I wanted a method of extracting the podcasts I had
listened to, so I could gather some stats and keep a history of my podcasts.

Pocket Casts doesn't offer an official API, unfortunately, but does have some data available on https://api.pocketcasts.com -
which I assume is what the web application uses to get data behind the scenes.

Using hints from the pocketcasts NPM package [https://www.npmjs.com/package/pocketcasts], I reverse engineered the code below,
which requires "logging in" and then accessing other endpoints with a Bearer token.

For the code below to work, you need to have access to the web player which, at the time of writing, requires "Pocket Casts Plus"
- the paid-for service.


LOGIN

Login is handled by POSTing to the following API endpoint with your email & password

https://api.pocketcasts.com/user/login


This can be achieved using cURL:

// Login credentials (replace with your own)
$data = [
    'email' => 'email@gmail.com',
    'password' => 'password'
];

// Initialize cURL
$ch = curl_init('https://api.pocketcasts.com/user/login');
// Set the request method to POST
curl_setopt($ch, CURLOPT_POST, true);
// Set the POST data
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($data));
// Return the response as a string rather than outputting it
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// Execute the request and get the response
$response = curl_exec($ch);
// Close the cURL handle
curl_close($ch);
// Decode the data
$data = json_decode($response, true);

Your $data variable will then be an array containing a token key - something like the below. This token is what you'll need to
access any of the other endpoints:

{
  "token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.sdfsdfsdfsdfsdfsdfsdfsdf.50z97ZEGJBckZN3vwBJ2u6UPX5Vsfieq4yFpUSDWELY",
  "uuid": "...",
  "email": "email@gmail.com"
}


With this token in hand, you can then request data
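The token handling itself is not PHP-specific; sketched here in Node for brevity, this shows the decode-and-build-a-header step in
isolation (the response body uses placeholder values, as in the example above):

```javascript
// The login response body (placeholder values)
const response = JSON.stringify({
    token: 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.xxx.yyy',
    uuid: '...',
    email: 'email@gmail.com'
});

// Decode the body - the equivalent of json_decode($response, true)
const data = JSON.parse(response);

// Build the header sent with every subsequent request
const authorization = `Authorization: Bearer ${data.token}`;

console.log(authorization);
```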


LISTENING HISTORY

To get a list of podcasts you've listened to, you can use the following endpoint:

https://api.pocketcasts.com/user/history


Tip: As a hint of what endpoints are available, scan the README [https://github.com/coughlanio/pocketcasts] of the pocketcasts
repo and compare to the list of resources [https://github.com/coughlanio/pocketcasts/blob/master/src/resources.js].

With your $data['token'] in hand, you can request your listening history:

$authorization = "Authorization: Bearer " . $data['token']; // Prepare the authorisation token
// Initialize cURL
$ch = curl_init('https://api.pocketcasts.com/user/history');
// Set the request method to POST
curl_setopt($ch, CURLOPT_POST, true);
// Inject the token into the header
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json', $authorization));
// Return the response as a string rather than outputting it
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// Execute the request and get the response
$response = curl_exec($ch);
// Close the cURL handle
curl_close($ch);
// Decode the data
$data = json_decode($response, true);

A lot of this code is similar to the login request above, so consider putting it in a function.

You can now access a lot of data on Pocket Casts using the examples above.

I'm not going to be using this, as the history doesn't tell you when you actually listened to each episode, nor does it give more
than 100 results on desktop - which is annoying.
				<p><strong>Read time:</strong> 3 mins</p>
				<p><strong>Tags:</strong> PHP, API</p>
			]]></description>
		</item>
		
		
		<item>
			<title>A release process for our NPM and Composer packages</title>
			<link>https://www.mikestreety.co.uk/blog/a-release-process-for-our-npm-and-composer-packages/</link>
			<pubDate>Sun, 01 Oct 2023 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/a-release-process-for-our-npm-and-composer-packages/</guid>
			<description><![CDATA[
				At Liquid Light, we maintain several public [https://github.com/orgs/liquidlight/repositories] and private packages for our TYPO3
installations. Over the last couple of years, we have honed the development and release process of these packages so that 6
developers can independently work on our extensions.


WORKFLOW

I'll go into more detail below, but the workflow is:

 * Make an Issue
 * Create a Merge Request
 * Work on the code
 * Update UPCOMING.md
 * Push branch and request peer review
 * Merge with merge commit & test the merged result
 * Release


MAKE AN ISSUE

The issue should contain as much information as possible. We have a few templates which feature some prompts, such as how to find
the feature/bug, whether it relates to a ticket on an internal system or whether there is a suggested solution. Sometimes, Issues
are raised retrospectively - even if you already have a fix, or have fixed it on a branch. For every fix or feature to the project
or package, an Issue should exist to give more context & background.

An example of a template we have can be found below - this is used for Bug reports. The HTML comments aren't rendered in the
issue, but help prompt the author. The checklist items at the end can be deleted depending on the requirements of the issue.

### Summary



### Steps to reproduce

<!-- If required, include expected & actual behaviour -->



### Code in Action

<!-- How do we see the fix? (either link or steps for recreation)
    Also outline what has been done to resolve -->



### More details, logs and/or screenshots

<!-- What should happen -->



## Checks and Tests

- [ ] Approved by peer/developer
- [ ] Lighthouse/perf/console checked
- [ ] Cross browser test
- [ ] Deployed to staging
- [ ] Gulp recompile
- [ ] Content/CMS updates required
- [ ] User permissions for non-admins need confirming
- [ ] Database schema needs updating

/label ~Bug



CREATE A MERGE REQUEST

Unlike the Issue, Merge (or Pull) Requests can be sparse. As a minimum, it should contain the ID of the issue it closes, but we
don't see a need to repeat all the data available in the issue itself. Merge Requests (and the corresponding branches) are
normally created from the built-in Gitlab (or Github) buttons and contain something like Closes #3


WORK ON THE CODE

Once the Issue, Merge Request (and branch) are in place, you can begin work. We have an in-house-developed linter, and we commit
using the Conventional Commits [https://www.conventionalcommits.org/] specification. I would recommend you set out coding styles &
guidelines and consider adding them to a CONTRIBUTING.md file or similar.


UPDATE UPCOMING.MD

A requirement for our Merge Requests on our packages is an update to UPCOMING.md. This is a file which lives in the root of the
project and follows the format of CHANGELOG.md. We even have a check in our Gitlab CI which ensures Merge Requests update this
file.

The purpose of this file is to build up the CHANGELOG as you go, rather than requiring the developer in charge of creating the
release to look back at the git history.

The CHANGELOG/UPCOMING files should be written for the reader and should not be a copy and paste of the git commit message. It
should include a link to the relevant issue.

The UPCOMING file should have the target SemVer [https://semver.org/] increment as an h1 (i.e. whether it is a Major, Minor or a
Patch) - you shouldn't put the actual target version number, as a release may occur before your branch gets merged.

The rest of the document follows the CHANGELOG format. An example can be found in our typo3-shortcodes repo on Github
[https://github.com/liquidlight/typo3-shortcodes/blob/4ea84704d1d8353823857d65e69981820c3baa71/UPCOMING.md].


PUSH BRANCH AND REQUEST PEER REVIEW

Once you have finished the work and have tested it, push it up to your target platform of choice (be it Github or Gitlab) and ask
for review. This could be a friend, colleague or a community on the internet.


MERGE WITH MERGE COMMIT & TEST THE MERGED RESULT

Once approved by a peer, it gets merged into the main branch. Before merging, main is rebased into our feature branch if any new
commits exist - this helps prevent merge conflicts and makes the commit history tell more of a story as to when features were
added, rather than being strictly linear in time.

When merging into our main branch, we ensure merge commits
[https://docs.gitlab.com/ee/user/project/merge_requests/methods/#merge-commit] are enabled to help debugging in the future and
tracing back should any issues arise.

Once the code is merged, it is tested with the current main branch to ensure nothing is broken.


RELEASE

Depending on the urgency of the fix, we may release it straight away, but generally we leave it for a few weeks to bed in and be
tested across various systems.

With our releases, we have a single commit which serves as the release commit. No other code changes should happen except to those
relating to releasing the package.

 * Move the contents of UPCOMING.md to CHANGELOG.md, set the version as the h1 and add the date
 * Update any meta files which contain versions (e.g. package.json or composer.json)
 * Commit the change with the changelog changes in the body of the commit (example on Github
   [https://github.com/liquidlight/typo3-shortcodes/commit/89c1ad7bd02f6280914ae2bd2a26ac7cef5226fd])
				<p><strong>Read time:</strong> 4 mins</p>
				<p><strong>Tags:</strong> Git</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Building and Pushing Docker images using Github Actions</title>
			<link>https://www.mikestreety.co.uk/blog/building-and-pushing-docker-images-using-github-actions/</link>
			<pubDate>Thu, 07 Sep 2023 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/building-and-pushing-docker-images-using-github-actions/</guid>
			<description><![CDATA[
We needed to build a Docker image via Github Actions and push it to the Package registry to create a public, shareable image.

Add the following in a .github/workflows folder with a .yml extension. You'll also need to add the following secrets to the
repository:

 * DOCKER_USERNAME - the username whose token you will be using
 * DOCKER_TOKEN - an access token with the write:packages permission

Once you have pushed the package, you will need to go to the Packages tab on the organisation (or user profile) and associate it
with the repository. Once that is done, the package appears on the right-hand side of the repository.


ACTION YAML EXAMPLE

This code was copied (and adapted) from the official Github Docs
[https://docs.github.com/en/actions/publishing-packages/publishing-docker-images] website.

I've added (but left commented out) an example of how to pass in a build argument too.

name: Create and publish a Docker image

on:
  push:
    branches: ['main']

env:
  REGISTRY: ghcr.io
  REPO_NAME: ${{ github.repository }}
  IMAGE_TAG: latest

jobs:
  build-and-push-image:
    runs-on: ubuntu-latest

    permissions:
      contents: read
      packages: write

    steps:
      # Hack to add an env variable built up of other env variables
      - name: Set docker image env var
        run: |
          echo "IMAGE_NAME=${{ env.REGISTRY }}/${{ env.REPO_NAME }}:${{ env.IMAGE_TAG }}" >> $GITHUB_ENV

      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Login to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_TOKEN }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          # build-args: |
          #   "ARG_KEY=VALUE"
          push: true
          tags: ${{ env.IMAGE_NAME }}
				<p><strong>Read time:</strong> 2 mins</p>
				<p><strong>Tags:</strong> Github, Docker</p>
			]]></description>
		</item>
		
		
		<item>
			<title>PHP Ternary and null coalescing operators</title>
			<link>https://www.mikestreety.co.uk/blog/php-ternary-and-null-coalescing-operators/</link>
			<pubDate>Thu, 10 Aug 2023 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/php-ternary-and-null-coalescing-operators/</guid>
			<description><![CDATA[
				I write PHP on a daily basis and often, as with most programming, need to check if something is there or not, or if it is what I
expect.

Beyond if statements, there are shorthand operators. Things like the Elvis operator (?:) or the null coalescing operator (??) can
be used to make your code leaner, cleaner and easier to read.

If you've not come across them before, here's a quick overview.


ELVIS OPERATOR

$output = $value ?: 'default' is the equivalent of writing the following

if ($value) {
	$output = $value;
} else {
	$output = 'default';
}

or, as a shorter one-liner:

$output = $value ? $value : 'default';


NULL COALESCING OPERATOR

The null coalescing operator ($output = $value ?? 'default') is slightly different:

if (isset($value)) {
	$output = $value;
} else {
	$output = 'default';
}

or

$output = isset($value) ? $value : 'default';

Null coalescing operators can be chained too, which makes them more powerful:

$value = $_GET['user'] ?? $_POST['user'] ?? 'nobody';
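For comparison, JavaScript's nullish coalescing operator (??) chains the same way - each ?? only falls through on null or
undefined, never on other falsy values (a quick illustrative sketch; the variable names are made up):

```javascript
const fromQuery = undefined; // e.g. no ?user= parameter
const fromBody = null;       // e.g. no POSTed user

// Each ?? only falls through on null/undefined
const user = fromQuery ?? fromBody ?? 'nobody';
console.log(user); // nobody

// Unlike ||, it does not fall through on falsy-but-set values
const count = 0 ?? 10;
console.log(count); // 0
```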


COMPARISON

They are similar, but they do have their differences and can sometimes trip you up with unexpected results. I've drawn up a table
below which should help identify which one you need.

 * $v signifies the variable in the leftmost column
 * A ⚠️ signifies a PHP warning is thrown in 8.2.

| $v                                      | $v ?? 'default' | $v ?: 'default' |
|-----------------------------------------|-----------------|-----------------|
| $non_var;                               | default         | ⚠️ default      |
| $null = null;                           | default         | default         |
| $booly = '1';                           | 1               | 1               |
| $bool = true;                           | 1               | 1               |
| $empty_string = '';                     | ''              | default         |
| $string = 'string';                     | string          | string          |
| $zero_string = '0';                     | 0               | default         |
| $zero = 0;                              | 0               | default         |
| $array['non_key']                       | default         | ⚠️ default      |
| $array['null_key'] = null;              | default         | default         |
| $array['string_key'] = 'string';        | string          | string          |
| $array['zero_key'] = 0;                 | 0               | default         |
| $array['number_key'] = 2;               | 2               | 2               |
| $array['sub_array_array'] = [];         | Array           | default         |
| $array['sub_array'] = [...];            | Array           | Array           |
| $array['sub_array']['non_key']          | default         | default         |
| $array['sub_array']['null_key'] = null; | default         | default         |

The code I used to check can be seen on onlinephp [https://onlinephp.io/c/67e66] - I've also included it below

$null = null;

$booly = '1';

$bool = true;

$empty_string = '';

$string = 'string';

$zero_string = '0';

$zero = 0;

$array = [
	'null_key' => null,
	'string_key' => 'string',
	'zero_key' => 0,
	'number_key' => 2,
	'sub_array_array' => [],
	'sub_array' => [
		'null_key' => null,
		'string_key' => 'string',
	]
];

echo '| $non_var | ' . ($non_var ?? 'default') . ' | ' . ($non_var ?: 'default') . ' |<br>';
echo '| $null | ' . ($null ?? 'default') . ' | ' . ($null ?: 'default') . ' |<br>';
echo '| $booly | ' . ($booly ?? 'default') . ' | ' . ($booly ?: 'default') . ' |<br>';
echo '| $bool | ' . ($bool ?? 'default') . ' | ' . ($bool ?: 'default') . ' |<br>';
echo '| $empty_string  | ' . ($empty_string ?? 'default') . ' | ' . ($empty_string ?: 'default') . ' |<br>';
echo '| $string | ' . ($string ?? 'default') . ' | ' . ($string ?: 'default') . ' |<br>';
echo '| $zero_string | ' . ($zero_string ?? 'default') . ' | ' . ($zero_string ?: 'default') . ' |<br>';
echo '| $zero | ' . ($zero ?? 'default') . ' | ' . ($zero ?: 'default') . ' |<br>';
echo '| $array[non_key] | ' . ($array['non_key'] ?? 'default') . ' | ' . ($array['non_key'] ?: 'default') . ' |<br>';
echo '| $array[null_key] | ' . ($array['null_key'] ?? 'default') . ' | ' . ($array['null_key'] ?: 'default') . ' |<br>';
echo '| $array[string_key] | ' . ($array['string_key'] ?? 'default') . ' | ' . ($array['string_key'] ?: 'default') . ' |<br>';
echo '| $array[zero_key] | ' . ($array['zero_key'] ?? 'default') . ' | ' . ($array['zero_key'] ?: 'default') . ' |<br>';
echo '| $array[number_key] | ' . ($array['number_key'] ?? 'default') . ' | ' . ($array['number_key'] ?: 'default') . ' |<br>';
echo '| $array[sub_array] | ' . ($array['sub_array'] ?? 'default') . ' | ' . ($array['sub_array'] ?: 'default') . ' |<br>';
echo '| $array[sub_array_array] | ' . ($array['sub_array_array'] ?? 'default') . ' | ' . ($array['sub_array_array'] ?: 'default') . ' |<br>';
echo '| $array[sub_array][non_key] | ' . ($array['sub_array']['non_key'] ?? 'default') . ' | ' . ($array['sub_array']['non_key'] ?: 'default') . ' |<br>';
echo '| $array[sub_array][null_key] | ' . ($array['sub_array']['null_key'] ?? 'default') . ' | ' . ($array['sub_array']['null_key'] ?: 'default') . ' |<br>';
echo '| $array[sub_array][string_key] | ' . ($array['sub_array']['string_key'] ?? 'default') . ' | ' . ($array['sub_array']['string_key'] ?: 'default') . ' |<br>';
				<p><strong>Read time:</strong> 8 mins</p>
				<p><strong>Tags:</strong> PHP</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Build different images with multi-stage Docker builds</title>
			<link>https://www.mikestreety.co.uk/blog/build-different-images-with-multi-stage-docker-builds/</link>
			<pubDate>Sun, 06 Aug 2023 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/build-different-images-with-multi-stage-docker-builds/</guid>
			<description><![CDATA[
				I've written before about multi-stage Docker builds
[https://www.mikestreety.co.uk/blog/creating-a-multi-stage-docker-build-to-make-your-images-smaller/] to help you make smaller
images, however today I discovered you can build images out of each of the stages.

The advantage of this is being able to create two, or more, images with incremental features, allowing you to use the leanest
image for each task.

The use-case I had was for our Gitlab CI and the images we use for deployment. Our base image needed PHP, Composer and Node,
however we needed an additional image which also included Docker - so we could build a Docker image inside the CI.

Rather than bloat all the base images or use two separate Dockerfile files, I have leant on multi-stage Docker builds to make all
the images I need.


DOCKERFILE

Let's start with the Dockerfile. The one below creates a minimal Linux (Alpine) based image with PHP (version passed in via a CLI
argument), NPM and the tools needed for image optimisation (such as gulp-imagemin):

###
# Global Arguments
###
ARG PHP_VERSION

###
# Set global component images
###
FROM composer:2 as COMPOSER

FROM php:$PHP_VERSION-cli-alpine3.16

# Copy artifacts from component images
COPY --from=COMPOSER /usr/bin/composer /usr/bin/composer

# Install dependencies
RUN apk add \
	--update \
	--no-cache \
	# Deployment
	bash \
	git \
	openssh \
	rsync \
	# Front-end tools
	nodejs \
	npm \
	# Tools for imagemin
	autoconf \
	automake \
	g++ \
	gcc \
	jpeg \
	libc6-compat \
	libjpeg-turbo-dev \
	libpng-dev \
	libtool \
	make \
	musl-dev \
	nasm \
	tiff \
	zlib \
	zlib-dev

# Create SSH config
RUN mkdir /root/.ssh \
	&& touch /root/.ssh/id_ed25519 \
	&& chmod 700 /root/.ssh; \
	chmod 600 /root/.ssh/id_ed25519;

ENTRYPOINT ["/bin/sh", "-c"]

We can build this, and specify the PHP version when doing so:

docker build \
	--tag deployment:php8.1 \
	--build-arg PHP_VERSION=8.1 \
	.

If we wanted to create a second image with more applications than the first, we can do this with staged builds.

By giving each stage a name, you can use the --target argument to stop the build at the end of that stage.

Update the FROM to include a name by using the as keyword:

FROM php:$PHP_VERSION-cli-alpine3.16 AS image_baseline

Next, add another stage at the end of the file and use the image_baseline as the image and give it a name (in this example
image_dind {Docker in Docker})

FROM image_baseline AS image_dind
# Install dependencies
RUN apk add \
	--update \
	--no-cache \
	# Deployment
	docker

We can now build a Docker image from both the image_baseline and image_dind stages:

# Build an image_baseline image
docker build \
	--target image_baseline \
	--tag deployment:php8.1 \
	--build-arg PHP_VERSION=8.1 \
	.

# Build an image from image_baseline with Docker
docker build \
	--target image_dind \
	--tag deployment/docker:php8.1 \
	--build-arg PHP_VERSION=8.1 \
	.

From there, you can build on the image_baseline again or even the image_dind stage. You get all the benefits of a tidy filesystem
along with each stage being cached. You also get to keep your images as small as they need to be - it's a win, win, win.


BONUS GITLAB CI

I use Gitlab CI to build these Docker images, creating a pair for each PHP version from 7.4 - 8.2. Rather than repeat lots of
code, I utilise the extends keyword in CI.

This is the .gitlab-ci.yml file I use to build 8 different Docker images:

image: docker:20.10.24

stages:
  - build

services:
    - docker:20.10.24-dind

.build:
  stage: build
  interruptible: true
  variables:
    COMPOSER_VERSION: "2"
    DOCKER_TLS_CERTDIR: "/certs"
    DOCKER_BUILDKIT: 1
  before_script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login $CI_REGISTRY --username $CI_REGISTRY_USER --password-stdin
  script:
    # Build baseline image
    - >
      docker build
      --target image_baseline
      --tag $CI_REGISTRY_IMAGE:php${PHP_VERSION}
      --build-arg PHP_VERSION=${PHP_VERSION}
      .
    - docker push $CI_REGISTRY_IMAGE:php${PHP_VERSION}
    # Build dind image: Baseline with docker installed
    - >
      docker build
      --target image_dind
      --tag $CI_REGISTRY_IMAGE/docker:php${PHP_VERSION}
      --build-arg PHP_VERSION=${PHP_VERSION}
      .
    - docker push $CI_REGISTRY_IMAGE/docker:php${PHP_VERSION}

build:7.4:
  extends:
    - .build
  variables:
    PHP_VERSION: "7.4"

build:8.0:
  extends:
    - .build
  variables:
    PHP_VERSION: "8.0"

build:8.1:
  extends:
    - .build
  variables:
    PHP_VERSION: "8.1"

build:8.2:
  extends:
    - .build
  variables:
    PHP_VERSION: "8.2"
				<p><strong>Read time:</strong> 4 mins</p>
				<p><strong>Tags:</strong> Docker, Gitlab CI</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Install private composer packages with CI and Deployer</title>
			<link>https://www.mikestreety.co.uk/blog/install-private-composer-packages-with-ci-and-deployer/</link>
			<pubDate>Thu, 03 Aug 2023 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/install-private-composer-packages-with-ci-and-deployer/</guid>
			<description><![CDATA[
				Installing private composer packages can be a bit like Crufts - you sometimes have to jump through so many hoops and tunnels and
all you get at the end is a belly rub.

However you host your packages, the general theory is the same. I would advise finding a host which can act like a library
endpoint rather than individual Git repos. Something like Github, Gitlab or Bitbucket can do this for you.

This post assumes you have a private composer repository host and are looking to access it using tokens for CI and deployment
purposes. I'll be using Gitlab as my endpoint, but the code can be substituted for any other host.

Setting up authorisation locally can be done by running composer config with the --auth flag. This creates a local auth.json file in your repository (be careful not to commit this). If you use the same private packages across a few different sites, you can add --global to write to the global auth file instead. Mine ended up looking something like the below:

{
   "gitlab-domains":[
      "private.gitlab.com"
   ],
   "gitlab-token":{
      "private.gitlab.com": "gplat-token"
   }
}

 * private.gitlab.com is the URL to my private Gitlab instance
 * gplat-token is an access token generated with api access
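Rather than writing the file by hand, the same auth.json can be generated from the command line - a sketch using the placeholder host and token from above (worth double-checking the flag syntax against the Composer docs for your version):

```shell
# Store the Gitlab token in this project's auth.json
# (host and token are the placeholder values from above)
composer config --auth gitlab-token.private.gitlab.com gplat-token

# Or store it globally for all projects:
# composer config --global --auth gitlab-token.private.gitlab.com gplat-token
```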

This allows me to install my private packages locally - but how do we use it in Gitlab?


GITLAB CI

CI or not, an alternative way of passing in Composer authorisation details is setting a COMPOSER_AUTH environment variable. With
Gitlab CI, setting an environment variable can be done using the variables keyword [https://docs.gitlab.com/ee/ci/variables/]

variables:
  COMPOSER_AUTH: '{"gitlab-domains": ["private.gitlab.com"],"gitlab-token": {"private.gitlab.com": "gplat-token"}}'

With this in place, you should be able to install your private packages. However, committing the URL and token to your repository is not a good idea. Fortunately, Gitlab has some predefined variables [https://docs.gitlab.com/ee/ci/variables/predefined_variables.html] we can utilise.

First, create a CI/CD variable in the UI [https://docs.gitlab.com/ee/ci/variables/#define-a-cicd-variable-in-the-ui] called
COMPOSER_TOKEN.

We can then use it, along with the predefined $CI_SERVER_HOST variable, to build up our COMPOSER_AUTH environment variable:

variables:
  COMPOSER_AUTH: '{"gitlab-domains": ["$CI_SERVER_HOST"],"gitlab-token": {"$CI_SERVER_HOST": "$COMPOSER_TOKEN"}}'
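To see what Gitlab's variable expansion produces, the substitution can be simulated in a local shell - the host and token below are the placeholder values from earlier:

```shell
# Hypothetical values standing in for Gitlab's predefined/UI variables
CI_SERVER_HOST="private.gitlab.com"
COMPOSER_TOKEN="gplat-token"

# Build the same JSON string the CI variable expands to
COMPOSER_AUTH="{\"gitlab-domains\": [\"$CI_SERVER_HOST\"],\"gitlab-token\": {\"$CI_SERVER_HOST\": \"$COMPOSER_TOKEN\"}}"
echo "$COMPOSER_AUTH"
```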


DEPLOYER

If you use PHP Deployer [https://deployer.org/] to push your code live, it will need access to this auth variable on the target
server to install your packages. This can be passed through with Deployer's env setting:

set('env', [
	'COMPOSER_AUTH' => getenv('COMPOSER_AUTH'),
]);
				<p><strong>Read time:</strong> 2 mins</p>
				<p><strong>Tags:</strong> Composer, Gitlab CI, Deployer</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Open Door Meetings - How I can bother my colleagues without interrupting them</title>
			<link>https://www.mikestreety.co.uk/blog/open-door-meetings-at-work/</link>
			<pubDate>Sun, 16 Jul 2023 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/open-door-meetings-at-work/</guid>
			<description><![CDATA[
I was listening to Episode 216 [https://www.codingblocks.net/podcast/better-application-management-with-custom-apps/] of the
Coding Blocks podcast and the conversation of "Open Jim" hours came up. The idea is that a senior developer has some set hours
(I think it was mentioned between 9-11am) where they can be interrupted, bothered and questioned. After that, the developer can
get their head down on some work. No-one enjoys being forced to context switch, and set hours are often posed as a solution.

Naturally, senior developers are the ones who are asked more of - be it by the sales & project management teams to get quotes or
by other developers for knowledge sharing. No matter how much documentation you have, there will always be a need to talk to
people about something.

The "interruptible hours" solution tries to contain the context switching, but has several flaws: what if your problem occurs 10
minutes after the allotted time period? What if you are waiting for the next day and the senior is ill, or on holiday? What if
there is a critical error to be solved during those hours? What if the person in the queue in front of you takes up all the time?
What if a client meeting is scheduled during the hours?

My other issue with this process is that it seems quite elitist; it makes everyone else in the company wait for you to be free -
regardless of how blocking their question is.

With these issues in mind, I set out to see if I could find a solution which would solve most (if not all) of them and also try
and remove the appearance of "my time is more important than yours". We all have work to be getting on with - there is no reason
why other people should wait 24 hours for something which could, potentially, be solved in 20 minutes or less.

What I settled on was Appointments (we call it "Open Door") - for scheduling these we use Google Calendar's built in appointment
schedule [https://support.google.com/calendar/answer/10729749] but I can imagine you could use a service like cal.com.

With Google Appointments, you get a good amount of customisation which means you can be quite flexible with how you set the
schedule up. I have 15 minute appointments with a 5 minute break between them (should people book them back-to-back). The meetings
come with a Google Meet link and you can add additional questions. For example, I have the following:

 * What would you like to talk about?
 * What is the link to the Merge Request/ticket?

These allow me to be prepared for any upcoming open doors and, in some instances, I can address the problem beforehand - which
removes the need for the meeting entirely.

One of the best features is that you can specify how far in advance appointments can be booked. On mine, I have this set to an
hour, meaning I know I have the next hour free, regardless of what comes up. It also means that if someone is blocked, they only
need to wait an hour at most before we can talk.

Google Appointments also checks against my calendar, ensuring no-one can book a meeting while I'm in another meeting or Open
Door.

We've had this process running for a while and it seems to be working well. Other directors and developers have set up their own
Open Doors.

Is there another solution, or another way this problem can be solved? I'd love to hear [https://hachyderm.io/@mikestreety] how
else it has been addressed.
				<p><strong>Read time:</strong> 3 mins</p>
				<p><strong>Tags:</strong> General, Ramblings</p>
			]]></description>
		</item>
		
		
		<item>
			<title>Docker image with Node, PHP and Composer</title>
			<link>https://www.mikestreety.co.uk/blog/docker-image-wth-node-php-and-composer/</link>
			<pubDate>Wed, 12 Jul 2023 00:00:00 GMT</pubDate>
			<guid>https://www.mikestreety.co.uk/blog/docker-image-wth-node-php-and-composer/</guid>
			<description><![CDATA[
				Website deployment strategies are tricky things to get into. There are so many ways and means of deploying your application to the
web that it is hard to pick one. That's why, when you find one you like, you just need to stick with it and tweak it, rather than
trying to re-invent the wheel.

Our TYPO3 websites are deployed using PHP Deployer [https://deployer.org/] - I wrote a blog post about deploying a Lumen app
[https://www.mikestreety.co.uk/blog/automatically-deploying-your-lumen-app-with-php-deployer-and-zero-downtime-so-you-dont-have-to-manually-do-it/]
with it a couple of years ago.

Instead of deploying via the local command line, we use Gitlab CI to do the heavy lifting and ensure a single source of truth for
our deployments. Because of this, we need a Docker image which contains the tech we need to build and deploy our application. Our
websites are built on TYPO3, which uses PHP and MySQL, with Composer as a dependency manager. Our front-end assets are built with
Gulp, which uses NPM as a dependency manager.

So we need an image with:

 * PHP
 * Composer
 * NPM

MySQL isn't required, as the application doesn't actually run or need a database while being built.

Our original image used bullseye as a base image, but with everything installed it came out at just over 500MB. The new one
(below) is built on Alpine - a Linux distribution specifically designed for containers - and comes in at around 130MB.


THE DOCKERFILE

# Global Arguments
ARG PHP_VERSION

# Set component images
FROM composer:2 as COMPOSER

# Create base image
FROM php:$PHP_VERSION-cli-alpine3.16

# Copy artifacts from component images
COPY --from=COMPOSER /usr/bin/composer /usr/bin/composer

# Install dependencies
RUN apk add \
	--update \
	--no-cache \
	# Deployment
	bash \
	git \
	rsync \
	# Front-end tools
	nodejs \
	npm \
	# Tools for imagemin
	autoconf \
	automake \
	g++ \
	openssh \
	libc6-compat \
	libjpeg-turbo-dev \
	libpng-dev \
	make \
	nasm

ENTRYPOINT ["/bin/sh", "-c"]
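With the /bin/sh -c entrypoint, the image can run a whole command string in one go, which is handy in CI scripts. A quick sanity check of the toolchain might look like this (the image name is illustrative - substitute your own tag):

```shell
# Verify PHP, Composer, Node and NPM inside a freshly built image
docker run --rm my-registry/ci-image:php8.2 "php -v && composer --version && node -v && npm -v"
```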

Once I had culled the dependencies to just what we need, the resulting file ended up quite tidy. There are some dependencies in
there for gulp-imagemin which, if you don't use it, you can remove to make the image even smaller.

The only build argument this Dockerfile takes is PHP_VERSION, so you can build images based on whichever PHP version you would like.
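For example, building the image locally for a single PHP version looks something like this (the tag name is illustrative):

```shell
# Build the image with PHP 8.2; PHP_VERSION feeds the ARG in the Dockerfile
docker build --build-arg PHP_VERSION=8.2 --tag my-registry/ci-image:php8.2 .
```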


GITLAB CI IMAGE BUILDING

The CI pipeline which builds the image looks like the following (yes, I know 7.4 is deprecated, but we need it for legacy reasons
😉)

image: docker:20.10.24

stages:
  - build

services:
  - docker:20.10.24-dind

.build:
  stage: build
  interruptible: true
  variables:
    COMPOSER_VERSION: "2"
    DOCKER_TLS_CERTDIR: "/certs"
    DOCKER_BUILDKIT: 1
  before_script:
    - echo "$DOCKER_REGISTRY_PASS" | docker login $DOCKER_REGISTRY --username $DOCKER_REGISTRY_USER --password-stdin
  script:
    - >
      docker build
      --tag $CI_REGISTRY_IMAGE:php${PHP_VERSION}
      --build-arg PHP_VERSION=${PHP_VERSION}
      .
    - docker push $CI_REGISTRY_IMAGE:php${PHP_VERSION}

build:7.4:
  extends:
    - .build
  variables:
    PHP_VERSION: "7.4"

build:8.0:
  extends:
    - .build
  variables:
    PHP_VERSION: "8.0"

build:8.1:
  extends:
    - .build
  variables:
    PHP_VERSION: "8.1"

build:8.2:
  extends:
    - .build
  variables:
    PHP_VERSION: "8.2"

This then builds four different images, one for each PHP version.
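As an aside, newer Gitlab versions support a parallel matrix, which could collapse the four near-identical jobs into one definition - a sketch, assuming a Gitlab version where parallel:matrix is available (13.10 or later):

```yaml
# One job definition, expanded by Gitlab into a build per PHP version
build:
  extends: .build
  parallel:
    matrix:
      - PHP_VERSION: ["7.4", "8.0", "8.1", "8.2"]
```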
				<p><strong>Read time:</strong> 3 mins</p>
				<p><strong>Tags:</strong> Docker, Node, Composer, PHP</p>
			]]></description>
		</item>
		

	</channel>
</rss>
