Category Archives: Code

Using ACF Blocks by default in a new post in WordPress

One of my clients has a website using Advanced Custom Fields (ACF) Blocks to style everything in the site, including their News section.

To make a new post in News, they had to load a Pattern of the relevant Blocks. That is a multi-step process, and a bit annoying if you just want to get on with posting your article, so I looked for a way to load the Blocks into a new post by default.

You can do this by putting this code in your functions.php (with the amendments explained below):

function site_post_register_template() {
    $post_type_object = get_post_type_object( 'post' );
    $post_type_object->template = array(
        array( 'my-blocks/news-header' ),
        array( 'my-blocks/news-article' ),
        array( 'my-blocks/news-article-related' ),
    );
}
add_action( 'init', 'site_post_register_template' );

In this case, I have three Blocks loading – the specific header, the article, and a strip of related articles. I can load in as many Blocks as I want.

To make this work for you, find where you have saved your blocks and list them out in the same way, with an array entry for each.

You will have to change ‘my-blocks’ to the prefix of your ACF Blocks. To find that prefix, go to the directory where you store the code for the relevant Block and open the block.json file. Copy the value from the “name” line into the array( '' ) entries above.
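For example, a block.json might start something like this (the names here are made up); the value on the “name” line is what goes into each array( '' ) entry:

{
    "name": "my-blocks/news-header",
    "title": "News Header"
}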

Now, when a new Post is started, it automatically loads in the ACF Blocks, and my client can get on with posting their news without having to think about how to load the Pattern or where they left my instructions for doing so.

Fixing a Mailgun API unknown domain error

I’m using the Mailgun API for a couple of clients, making sure we don’t keep sending email to someone who has marked their previous message as spam.

It should be quite simple to use as the docs are very clear, but I kept getting an ‘unknown domain’ error in the returned message when I used the API with one of my client’s domains rather than the sandbox domain that Mailgun provides.

The fix was to use the EU address for the API, api.eu.mailgun.net, rather than the standard api.mailgun.net. As my clients are in the UK, their accounts are hosted on the EU servers rather than the American ones.
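As a rough sketch of what that looks like in PHP (the domain and API key are placeholders, and I’m using the complaints list, which records addresses that have marked messages as spam, as the example endpoint), the only change needed was the base URL:

// Placeholders: swap in your own domain and private API key.
$domain = 'mg.example.co.uk';
$apiKey = 'your-private-api-key';

// Note api.eu.mailgun.net rather than api.mailgun.net.
$ch = curl_init("https://api.eu.mailgun.net/v3/{$domain}/complaints");
curl_setopt($ch, CURLOPT_USERPWD, 'api:' . $apiKey);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$response = curl_exec($ch);
curl_close($ch);

echo $response; // JSON listing addresses that have complained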

I didn’t find a simple suggestion to do that, so I’m writing this so I find it next time this trips me up.

Fixing minor issues moving code from Railo to Lucee ColdFusion

I have a couple of personal projects at a host that offers the ColdFusion web programming language. I have a soft spot for ColdFusion as it was my first programming language, and I still find it very easy to put sites together in it: it comes with easy-to-use functionality built in that the other server-side language I use a lot, PHP, only gets through frameworks added by others.

Adobe, the current owners of the official ColdFusion engine, charge rather a lot for licenses, and that has a knock-on effect on the price of hosting. So I use Viviotech, who offer the open source engines that run the ColdFusion language. The popular one of those was Railo, and is now Lucee. Viviotech recently shifted my sites onto Lucee, which I’d been meaning to try as Railo is old and now discontinued.

I hit a couple of problems:

Gotcha 1: No CGI.PATH_INFO

In the project, I redirect all requests through a single routing file. Within that file, I was using CGI.PATH_INFO to get the path of the page being requested. That stopped working, which turned out to be because the host is using Nginx, which doesn’t have PATH_INFO turned on by default. There are ways of making it work, but I didn’t want to be raising support requests for that, and it might not have been allowed on my cheap-as-chips hosting package.

Instead, I make the redirect in the .htaccess file send the path through for me.

My redirect went from this:
RewriteRule ^(.*)$ ./routing.cfm/$1 [L]
To this:
RewriteRule ^(.*)$ /routing.cfm?path=$1 [NC,L,QSA]
That gives me the path of the page being requested in URL.path (apart from the / that is normally at the start, which I needed for part of my URL detection, so I added it back on).

I rewrote my code to use the new URL.path (with the added /) instead of CGI.PATH_INFO, and that got it working again.
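In the routing file, the change amounted to something like this (just a sketch; the variable name is mine):

<!--- Put the leading / back so the value matches what CGI.PATH_INFO used to give me --->
<cfset requestedPath = "/" & url.path>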

Gotcha 2: Saving files needs permissions set

In one of the sites, I get an image from an API that makes screenshots, and save it locally so I don’t have to use the API over and over. That means getting the image using CFHTTP, then saving it using CFFILE.

That worked, but I couldn’t open the files. The fix was to use mode="644" within CFFILE. This sets the file permissions so the image file can be read by the world and will show up on a web page.

<cffile
    action = "write"
    file = "<path><filename>"
    output = "#cfhttp.fileContent#"
    mode = "644">

Improvement: Can read SSL protected RSS feeds

Railo couldn’t read RSS feeds that were protected by SSL certificates from Let’s Encrypt and some of the other cheap/free SSL providers. Lucee can.

That’s great, as I had made a very basic proxy (not really worthy of the word): a PHP script on some other hosting that would request the SSL-protected RSS feed and then pass it on without SSL. Not great for security (although these are all public posts it was reading). So the update to Lucee let me remove the ‘proxy’, which has simplified my code and maintenance.

Now I have my sites working again, I’m looking forward to delving into Lucee some more.

Fixing Vagrant VMs after a VirtualBox upgrade

I use virtual machines to split up my web development projects. This lets me have a small Linux server running inside my Mac, so I can match the operating system to the one the code will run on when it is on a public server, without going all the way to running Linux all the time. As I want an easy life when it comes to setting these up, I use Vagrant to make and manage the virtual machines within VirtualBox, which runs them for me.

Recently I set up an extra Vagrant virtual machine (VM) to hold two projects for a client and keep them separate from some of my own projects, as they needed particular settings within Linux and a different database server from the ones my own projects use.

I used a Homestead VM, as I knew the settings were good. Homestead is made for Laravel PHP sites, but is perfectly valid for lots of other PHP projects. My client uses various versions of the Symfony framework, and they work fine within Homestead-built boxes.

The new VM worked fine, but the existing VM stopped working. I could start it, I could SSH into it, but I couldn’t get any websites from it to show in the browser. That’s a big problem, as serving websites is all it is there for.

After much faff and investigation, I discovered that the problem was a bug in the current version of VirtualBox. I had updated it while setting up the new VM, but unlike previous upgrades, this caused a problem. VirtualBox now needs VMs to use IP addresses within the 192.168.56.0/21 range, or the “bridge” between macOS and the virtual machine doesn’t work, so no web pages show.

The IP of the old Homestead box my own projects were in was 192.168.10.10, which was the default when I installed it. That used to work, but now does not. The new VM uses 192.168.56.4, which is within the allowed range, so it worked fine.

To fix the old VM:

I had to edit the Homestead.yaml file. At the top where it says:

ip: "192.168.10.10"

I changed it to:

ip: "192.168.56.10"

I then ran:

vagrant reload --provision

This gets Vagrant to reconfigure the existing Homestead box to use the changed IP address without deleting anything inside the VM.

And finally, I edited the entries in my hosts file (which is in /etc for me on macOS) for the websites in the VM, changing their IP from 192.168.10.10 to 192.168.56.10.
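For example, an entry for one of the sites went from something like this (the hostname is made up):

192.168.10.10   mysite.test

to this:

192.168.56.10   mysite.test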

Once all that was done, the websites in the VM started working again.

This was a very, very frustrating problem. I spent a long time investigating what was happening inside the VM, as I presumed that was where the problem was, eventually stumbling on the solution after searching for wider and wider versions of the problem. Thanks to Tom Westrick, whose post on GitHub got me to the solution.

Fixing a download link downloading the same file when it should be dynamic

One of my clients, the Database of Pollinator Interactions, allows researchers to easily search which insects and mammals pollinate which plants. To make it simple for researchers to gather the references to the papers they need to look up, their website allows downloading of search results as a CSV file.

This is powered by a little bit of AJAX: when a searcher clicks the download button, JavaScript calls a PHP script which reads the search filters out of a cookie, compiles the results into a comma-separated file, and sends it back as a download.

However, the site had a bug (no pun intended). If you ran a search and downloaded the results, then ran a different search and downloaded those results, you got the same file both times, even though the searches were completely different and the results shown on the page were correct for each search.

This turned out to be because the live server was set up to cache pages where possible, whereas my development server was not. The call to the script that made the file used a URL that did not change, as the script read what it needed from a cookie. So the browser thought it was hitting the same URL each time the download button was pressed and, to help speed things up, served the file for download from its cache rather than requesting a new one from the website.

The fix for this was quite straightforward. In the PHP script that received the call from Javascript, I added these headers at the top of the code:

header("Cache-Control: no-store, no-cache, must-revalidate, max-age=0");
header("Cache-Control: post-check=0, pre-check=0", false);
header("Pragma: no-cache");

Having them all is probably overkill and I should go back and find which one really does the job for them.

These make the script send headers, when the browser requests the URL, telling the browser not to cache what is sent back, so it will request the file fresh each time the URL is called.

Now, when the button is pressed, a new CSV is always requested rather than the browser providing the one it has already received. Problem solved.