Category Archives: Code

Fixing a Mailgun API unknown domain error

I’m using the Mailgun API for a couple of clients, making sure we don’t keep sending email to someone who has marked their previous message as spam.

It should be quite simple to use as the docs are very clear, but I kept getting an ‘unknown domain’ error in the returned message when I used the API with one of my client’s domains rather than the sandbox domain that Mailgun provides.

The fix was to use the EU address for the API: api.eu.mailgun.net rather than the standard api.mailgun.net. As my clients are in the UK, their domains are set up on Mailgun’s EU servers rather than the American ones.
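
If you are calling the API from PHP with curl, for example, the only change needed is the base URL. This is a minimal sketch rather than my client code: the domain and API key are placeholders, and I’m using the complaints list as the suppression check:

// Ask Mailgun for the complaints list for a domain, using the EU base URL.
// The domain and API key below are placeholders, not real values.
$base = "https://api.eu.mailgun.net/v3"; // rather than https://api.mailgun.net/v3
$domain = "mg.example.com";

$ch = curl_init($base . "/" . $domain . "/complaints");
curl_setopt($ch, CURLOPT_USERPWD, "api:YOUR_API_KEY");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$complaints = curl_exec($ch); // JSON listing addresses that have complained
curl_close($ch);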

I didn’t find a simple suggestion to do that, so I’m writing this so I find it next time this trips me up.

Fixing minor issues moving code from Railo to Lucee ColdFusion

I have a couple of personal projects at a host that offers the ColdFusion web programming language. I have a soft spot for ColdFusion as it was my first programming language, and I still find it very easy to put sites together in it: it comes with easy-to-use functionality built in that the other server-side language I use a lot, PHP, only gets through frameworks, where it has been added by others.

Adobe, the current owner of the official ColdFusion engine, charges rather a lot for licenses, and that has a knock-on effect on the price of hosting. So I use Viviotech, who offer the open source engines that run the ColdFusion language. The popular one of those used to be Railo; its successor is Lucee. Viviotech recently shifted my sites onto Lucee, which I’d been meaning to try, as Railo is old and now discontinued.

I hit a couple of problems:

Gotcha 1: No CGI.PATH_INFO

In the project, I redirect all requests through a single routing file. Within that file, I was using CGI.PATH_INFO to get the path of the page being requested. That stopped working, which turned out to be because the host is using Nginx, and Nginx doesn’t have PATH_INFO turned on by default. There are ways of making it work, but I didn’t want to be raising support requests for that, and it might not have been allowed on my cheap-as-chips hosting package.

Instead, I make the redirect in the .htaccess file send the path through for me.

My redirect went from this:
RewriteRule ^(.*)$ ./routing.cfm/$1 [L]
To this:
RewriteRule ^(.*)$ /routing.cfm?path=$1 [NC,L,QSA]
That gives me the path of the page being requested in URL.path (apart from the / that is normally at the start, which I needed in part of my URL detection, so I added it back on).

I rewrote my code to use the new URL.path (with the added /) instead of CGI.PATH_INFO, and that got it working again.

Gotcha 2: Saving files needs permissions set

In one of the sites, I get an image from an API that makes screenshots, and save it locally so I don’t have to use the API over and over. That means getting the image using CFHTTP, then saving it using CFFILE.

That worked, but I couldn’t open the files. The fix was to use mode="644" within CFFILE. This sets the file permissions so the image file can be read by the world, and so shows up on a web page.

<cffile
action = "write"
file = "<path><filename>"
output = "#cfhttp.fileContent#"
mode="644">

Improvement: Can read SSL protected RSS feeds

Railo couldn’t read RSS feeds protected by SSL certificates from Let’s Encrypt and some of the other cheap/free SSL providers. Lucee can.

That’s great, as I had made a very basic proxy (not really worthy of the word) to request the SSL-protected RSS feed through a PHP script I had on some other hosting, which would then pass it along without SSL. Not great for security (although these are all public posts it is reading). So the update to Lucee let me remove the ‘proxy’, which has simplified my code and maintenance.

Now I have my sites working again, I’m looking forward to delving into Lucee some more.

Fixing Vagrant VMs after a VirtualBox upgrade

I use virtual machines to split up my web development projects. This lets me run a small Linux server inside my Mac, so I can match the operating system to the one the code will run on when it’s live, without going all the way to running Linux all the time. As I want an easy life when it comes to setting these up, I use Vagrant to make and manage the virtual machines within VirtualBox, which runs them for me.

Recently I set up an extra Vagrant virtual machine (VM) to hold two projects for a client and keep them separate from some of my own projects, as they needed particular Linux settings and a different database server from the ones my own projects use.

I used a Homestead VM, as I knew the settings were good. Homestead is made for Laravel PHP sites, but is perfectly valid for lots of other PHP projects. My client uses various versions of the Symfony framework, and they work fine within Homestead-built boxes.

The new VM worked fine, but the existing VM stopped working. I could start it, I could SSH into it, but I couldn’t get any websites from it to show in the browser. That’s a big problem, as serving those sites is all it is there for.

After much faff and investigation, I discovered the problem is down to a change in the current version of VirtualBox. I had updated it while setting up the new VM, but unlike previous upgrades, this caused a problem. VirtualBox now needs VMs to use IP addresses within the 192.168.56.0/21 range, or the “bridge” between macOS and the virtual machine doesn’t work, so no web pages show.

The IP of the old Homestead box my own projects were in was 192.168.10.10, which was the default when I installed it. That used to work, but now does not. The new VM uses 192.168.56.4, which is within the allowed range, so it worked fine.

To fix the old VM:

I had to edit the Homestead.yaml file. At the top where it says:

ip: "192.168.10.10"

I changed it to:

ip: "192.168.56.10"

I then ran:

vagrant reload --provision

This gets Vagrant to reconfigure the existing Homestead box to use the changed IP address, without deleting anything inside the VM.

And finally I edited the entries in my hosts file (/etc/hosts for me on macOS) for the websites in the VM, changing their IP from 192.168.10.10 to 192.168.56.10.
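
Each entry keeps its hostname and just gets the new IP in front of it, along these lines (the hostname is a made-up example, not one of the real sites):

192.168.56.10   myproject.test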

Once all that was done, the websites in the VM started working again.

This was a very, very frustrating problem. I spent a long time investigating what was happening inside the VM, as I presumed that’s where the problem was, eventually stumbling on the solution after searching for wider and wider versions of the problem. Thanks to Tom Westrick, whose post on GitHub got me to the solution.

Fixing a download link downloading the same file when it should be dynamic

One of my clients, the Database of Pollinator Interactions, allows researchers to easily search which insects and mammals pollinate which plants. To make it simple for researchers to gather the references to papers they need to look up, their website allows downloading of search results as a CSV file.

This is powered by a little bit of AJAX: when a searcher clicks the download button, JavaScript calls a PHP script which reads the search filters out of a cookie, compiles the results into a comma-separated file, and lets you download it.

However, the site had a bug (no pun intended). If you ran a search and downloaded the results, then ran a different search and downloaded the results, you got the same file of results, even if the search was completely different and the results shown on the page were correct for the search.

This turned out to be because the live server was set up to cache pages where possible, whereas my development server was not. The call to the script that made the file was on a URL that did not change, as the script read what it required from a cookie. The browser therefore thought it was hitting the same URL each time the download button was pressed, so, to help speed things up, it served the file for download from its cache rather than requesting a new one from the website.

The fix for this was quite straightforward. In the PHP script that received the call from Javascript, I added these headers at the top of the code:

header("Cache-Control: no-store, no-cache, must-revalidate, max-age=0");
header("Cache-Control: post-check=0, pre-check=0", false);
header("Pragma: no-cache");

Having them all is probably overkill, and I should go back and find which one really does the job.

These make the script send headers, when the browser requests the URL, telling the browser not to cache what is sent back, so the browser requests it fresh each time the URL is called.
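
For context, a stripped-down sketch of a download script using this approach looks something like the code below. This isn’t the site’s actual code: the cookie name, the buildCsvForFilters() helper, and the filename are placeholders for illustration.

<?php
// Tell the browser not to cache this response, so every click fetches a fresh file.
header("Cache-Control: no-store, no-cache, must-revalidate, max-age=0");
header("Cache-Control: post-check=0, pre-check=0", false);
header("Pragma: no-cache");

// Mark the response as a CSV download rather than a page to display.
header("Content-Type: text/csv");
header('Content-Disposition: attachment; filename="results.csv"');

// The real script reads the search filters from a cookie, queries the database
// and builds the CSV; the cookie name and helper below are stand-ins.
$filters = json_decode($_COOKIE['search_filters'] ?? '{}', true);
echo buildCsvForFilters($filters); // hypothetical helper that returns the CSV text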

Now, when the button is pressed, a new CSV is always requested rather than the browser providing the one it has already received. Problem solved.

Getting MySQL data out of a Homestead virtual machine when Vagrant is broken

I’ve now learned how to get data out of a Vagrant run virtual machine when Vagrant itself is broken. Steps below.

I recently upgraded my Mac to Mojave and this broke my old Vagrant install, which I use for Homestead and a bunch of my Laravel-based development websites. It was a fairly old version of Vagrant, but it was still annoying. At first I thought VirtualBox, the software for creating virtual machines, just needed updating. An upgrade of VirtualBox was indeed required for it to run on Mojave, but unfortunately that wasn’t the solution to my problem.

In general, the virtual machine (VM) breaking wasn’t a problem – I still had all the site files, as part of the point of using Vagrant is keeping those in a shared folder on your main file system, not only inside the VM Vagrant sets up to hold the development environment. So I’d lost the development environment, including Apache, MySQL, and some other bits, but not the files of my sites, and the environment would be easy to set up again, as that’s what Vagrant makes simple.

But… I had a bunch of data in two databases within MySQL in the VM that I really wanted to keep. Upgrading Vagrant would mean wiping that data, so I didn’t want to do it. I thought that to recover the data and back it up, I was going to have to restore a Time Machine backup of the whole computer back to the previous version of the OS – High Sierra.

Fortunately I mentioned the problem to a few friends (AKA I moaned about my situation) and Tom suggested I mount the VM directly. That inspired me to start poking around more; here is how I got my data out of the broken Vagrant box…

Recovery steps

Open VirtualBox and manually start the machine Vagrant set up by clicking on it and clicking the start icon.

This boots the virtual machine and gives me a command prompt.

At the “homestead login:” prompt I needed a username and password; the default for a Vagrant-built VM is vagrant and vagrant (thanks to Stefan on Stack Overflow for putting that one up).

Then I needed to back up my databases on the command line. I’m used to using web-based tools for MySQL admin, so I had to look this up too:

mysqldump -u homestead -psecret --all-databases > homestead-20190121.sql

Thanks to Jacob for that one.

This gives me a big text file with all the exported data in it, which is great, but the file is still inside the virtual machine, not on my normal file system where I can get at it.

After much thought I remembered what Tom had advised me in the first place – mount the VM as a drive. I took that as the starting point of some Googling and set up a shared folder using this advice.

That involved restarting the VM. Then, once I was logged in, I needed to check what that shared folder was called from within the Ubuntu VM, and with more searching, based on some very old memories from university, I found this command:

df -h

That lists the mounted file systems, including the share to the folder called “2019 01 January” which I’d set up. Inside the VM it was under /media/sf_2019_01_January.

So within the VM I then did:

sudo cp homestead-20190121.sql /media/sf_2019_01_January/

I needed sudo because the first time I ran the copy without it, I didn’t have enough permissions to copy the file. Sudo let me temporarily have more permissions and do the copy.

Checking in the shared folder, I found all my data. I can now use this to restore the databases elsewhere.

To cleanly close down the VM, I used:

shutdown

This has a delay built in, so I now know I should have used:

poweroff

Which would have been a bit quicker.