Category Archives: Code

Fixing a download link downloading the same file when it should be dynamic

One of my clients, the Database of Pollinator Interactions, allows researchers to easily search which insects and mammals pollinate which plants. To make it simple for researchers to gather the references to the papers they need to look up, their website allows downloading of search results as a CSV file.

This is powered by a little bit of AJAX: when a searcher clicks the download button, Javascript calls a PHP script which reads the search filters out of a cookie, compiles the results into a comma-separated file, and lets you download it.
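
For illustration, here’s a rough sketch of the shape of such a script – the cookie name, database details and query are placeholders I’ve made up, not the site’s actual code:

<?php
// export.php – called by the download button's Javascript (illustrative sketch only)
$filters = json_decode($_COOKIE['search_filters'] ?? '{}', true);

// Placeholder connection and query – the real site's schema will differ
$pdo = new PDO('mysql:host=localhost;dbname=pollinators', 'user', 'pass');
$stmt = $pdo->prepare('SELECT pollinator, plant, reference FROM interactions WHERE plant LIKE ?');
$stmt->execute(['%' . ($filters['plant'] ?? '') . '%']);

// Send the results back as a CSV download
header('Content-Type: text/csv');
header('Content-Disposition: attachment; filename="results.csv"');
$out = fopen('php://output', 'w');
fputcsv($out, ['Pollinator', 'Plant', 'Reference']);
while ($row = $stmt->fetch(PDO::FETCH_NUM)) {
    fputcsv($out, $row);
}
fclose($out);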

However, the site had a bug (no pun intended). If you ran a search and downloaded the results, then ran a different search and downloaded those results, you got the same file as the first time, even though the results shown on the page were correct for the new search.

This turned out to be because the live server was set up to cache pages where possible, whereas my development server was not. The script that builds the file sits on a URL that never changes, as it reads everything it needs from a cookie. Because the browser was requesting the same URL each time the download button was pressed, it served the file from its cache to help speed things up, rather than requesting a new one from the website.

The fix for this was quite straightforward. In the PHP script that received the call from Javascript, I added these headers at the top of the code:

header("Cache-Control: no-store, no-cache, must-revalidate, max-age=0");
header("Cache-Control: post-check=0, pre-check=0", false);
header("Pragma: no-cache");

Having all of them is probably overkill and I should go back and find which one really does the job here.

These make the script respond with headers telling the browser not to cache what is sent back, so the browser requests it fresh each time the URL is called.

Now, when the button is pressed, a new CSV is always requested rather than the browser providing the one it has already received. Problem solved.

Getting MySQL data out of a Homestead virtual machine when Vagrant is broken

I’ve now learned how to get data out of a Vagrant-run virtual machine when Vagrant itself is broken. Steps below.

I recently upgraded my Mac to Mojave and this broke my old Vagrant install, which I use for Homestead and a bunch of my Laravel-based development websites. It was a fairly old version of Vagrant, but it was still annoying. At first I thought VirtualBox, the software for creating virtual machines, just needed updating. Upgrading VirtualBox was indeed required for it to run on Mojave, but it wasn’t the solution to my problem.

In general, the virtual machine (VM) breaking wasn’t a problem – I still had all the site files, as part of the point of using Vagrant is keeping those in a shared folder on your main file system, not only inside the VM Vagrant sets up to hold the development environment. So I’d lost the development environment – Apache, MySQL, and some other bits – but not my site’s files, and the environment would be easy to set up again, as that’s what Vagrant makes simple.

But… I had a bunch of data in two MySQL databases within the VM that I really wanted to keep. Upgrading Vagrant would mean wiping that data, so I didn’t want to do that. I thought that to recover the data and back it up I was going to have to restore a Time Machine backup of the whole computer back to the previous version of the OS – High Sierra.

Fortunately I mentioned the problem to a few friends (AKA I moaned about my situation) and Tom suggested I mount the VM directly. That inspired me to start poking around more; here is how I got my data out of the broken Vagrant box…

Recovery steps

Open VirtualBox and manually start the machine Vagrant set up by clicking on it and clicking the start icon.

This boots the virtual machine and gives me a command prompt.

At the “homestead login:” prompt I needed a username and password; the default for a Vagrant-built VM is vagrant and vagrant (thanks to Stefan on Stack Overflow for putting up that one).

Then I needed to back up my databases on the command line. I’m used to using web-based tools for MySQL admin, so had to look this up too:

mysqldump -u homestead -psecret --all-databases > homestead-20190121.sql

Thanks to Jacob for that one.

This gives me a big text file with all the exported data in it, which is great, but the file is still inside the virtual machine, not on my normal file system where I can get at it.

After much thought I remembered what Tom had advised me in the first place – mount the VM as a drive. I took that as the starting point of some Googling and set up a shared folder using this advice.

That involved re-starting the VM. Once I was logged in, I needed to check what that shared folder was called from within the Ubuntu VM and, with more searching based on some very old memories from university, I found this command:

df -h

Which lists the mounted file systems, including the share for a folder called “2019 01 January” which I’d set up. It was mounted under /media/sf_2019_01_January

So within the VM I then did:

sudo cp homestead-20190121.sql /media/sf_2019_01_January/

The sudo is there because the first time I ran the copy without it, I didn’t have enough permissions to copy the file. Sudo let me temporarily have more permissions and do the copy.

Checking in the shared folder, I found all my data. I can now use this to restore the databases elsewhere.

To cleanly close down the VM, I used:

shutdown

This has a delay built in, so I now know I should have used:

poweroff

Which would have been a bit quicker.

ColdFusion Admin “Cron service not available” error

Recently, on a client site where we’re using an old version of ColdFusion, I tried logging in to the admin area and, rather than the login prompt, it showed me the error message “Cron service not available”.

After running around various pages that were all a duplicate of this Adobe forums thread saying the problem was with the <ColdFusion directory>\lib\neo-cron.xml file being corrupt, I tracked down the file on the server and checked it. The file was empty – well, it was full of spaces or tabs and not much else. I tried renaming it and re-starting the ColdFusion service, but was still getting the error. Then I tried making an empty file called neo-cron.xml and restarting the service; no difference. I didn’t have a ready backup of the file, so I thought I was stuck, but I tried a copy of the one from my more recent test server version of ColdFusion. Another service restart and it worked.

In case you’re stuck like me, here are the contents of the default neo-cron.xml from CF v11; it worked for my client’s server, which is on an earlier CF version:

<wddxPacket version='1.0'><header/><data><array length='4'><struct type='coldfusion.server.ConfigMap'></struct><boolean value='false'/><string></string><string>log,txt</string></array></data></wddxPacket>

You will need to re-make your scheduled tasks using the CF Admin, or CFSchedule in code.

Note: Depending on the age of your ColdFusion install, the neo-cron.xml file could be in <your ColdFusion directory>\lib\ or <your ColdFusion directory>\cfusion\lib\

Good luck with your problem, I hope this helps.

Google Jobs Indexing API Setup Problem

I’ve recently worked with Laura to set up three recruitment sites to automatically notify Google Jobs of new and changed vacancies showing on the sites.

The documentation is relatively good, but when setting up the second site I made a mistake that I don’t think is easy to spot, so I’m writing it up here in case it helps someone else, or indeed myself in the future.

The problem was: we had everything set up – a new set of keys and JSON for the site from the Indexing API dashboard, the special email address you have to create added to Google Search Console, the reusable code copied over from the first project, where it was working fine – and… it wouldn’t work. Sending URLs in through the Indexing API got the error:

response-4829
Google_Service_Exception
Permission denied. Failed to verify the URL ownership.
reason: forbidden

Now, neither of us is daft; we could see there was a permissions problem, but we couldn’t track down where it was. After much re-tracing of steps and faff, I compared everything about the existing, working project and the second one we were setting up. In Google Search Console (AKA Webmaster Tools), you have to add a special email address from the Indexing API dashboard. I’d done this incorrectly.

I had set up the Indexing API user for the site via the “New user” button in Google Search Console, then given it Full permissions. This was wrong. I should have used the “Manage property owners” link and the “Add new owner” button.

[Image: the “Manage property owners” link in Google Search Console]

I deleted the user I’d added with the “New user” button, added one with the “Manage property owners” link and ended up with a list like this:

[Image: list of owners for a site in Google Search Console]

The Indexing API then started to work, no problem at all.
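
For reference, the sending side of all this – the call that had been getting the permission error – looks roughly like the sketch below when using Google’s PHP client library. The key file name and vacancy URL are placeholders, not the project’s actual code, and class names may differ between versions of the google/apiclient package.

<?php
// Rough sketch of publishing a URL to the Indexing API – illustrative only
require 'vendor/autoload.php';

$client = new Google_Client();
$client->setAuthConfig('service-account.json'); // JSON key from the Indexing API dashboard (placeholder name)
$client->addScope('https://www.googleapis.com/auth/indexing');

$indexing = new Google_Service_Indexing($client);

$notification = new Google_Service_Indexing_UrlNotification();
$notification->setUrl('https://www.example.com/vacancies/1234'); // placeholder vacancy URL
$notification->setType('URL_UPDATED'); // use URL_DELETED for vacancies that have been removed

$response = $indexing->urlNotifications->publish($notification);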

The first time I’d set up a project with the Indexing API I’d followed the instructions very closely and had got this step right; the second time I’d got ahead of myself and thought I remembered all the steps, and this bit within Search Console caught me out.

I can understand why Search Console has two different categories of user; allowing multiple owners gives a layer of management in an account and lets people shift in and out of ownership. But it’d be nice if the interface was clearer for a case like this, or if there was a separate category for API users – maybe this will come in the future as Search Console is developed further.

So, if you’re getting a permission error when using the Indexing API, check how you have the API user set up within Search Console – it could be where your problem lies.

Sending a push notification to your browser or mobile with ColdFusion and Push Engage

Push Engage is a service which lets you easily send push notifications to a browser or mobile phone, using a little code on your website. It’s very easy to set up and they currently have a very generous free account, allowing you to send a notification to up to 2,500 browsers/devices.

I’m using it as part of some alerts in the background of a client’s website. They’re using ColdFusion, so I needed to work out the code to send the alert from their site. The Push Engage API documentation has an example in PHP, but it’s very simple to convert. Here’s a CFHTTP call that will send a notification:

<cfset api_key = "(your API key here)">

<cfhttp method="post"
        url="https://www.pushengage.com/apiv1/notifications">

    <cfhttpparam type="header"
                 name="api_key"
                 value="#api_key#">

    <cfhttpparam type="formfield"
                 name="notification_title"
                 value="The text for the alert title">

    <cfhttpparam type="formfield"
                 name="notification_message"
                 value="The smaller text of the message of the notification">

    <cfhttpparam type="formfield"
                 name="notification_url"
                 value="http://www.example.com/">
</cfhttp>
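
For comparison, the equivalent request in plain PHP would look something like the sketch below – this is based on the fields used above, not Push Engage’s own example code:

<?php
// Sketch of the same notification request using PHP and cURL
$api_key = '(your API key here)';

$ch = curl_init('https://www.pushengage.com/apiv1/notifications');
curl_setopt($ch, CURLOPT_HTTPHEADER, ['api_key: ' . $api_key]);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query([
    'notification_title'   => 'The text for the alert title',
    'notification_message' => 'The smaller text of the message of the notification',
    'notification_url'     => 'http://www.example.com/',
]));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$response = curl_exec($ch);
curl_close($ch);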

I’d already followed their steps for adding the Javascript to a page on the website, visited it in a browser on my computer and on my phone, and accepted notifications from the site. Now, when I trigger the page with this code on it, I get a notification a few moments later. Lovely!

Thanks to Dave Child for introducing me to Push Engage.