WordPress Multisite Database Migration

Recently we did a WordPress Multisite Database Migration on Trellis. Files-wise everything was pretty straightforward, though we did have to do some extra media work using a volume. The database, on the other hand, was a bit more problematic.

WP CLI Database Import & URL Replacement

Using wp-cli and aliases for the import and replacement:

#!/bin/bash
# sync-db-prod-to-staging.sh
# chmod +x sync-db-prod-to-staging.sh
read -r -p "Do you solemnly swear that you have had fewer than 2 alcoholic beverages in the last hour and that you would really like to reset your staging database and pull the latest from production? [y/N] " response
if [[ $response =~ ^([yY][eE][sS]|[yY])$ ]]; then
  # Reset staging, dump production to a local file and import it into staging
  wp @staging db reset --yes &&
  wp @production db export - > sql-dump-production.sql &&
  wp @staging db import sql-dump-production.sql &&
  wp @staging search-replace --network --url=domain.com http://domain.com https://staging.domain.com --skip-columns=guid
fi
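
For reference, the @production and @staging aliases used above live in a wp-cli.yml. A minimal sketch for a Trellis-style setup; the web user, hostnames and paths are assumptions you would adjust:

@production:
  ssh: web@domain.com/srv/www/domain.com/current
@staging:
  ssh: web@staging.domain.com/srv/www/staging.domain.com/current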

Unfortunately, things did not work out: the database was not imported properly. It was a large database, so we decided to just import it directly instead.
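
One straightforward way to do that is to dump production, copy the file over and import it on the staging box itself. A rough sketch, assuming Trellis-style paths and the web user:

# Dump production to a local file (the alias runs remotely, stdout is streamed back)
wp @production db export - > sql-dump-production.sql

# Copy the dump to the staging server
scp sql-dump-production.sql web@staging.domain.com:/tmp/

# Reset and import on staging, then remove the dump
ssh web@staging.domain.com "cd /srv/www/staging.domain.com/current && \
  wp db reset --yes && \
  wp db import /tmp/sql-dump-production.sql && \
  rm /tmp/sql-dump-production.sql"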

WP CLI Search & Replace

For replacing the URLs we still tried to use WP CLI, as this did work locally on a new Vagrant setup. So we ran the following command:

wp search-replace --network --url=domain.com http://domain.com https://staging.domain.com --skip-columns=guid

However, we got the following error:

2018/04/22 08:04:11 [error] 13636#13636: *92588 FastCGI sent in stderr: "PHP message: WordPress database error Table 'staging_domain_com_staging.wp_blogs' doesn't exist for query SELECT  blog_id FROM wphsm_blogs  WHERE domain IN ( 'staging.domain.com' ) AND path IN ( '/de/', '/' )  ORDER BY CHAR_LENGTH(path) DESC LIMIT 1 made by require('wp-blog-header.php'), require_once('wp-load.php'), require_once('/srv/www/staging.domain.com/releases/20180418081603/web/wp-config.php'), require_once('wp-settings.php'), require('wp-includes/ms-settings.php'), ms_load_current_site_and_network, get_site_by_path, get_sites, WP_Site_Query->query, WP_Site_Query->get_sites, WP_Site_Query->get_site_ids" while reading response header from upstream, client: xx.xxx.xx.xx, server: staging.domain.com, request: "GET /de/europe/country/province/region/flaine/feed HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm-wordpress.sock:", host: "staging.domain.com"

NB: Related Trellis issue.

Oddly enough, the table was there. In another GitHub issue (https://github.com/Chassis/Chassis/issues/324) I read that we should use wp core multisite-convert. Or at least that helped the reporter there, but we were just cloning the multisite database from production to staging, so why should we need that?

Manual Multisite-specific table changes

We did some more research and bumped into a great article by Eric Holmes. So we made some manual database changes in the following tables:

  • wp_blogs – this holds a record of all your WPMU blogs. Alter the domain field(s).
  • wp_options – here do the normal alterations of the home and siteurl options, as if you were transferring a regular WordPress website. If you have multiple blogs, you will need to go into wp_2_options, wp_3_options, etc.
  • wp_site – This defines the multisite domain and path. Change the domain here.
  • wp_sitemeta – Search for the meta key siteurl – change the domain.
  • wp_usermeta – Search for the meta key source_domain – change the domain.

These items are listed as recommended by Eric and are also partly covered in the Codex. A rough sketch of scripting these changes follows below.
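
We did the edits by hand, but the same changes could also be scripted with WP-CLI's db query. A sketch using the example domains from this post; adjust the wp_ prefix if your install uses a custom one (the error above with wphsm_blogs suggests ours did):

# Point the network tables at the staging domain
wp @staging db query "UPDATE wp_blogs SET domain = 'staging.domain.com' WHERE domain = 'domain.com';"
wp @staging db query "UPDATE wp_site SET domain = 'staging.domain.com' WHERE domain = 'domain.com';"

# Main site options; repeat for each sub-site table (wp_2_options, wp_3_options, ...)
wp @staging db query "UPDATE wp_options SET option_value = 'https://staging.domain.com' WHERE option_name IN ('siteurl', 'home');"

# Network meta and user meta (siteurl in wp_sitemeta is usually stored with a trailing slash)
wp @staging db query "UPDATE wp_sitemeta SET meta_value = 'https://staging.domain.com/' WHERE meta_key = 'siteurl';"
wp @staging db query "UPDATE wp_usermeta SET meta_value = 'staging.domain.com' WHERE meta_key = 'source_domain';"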

Media URLs

Now the staging site seemed to be loading again with the latest database from the production site. Only the image paths still needed updating, so we did a wp search-replace using:

wp search-replace http://domain.com https://staging.domain.com --skip-columns=guid
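
If you want to preview the replacements before committing, the same command can be run with WP-CLI's --dry-run flag first:

wp search-replace http://domain.com https://staging.domain.com --skip-columns=guid --dry-run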

Running the replacement fixed all further mixed content issues and allowed the site to load properly. WordPress Multisite Database Migration complete!

Digital Ocean Volume for WordPress Media

Adding a Digital Ocean Volume for WordPress media can be very useful, especially if you have a lot of images and/or other media and need a cheap way to store them. Block Storage at Digital Ocean is fast and easy to work with, and it allows you to run a cheap Droplet and add cheap storage.

We already wrote about Setting up Trellis on Digital Ocean. Now you can expand upon this with a volume for your media!

Format the Volume

To format the volume I followed Digital Ocean's instructions. You get these once a volume has been set up: a popup shows the exact line needed for your volume in your region. I did a

sudo mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_volume-lon1-01
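
Since mkfs is destructive, it can be worth confirming the device name before running it; the volume should show up under /dev/disk/by-id:

# List the attached DigitalOcean volume(s) and block devices before formatting
ls -l /dev/disk/by-id/ | grep DO_Volume
lsblk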

Add a directory to store media

If you have set up this WordPress site from scratch and are just starting a website, you can skip this, as the media directory is yours to use.

If you do not already have a media directory where you store the media, you can create one; most likely you do. We, however, moved the old sites directory inside wp-content/uploads, which held our multisite media, to a backup location and then recreated it using:

sudo mkdir -p sites

We did this because we would later sync all that data over from the production server.
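
Roughly, that backup-and-recreate step could look like this, assuming Trellis-style shared paths and the web:www-data user and group:

cd /srv/www/staging.domain.com/shared/uploads
sudo mv sites sites-backup        # keep the old multisite media around, just in case
sudo mkdir -p sites
sudo chown web:www-data sites     # assumption: Trellis default web user and group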

Mounting the Volume

Next we had to mount the volume on that directory so that all the data would be served from the volume instead of the Droplet itself. We did that using:

sudo mount -o discard,defaults /dev/disk/by-id/scsi-0DO_Volume_volume-lon1-01 /srv/www/staging.domain.com/shared/uploads/sites;

And to make the mount permanent, add it to /etc/fstab with:

echo /dev/disk/by-id/scsi-0DO_Volume_volume-lon1-01 /srv/www/staging.domain.com/shared/uploads/sites ext4 defaults,nofail,discard 0 0 | sudo tee -a /etc/fstab
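
To make sure the new fstab entry itself is valid (a malformed line can bite you on the next reboot), you can run:

# Re-reads /etc/fstab and mounts anything not yet mounted; errors here point at a bad entry
sudo mount -a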

Then we checked if the mounting was successful:

cat /proc/mounts | grep staging
/dev/sda /srv/www/staging.domain.com/shared/uploads/sites ext4 rw,relatime,discard,data=ordered 0 0

As you can see, it was. For most of you this means you are done: you now have a Digital Ocean Volume for WordPress Media. We, however, still needed to get the media copied over.

Rsync Data

If you have set this up from scratch and are just starting a website, you can skip this.

Normally, if you already have a media volume and need one for staging or another clone of the server, you can simply clone the volume, set up a new one from it and attach it. We only ran into an issue because the production server and volume were in a different region than staging, so we had to set up a volume from scratch at both locations and sync the data between them:

ssh -o ForwardAgent=yes web@xxx.xxx.xxx.xx "rsync -aze 'ssh -o StrictHostKeyChecking=no' --progress /srv/www/domain.com/current/web/app/uploads/sites/ web@xx.xxx.xx.xx:/srv/www/staging.domain.com/shared/uploads/sites/"
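
Once the sync finishes, it is worth comparing the totals on both ends and making sure ownership on staging is correct. A quick sketch with the same assumed paths and, again, the web:www-data assumption:

# Compare total size of the media on each server
du -sh /srv/www/domain.com/current/web/app/uploads/sites          # on production
du -sh /srv/www/staging.domain.com/shared/uploads/sites           # on staging

# On staging: make sure the web user owns what rsync just wrote
sudo chown -R web:www-data /srv/www/staging.domain.com/shared/uploads/sites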