WooCommerce Checkout Sendgrid Issue

We ran into a WooCommerce checkout Sendgrid issue on one of our Trellis servers. Payments did go through, but confirmation emails reached the client very late or not at all. No confirmation of a successful sale was given. This is obviously very inconvenient, so we investigated the issue and its solution. We soon found out WooCommerce and Sendgrid were not playing nice. Below is the whole discovery process.

Upstream Timed Out

The error we had was:

2018/01/25 08:27:10 [error] 16241#16241: *42582 upstream timed out (110: Connection timed out) while reading response header from upstream, client:, server: domain.com, request: "POST /?wc-ajax=checkout HTTP/2.0", upstream: "fastcgi://unix:/var/run/php-fpm-wordpress.sock", host: "domain.com", referrer: "https://domain.com/checkout/"

The 110 here is not a port but the Linux errno for a connection timeout (ETIMEDOUT). So there was not much to go on early other than that there seemed to be a time-out issue and that Nginx could perhaps use some more Ks for its buffers.

Nginx Buffering

So we decided to up the Nginx buffer using:

nginx_fastcgi_buffers: 16 16k
nginx_fastcgi_buffer_size: 32k

inside group_vars/production/main.yml. I added this and re-provisioned our Trellis server.


The other thing we wondered about was whether Sendgrid was having issues sending out details after a successful Stripe payment had been made. We were, after all, using it for outgoing email with this configuration:

# Documentation: https://roots.io/trellis/docs/mail/
mail_smtp_server: smtp.sendgrid.net:587
mail_admin: admin@publiqly.com
mail_hostname: publiqly.com
mail_user: publiqly
mail_password: "{{ vault_mail_password }}" # Define this variable in group_vars/all/vault.yml

When we checked Sendgrid we hardly saw any traffic. Something to worry about.

WP Mail Logging & sSMTP Logging

So we decided to install WP Mail Logging to make checking all outgoing email easier. We also activated sSMTP mail logging by turning on debugging in ssmtp.conf and then checked syslog for any errors.

Mail Logs

And then I thought about the standard mail logs. When I checked /var/log/mail.err I found:

Jan 25 08:29:28 domain sSMTP[16416]: Cannot open smtp.sendgrid.net:587
Jan 25 08:33:42 domain sSMTP[16424]: Unable to connect to "smtp.sendgrid.net" port 587.
Jan 25 08:33:42 domain sSMTP[16424]: Cannot open smtp.sendgrid.net:587
Jan 25 09:07:42 domain sSMTP[16603]: Unable to connect to "smtp.sendgrid.net" port 587.
Jan 25 09:07:42 domain sSMTP[16603]: Cannot open smtp.sendgrid.net:587

Well, there you go. It seems the connection could not be made properly. I contacted Sendgrid on this.

Port 587

I found out port 587, like most ports, is not open on Trellis. I discovered this by running:

# netstat -ntlp | grep LISTEN
tcp        0      0   *              LISTEN      1500/nginx -g daemo
tcp        0      0*              LISTEN      1343/memcached  
tcp        0      0    *              LISTEN      1500/nginx -g daemo
tcp        0      0    *              LISTEN      23307/sshd      
tcp6      0      0 :::443                    :::*                     LISTEN      1500/nginx -g daemo
tcp6      0      0 :::3306                  :::*                     LISTEN      1618/mysqld     
tcp6      0      0 :::80                     :::*                     LISTEN      1500/nginx -g daemo

So based on a Roots forum search I added:

- type: dport_accept
  dport: [587]
  protocol: tcp
- type: dport_accept
  dport: [587]
  protocol: udp

to group_vars/all/security.yml. Then I re-provisioned using the relevant tags:

ansible-playbook server.yml --tags "ferm,ssmtp,mail" -e env=production

Ports not the Issue

Then, based on the Roots Discourse thread I had running, I realized we were talking about an outgoing port; incoming traffic was not the issue. The problem was more with Sendgrid or the way Sendgrid dealt with the incoming requests. The SSH and http/https ports listen for incoming requests. I was recommended to do a telnet test to debug, and to use Sendgrid API keys to make the connection work better. So I removed the new port rules and implemented the recommendations.

Telnet check

To do a telnet test securely you have to get a key and convert it to the appropriate encoding. So I went to Sendgrid, generated an API key with full access minus billing, and converted it to base64 with openssl from the command line using:

echo -n '<<YOUR_API_KEY>>' | openssl base64
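If you want to go a step further and attempt an actual AUTH LOGIN over telnet, both the username and the password need to be base64 encoded. For Sendgrid API keys the SMTP username is the literal string apikey, per Sendgrid's SMTP docs. A sketch (the key below is a placeholder):

```shell
# Encode SMTP credentials for an AUTH LOGIN test.
# Sendgrid's SMTP username for API keys is the literal string "apikey";
# the password is the API key itself (placeholder below).
# echo -n matters: a trailing newline would corrupt the encoded value.
smtp_user=$(echo -n 'apikey' | openssl base64)
smtp_pass=$(echo -n 'SG.your-api-key-here' | openssl base64)
echo "$smtp_user"   # -> YXBpa2V5
echo "$smtp_pass"
```

In a plain telnet session you typically only get as far as EHLO; for the TLS part, `openssl s_client -starttls smtp -connect smtp.sendgrid.net:587` is the usual substitute for telnet.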

I stored the API key and the converted key in KeepassX for later use. When I then ran

telnet smtp.sendgrid.net 587

from the Trellis server in question I got:

telnet smtp.sendgrid.net 587
telnet: Unable to connect to remote host: Connection timed out

Well, that was basically the error we had in the logs.
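For a scripted check with a bounded wait, bash's /dev/tcp pseudo-device plus GNU timeout can stand in for telnet. A sketch; the result obviously depends on the network you run it from:

```shell
# Probe outbound TCP connectivity to Sendgrid's submission port.
# bash's /dev/tcp opens a raw TCP connection; timeout caps the wait at 5s.
if timeout 5 bash -c 'exec 3<>/dev/tcp/smtp.sendgrid.net/587' 2>/dev/null; then
  result="port 587 reachable"
else
  result="port 587 blocked or filtered"
fi
echo "$result"
```

The exit status of the timeout command tells you whether the TCP handshake succeeded within five seconds, which is exactly the signal the hanging telnet session gave, only faster.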

DO IPv6 Mail Issues?

Then I read up on Digital Ocean's port setup, and it seemed to be an IPv6 Digital Ocean port issue. So based on a DO question I edited gai.conf:

nano /etc/gai.conf

and made the appropriate lines look like this:

precedence ::ffff:0:0/96 100

where 10 becomes 100 and the whole line is uncommented. This makes the system prefer IPv4. Well, it did not help.
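The edit can also be scripted. Below is a sketch that runs the sed expression against a throwaway copy first, so you can verify it before touching the real file; the stock Ubuntu /etc/gai.conf ships that precedence line commented out with value 10:

```shell
# Demonstrate the gai.conf edit on a temp copy before touching the real file.
# The regex uncomments the precedence line and bumps 10 to 100.
tmpconf=$(mktemp)
printf '#precedence ::ffff:0:0/96  10\n' > "$tmpconf"
sed -i 's|^#precedence ::ffff:0:0/96.*|precedence ::ffff:0:0/96 100|' "$tmpconf"
cat "$tmpconf"
# For real: sudo sed -i 's|^#precedence ::ffff:0:0/96.*|precedence ::ffff:0:0/96 100|' /etc/gai.conf
```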

Sendgrid API Plugin

So I installed the Sendgrid API plugin. Adding details in Safari got the Sendgrid settings page reloading like crazy. In Chrome things worked fine, including a test email sent from the plugin's settings page.

Final Test with Sendgrid API

So the final test that needed to be done was a new (test) purchase, to see if Sendgrid was working and no longer blocking the whole checkout process. I did it, the payment worked, and the on-page and email confirmations came through right away. And that is amazing news. Sendgrid API all the way!

NB I did still have one JS error in the console:

TypeError: undefined is not an object (evaluating '$(".woocommerce-billing-fields__field-wrapper").position().left')

but that may be caused by other plugins used on the page and it did not seem to interfere. So that can be debugged in time.

Updating Trellis – WordPress LEMP

Updating Trellis can be a challenge initially and there is no one way to do it. Lots of people have written about it on Roots Discourse and on GitHub, and most approaches require some major git foo. I did write about updating the Trellis server before, but not about how to maintain Trellis itself. Here is my own, manual, take on it.

Trellis Repository Update

I first rename the current Trellis folder to trellis-old and git clone the latest Roots Trellis version:

  • mv trellis trellis-old
  • git clone --depth=1 git@github.com:roots/trellis.git && rm -rf trellis/.git

That way I keep the old copy and have the latest version, so I can copy over the changes I need. I also add trellis-old to the .gitignore list along with some other directories and files.
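To spot which customizations need copying over, a recursive diff between the two trees is handy. A sketch with throwaway fixture directories; in practice you would diff trellis/group_vars against trellis-old/group_vars:

```shell
# Build two tiny fixture trees and diff them, mimicking a comparison of
# a fresh Trellis clone against your customized copy.
mkdir -p demo/trellis/group_vars/all demo/trellis-old/group_vars/all
echo 'php_memory_limit: 96M'  > demo/trellis/group_vars/all/main.yml
echo 'php_memory_limit: 256M' > demo/trellis-old/group_vars/all/main.yml
# diff exits non-zero when the trees differ, hence the || true
changes=$(diff -ru demo/trellis/group_vars demo/trellis-old/group_vars || true)
echo "$changes"
rm -rf demo
```

Every line prefixed with + in the output is a customization from the old copy that still needs to land in the fresh clone.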


Trellis files to be updated

Then I make all the changes to files in the following directories:

  • group_vars/all
  • group_vars/production
  • group_vars/staging
  • hosts

I skipped group_vars/development as there hardly ever is a need for me there. I don't really tweak development, as Trellis handles it pretty well out of the box with Vagrant.

Common Variables

The group all with common variables alone has:

  • mail.yml,
  • main.yml,
  • vault.yml,
  • users.yml

to update. Mail.yml has the mail details so your Trellis server can send out email. Something like:

# Documentation: https://roots.io/trellis/docs/mail/
mail_smtp_server: smtp.sendgrid.net:587
mail_admin: admin@domain.com
mail_hostname: domain.com
mail_user: user
mail_password: "{{ vault_mail_password }}" # Define this variable in group_vars/all/vault.yml

when you are using Sendgrid.

Main.yml has the main vars, including some of your own custom ones. I made sure all customizations to PHP settings were added to group_vars/all/main.yml:

php_max_execution_time: 300
php_max_input_vars: 1000
php_memory_limit: 256M
php_post_max_size: 128M

In vault.yml the vault mail password is stored. That is needed for sending out email, which is mainly set up in mail.yml. In users.yml you add the server users and their keys, for which we normally use our own GitHub ones.
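A users.yml along those lines could look like this sketch, modeled on the stock Trellis example (user names, groups and the key URLs are placeholders to adapt):

```yaml
admin_user: admin

users:
  - name: "{{ web_user }}"
    groups:
      - "{{ web_group }}"
    keys:
      - https://github.com/your-github-user.keys
  - name: "{{ admin_user }}"
    groups:
      - sudo
    keys:
      - https://github.com/your-github-user.keys
```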


Though users.yml is not hard to set up, and admin for admin_user is correct most of the time, you do need to make sure all is well and no changes were made upstream.

Staging and Production

Then staging and production have two files each that need updating:

  • vault.yml
  • wordpress_sites.yml

These files do not change much in Trellis, but they contain major details on your WordPress setup so do need to be updated with your customizations properly.

NB I did add php_memory_limit: 512M to production and staging, but I guess that could be moved to group_vars/all as well. Still two files each there, so eight group_vars files in total, plus the hosts files.


Hosts files for staging and production need their IPs updated so they have the ones you added before. This is pretty easy to do, and as these files hardly ever change you can overwrite them. Example staging hosts file:

# Add each host to the [staging] group and to a "type" group such as [web] or [db].
# List each machine only once per [group], even if it will host multiple sites.
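A filled-in hosts/staging file then looks something like this sketch (the IP is a placeholder from the documentation range; use your droplet's actual IP or hostname):

```ini
[staging]
192.0.2.10

[web]
192.0.2.10
```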

Trellis Server FTP Credentials requested by WordPress

Just today WordPress asked me to enter FTP credentials to proceed after I adjusted an image using the Jupiter interface for header images. This had never happened before on any Trellis setup of mine.

FTP Credentials Needed

The full error message was on a very basic page with fields to enter the FTP user and password:

Connection Information

To perform the requested action, WordPress needs to access your web server. Please enter your FTP credentials to proceed. If you do not remember your credentials, you should contact your web host.

 example: www.wordpress.org
 FTP Username
 FTP Password
 This password will not be stored on the server.

And the error came when I tried to update or save the post. Normally that data is just stored in the database, so I failed to understand why FTP credentials were requested in the first place. But apparently the file being run also suffers from this, so there had to be a permission issue somewhere.

WordPress Admin Rights

We do allow admins to install plugins on the server, and as Ben mentioned on Roots Discourse, once again, that is not recommended. But hey, we needed this to work with multiple team members not familiar with tools such as Git, Composer, WP-CLI and the like. So although I would have preferred to manage the plugins with Composer, it was simply not possible with this project.

Error Logs

So I decided to check the logs for clues on the need for FTP. The error log showed:

PHP message: PHP Warning: Cannot modify header information - headers already sent by (output started at /srv/www/sub.domain.com/releases/20171017052436/web/wp/wp-admin/includes/file.php:1678) in /srv/www/sub.domain.com/releases/20171017052436/web/wp/wp-admin/post.php on line 198
PHP message: PHP Warning: Cannot modify header information - headers already sent by (output started at /srv/www/sub.domain.com/releases/20171017052436/web/wp/wp-admin/includes/file.php:1678) in /srv/www/sub.domain.com/releases/20171017052436/web/wp/wp-includes/pluggable.php on line 1216" while reading upstream, client:, server:sub.domain.com, request: "POST /wp/wp-admin/post.php HTTP/2.0", upstream: "fastcgi://unix:/var/run/php-fpm-wordpress.sock:", host: "sub.domain.com", referrer: "https://sub.domain.com/wp/wp-admin/post.php?post=483&action=edit"

Based on this I checked the WordPress core files for anomalies. I found:

// Session cookie flag that the post was saved
if ( isset( $_COOKIE['wp-saving-post'] ) && $_COOKIE['wp-saving-post'] === $post_id . '-check' ) {
    setcookie( 'wp-saving-post', $post_id . '-saved', time() + DAY_IN_SECONDS, ADMIN_COOKIE_PATH, COOKIE_DOMAIN, is_ssl() );
}

in post.php. And pluggable.php line 1216 showed:

if ( !$is_IIS && PHP_SAPI != 'cgi-fcgi' )
    status_header($status); // This causes problems on IIS and some FastCGI setups
header("Location: $location", true, $status);
return true;

Session Issue

And so it seemed to be related to a session issue on saving the post, which was correct. Not a whitespace issue, as is sometimes the case. So I started to wonder. I did do a server update while working, so perhaps the session got messy. I logged off and on, tried saving the same post again, and things were fine. I also checked rights and permissions for /srv/www/sub.domain.com/current/web/app/ and did not see anything odd. So perhaps it was just a session issue due to the server update I did with:

unattended-upgrades -d

I should probably not have done those two things at the same time!

Reinstallation Latest WordPress Version

The issue continued, however. Once I made a second header change and saved, the issue returned. So I decided to reinstall WordPress, as the issue seemed to point to WordPress core files and I still could not find problems with the files it got stuck on. That did not work either.

Image Replacement

The image used in the header was loaded from the main domain on the same server, so I thought that might be the issue and uploaded one via the subdomain's media manager itself. That did not seem to be it either, as the issue remained.


So I decided to upload the theme once again, as it was updated recently and that might be the issue. After I overwrote all theme files things seemed to be working. But only when I added a line to application.php for the direct FS method*.

// Custom settings
define('DISABLE_WP_CRON', env('DISABLE_WP_CRON') ?: false);
define('DISALLOW_FILE_EDIT', true);
define('FS_METHOD', 'direct');

*(Primary Preference) “direct” forces it to use Direct File I/O requests from within PHP. It is the option chosen by default.

The extra define('FS_METHOD', 'direct'); in application.php seemed to do the trick.

This makes us think there has been a Jupiter change that requires file manipulation which somehow does not work properly with methods other than direct file I/O. But we have not figured it out yet. It now seems unlikely that this is a file or directory permission issue; otherwise we would have had other issues and error messages.

ERROR! Trellis no longer supports Ansible Fix

Getting an "ERROR! Trellis no longer supports Ansible" error? You probably upgraded to the latest Trellis from an older version and now need a newer Ansible. So how do we upgrade this baby? Well, in my case the box I just had the issue on runs Ansible via Homebrew. I also have one older Trellis version for another site, so I want to be able to switch back to the older Ansible version when need be.

Homebrew Ansible Upgrade

To upgrade my Homebrew installed Ansible I ran:

brew update

and then when I checked the version:

ansible --version

I found out it did upgrade, but not to the version I needed. Silly me. I would need to remove Ansible and then do a brew install ansible@2.4. But then the next time I needed the old version I would have to reverse it all using two commands. Doing it with pip sounded easier, as it allows version changes with one command and is recommended by Roots core members.

Ansible Pip Install and Upgrade

So I removed ansible from Homebrew using:

brew remove ansible
brew uninstall --force ansible

and then I installed it with pip like I had on my MBP:

sudo easy_install pip
sudo pip install ansible --quiet

And that worked well. No upgrade needed now as the latest version was grabbed:

ansible --version
config file = /Users/jasper/webdesign/ianua.imagewize.com/trellis/ansible.cfg
configured module search path = [u'/Users/jasper/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.13 (default, Mar 5 2017, 15:42:57) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)]

Pip Change Ansible Version

And with pip I can also go back to the older version needed for the older Trellis setup using

sudo pip install ansible==

Trellis Build

So then I tried another vagrant up to see if I could avoid the error message:

ERROR! Trellis no longer supports Ansible
Please upgrade to Ansible or higher.
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

vagrant up did work, but the site failed to load so I did another

vagrant provision

The provisioning took quite some time. Clearly the first vagrant up had not really completed the installation of the new packages. But that did the trick and I was able to load the site locally and import the database backup. Yay!

Digital Ocean Monitoring Beta Setup

I just installed the Digital Ocean monitoring beta on one of my Digital Ocean droplets where I have Trellis running for a WordPress client of mine. It is a new way to monitor bandwidth, memory usage and I/O, and also a way to get alerts when your droplet gets hit hard on one of those metrics.

Installation Digital Ocean Monitoring Beta

Setting it up on an existing droplet was quite straightforward. I did have to reboot once, probably because I did an upgrade by running:

sudo unattended-upgrades -d

just before I got to setting up the monitoring with this easy curl command:

curl -sSL https://agent.digitalocean.com/install.sh | sh

NB This needs to be run as root or with sudo.

That ran a shell script downloading and installing the agent. It started working pretty quickly afterwards. Just give it 10-15 minutes or so.

Monitoring Alert

I also set up a monitoring alert warning me when CPU reached 70% as well as one when memory exceeded 40%.

Here is an example of an alert policy setup screen:

Alert Policy


And here is the list of alerts I set up for one of these droplets:

Digital Ocean Alerts


As you can see, setting up an alert is really straightforward. Warnings can be emailed or sent to your Slack account. Really awesome. Though an app that sends push notifications would be even better. There is an API though. Will have to look into that some other time.

Graphs Beta

Graphs beta will start working as soon as you have set up monitoring on your droplet. As you can see below it is still pretty empty, but that is because I just started using it. I like the layout. Easy to see things from a bird's eye view, so to speak.

Digital Ocean Graphs Beta

Graphs will be good for checking the history when you do get an alert, to see whether there was just a spike or whether usage has gone up structurally and your droplet may need an upgrade.


Mousing over the graph of one of the monitors will show you details:

Memory Monitoring Details


Access MariaDB on Trellis LEMP using Sequel Pro

Accessing MariaDB on a Trellis LEMP setup using Sequel Pro from your local box is easy once you know how, like most things in life really. The main thing is that you need to know the proper way to access the database once you have set up SSH access properly with Sequel Pro (see this article for more on SSH access via Sequel Pro).

No root Access to Database

The issue is that MariaDB in its standard setup does not allow root access unless you are root on the system; in that case you can log in without a password, as the standard MariaDB setup uses a plugin to check whether you are root and then automatically grants you access. And we normally log in as non-root. So now what?

Database User & Password

If you are used to accessing the database on your VPS as root you will fail to get access. Instead of root and its password you should enter the database user and password for the database in question. That way you avoid having to ssh into the box as root, which you normally do not want, nor should want.

Figuring out the Username

To figure out the username you normally just need the name of your project and whether it is development, production or staging. Normally it is example_com, based on your chosen example.com project name.

Or – to be really certain – you ssh into your box, change to root, mysql -u root into the database server and then check which user is for the database.

So you would do

ssh admin@domain.com
sudo su
mysql -u root
use mysql;

NB Use vagrant ssh for local access, but we are accessing production in this example.

Check Existing Users in Database

And then you would check for all users to figure the user for the database:

MariaDB [mysql]> select user,plugin FROM user;
+----------------+-------------+
| user           | plugin      |
+----------------+-------------+
| root           | unix_socket |
| root           |             |
| root           |             |
| root           |             |
| sub_domain_com |             |
+----------------+-------------+
5 rows in set (0.01 sec)

As you can see, root has unix_socket access and the only non-root user is sub_domain_com (name changed). That is the user you should use, together with the password you added in your vault.yml. So normally it is domain_com as the user.

Database Access Granted

Once you change the database user and add the correct password you can access the database and make a backup, for example. Or do other manipulations like you would normally do in the database with Sequel Pro.

Trellis Let’s Encrypt Expired SSL / TLS certificate – Cause and Solution

Though Trellis VPS setups should renew the Let's Encrypt certificate automatically, I had a certificate expire for one of my Trellis sites the other day. Here is how I found out about the Trellis Let's Encrypt expired SSL/TLS certificate and how I went about solving it.

Google Search Console Warning

Fortunately I found out about it in time, and not by visiting the site itself: I got a warning from the Google Search Console:

Expired SSL/TLS certificate on https://domain.com

To: Webmaster of https://domain.com

Google has noticed that the SSL/TLS certificate for https://imagewize.nl/ has expired. This means that your website is not perceived as secure by some browsers. As a result, many web browsers will block users by displaying a security warning message when your site is accessed. This is done to protect users’ browsing behavior from being intercepted by a third party, which can happen on sites that are not secure.

Recommended Action:

Get a new certificate
To correct this problem, renew or get a new SSL/TLS certificate. This should be from a Certificate Authority (CA) that is trusted by web browsers.

And because Trellis uses HSTS I could not even continue to the site; with HSTS the browser normally offers no option to continue past the warning anyway.

Renew Certificate Manually

I checked Roots Discourse and found out I could use the following command to renew the certificate manually:

ansible-playbook server.yml -e env=production -K --tags letsencrypt

That was not enough though. I had to restart NGINX as well. And when I logged in I was told a reboot was needed too:

dhc-user@domain.com:~$ sudo service nginx restart
dhc-user@domain.com:~$ sudo reboot

Cause Failed Auto Renew

Perhaps the server was not accessible during an auto renew request, though I did not see errors indicating it was down. I could also not find errors like:

2016/07/08 02:41:31 [error] 7259#7259: ocsp.int-x3.letsencrypt.org could not be resolved (110: Operation timed out) while requesting certificate status, responder: ocsp.int-x3.letsencrypt.org

as mentioned in a thread at Roots Discourse here. So I did not have issues accessing Let’s Encrypt for the renewal either.

Trellis Cron jobs

I checked for cron jobs next using:

for user in $(cut -f1 -d: /etc/passwd); do crontab -u "$user" -l 2>/dev/null; done

as root, but no cron jobs were shown for any users. And we do need one to run the Let's Encrypt auto renew, of course. Then I mentioned this on Roots Discourse and got this comment from CFX:

Please ensure your cron job (normally at /etc/cron.d/letsencrypt-certificate-renewal) has the full path to service in its reload command (original PR2). That was apparently my issue too—renewal was just fine but nginx never reloaded after renewal.

This way I learned the location of the cron jobs, and also that the path used to reload NGINX can sometimes be the issue, so a full path is needed in those cases. So I adjusted the path. A little later it was confirmed that this was the main cause.


So through the comment by CFX I found out the cron jobs for Trellis are stored in /etc/cron.d, and the Let's Encrypt one is therefore at /etc/cron.d/letsencrypt-certificate-renewal. This is the one location for cron jobs I did not check. That file contains:

30 4 1,11,21 * * root cd /var/lib/letsencrypt && ./renew-certs.py && /usr/sbin/service nginx reload

This is a perfectly normal cron job, and it now also has the full path needed to reload NGINX properly. With that fixed, the auto renewal should work again like it should.

NB Reading up on cron jobs at the Ubuntu Wiki, I found /etc/cron.d mentioned there too. It is a place where one-liners for particular users are often stored instead of using a crontab.
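For future checks, the per-user sweep and the system-wide locations can be combined into one pass. A sketch; run it as root, with errors silenced since most users have no crontab:

```shell
# List per-user crontabs, then the system-wide files cron also reads.
for user in $(cut -f1 -d: /etc/passwd); do
  crontab -u "$user" -l 2>/dev/null || true
done
cat /etc/crontab 2>/dev/null || true
ls /etc/cron.d 2>/dev/null || true
```

Had /etc/cron.d been part of the sweep from the start, the letsencrypt-certificate-renewal entry would have shown up immediately.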

Custom PHP Settings in Trellis

Often when you set up a Trellis server you find out your PHP settings are not good enough for the WordPress app you are building. Simply because you run a plugin like WooCommerce that needs more than the default 96MB PHP memory. Or because you are a fan of a slider like Revolution Slider that needs a larger upload max. file size or larger max. post size than the standard 25M. So how do you deal with custom PHP settings in Trellis?

Default PHP Values Trellis

The default values for PHP in Trellis are currently:

disable_default_pool: true
memcached_sessions: false

php_error_reporting: 'E_ALL & ~E_DEPRECATED & ~E_STRICT'
php_display_errors: 'Off'
php_display_startup_errors: 'Off'
php_max_execution_time: 120
php_max_input_time: 300
php_max_input_vars: 1000
php_memory_limit: 96M
php_mysqlnd_collect_memory_statistics: 'Off'
php_post_max_size: 25M
php_sendmail_path: /usr/sbin/ssmtp -t
php_session_save_path: /tmp
php_upload_max_filesize: 25M
php_track_errors: 'Off'
php_default_timezone: '{{ default_timezone }}'

php_opcache_enable: 1
php_opcache_enable_cli: 1
php_opcache_fast_shutdown: 1
php_opcache_interned_strings_buffer: 8
php_opcache_max_accelerated_files: 4000
php_opcache_memory_consumption: 128
php_opcache_revalidate_freq: 60

php_xdebug_remote_enable: "false"
php_xdebug_remote_connect_back: "false"
php_xdebug_remote_host: localhost
php_xdebug_remote_port: "9000"
php_xdebug_remote_log: /tmp/xdebug.log
php_xdebug_idekey: XDEBUG
php_max_nesting_level: 200

NB You can always check the latest setup for it here at the Trellis repository

This main.yml file is located at roles/php/defaults/main.yml. Based upon it, Trellis sets up your php.ini values for you.

Setting Custom PHP Settings in Trellis

So what I normally do is change three options to the following values:

php_memory_limit: 256M
php_post_max_size: 32M
php_upload_max_filesize: 32M

This is to run WooCommerce well and to allow Revolution Slider to work with larger files and a larger PHP post_max_size. Sometimes I also change the maximum execution time:

php_max_execution_time: 120

to double that value, if I need more time for the execution of certain scripts.

Updating Trellis

Now that you have set your own new values you will also have to re-provision your Trellis server. To do this on production use:

ansible-playbook server.yml -e env=production

To do it locally on your Vagrant box you just have to run

vagrant provision

WordPress FYI

WooCommerce requires at least 256M of memory. To obtain this, its documentation also states that the following code snippet:

define('WP_MEMORY_LIMIT', '256M');

needs to be added to wp-config.php, besides making sure your server allows this via php.ini or a .htaccess for example. Why? Because you need to tell WordPress to actually use the 256M of RAM. By default WordPress only allocates 40MB for single-site setups.
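Note that wp-admin operations use a separate cap. If backend tasks still run out of memory, WordPress's companion constant WP_MAX_MEMORY_LIMIT can be raised as well (values here are examples):

```php
define('WP_MEMORY_LIMIT', '256M');      // front-end memory cap
define('WP_MAX_MEMORY_LIMIT', '512M');  // wp-admin memory cap
```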

Read more about this in the WordPress Codex here.

Changing Site URL with WP-CLI

If you quickly need to change the WordPress site url on a VPS or dedicated server where WP-CLI is installed or installable, WP-CLI is your friend. It is a great command line option for dealing with url changes during site migration. There are of course great plugins that help too, but believe me, with full server or VPS access a command line tool like WP-CLI can deal with the url issues in no time!

Development to Production

If you need to move the database from development to production because you are going live with the content, you can do this: replace the database on the production server with the one from the development box, then use WP-CLI to change the urls in a jiffy. Just ssh into your production server and run the search-replace command like this:

wp search-replace 'example.dev' 'example.com' --skip-columns=guid

Afterwards all will work well. All content will be loaded properly with the right urls. Images should load too, as long as you moved those to the server as well. I do this often for all new Trellis LEMP box setups at Digital Ocean and I can tell you it works really well. A lot better than the WordPress Importer tool from the Dashboard. Though the importer does work quite well from the command line, you still need to deal with images.

Dry Run – Better Safe than Sorry

You can also do a dry run first by adding --dry-run. This shows which urls will be changed and lets you make sure you are running the correct command. The command will then be:

wp search-replace 'example.dev' 'example.com' --skip-columns=guid --dry-run

Actually pretty smart to use, especially if you are dealing with a lot of data and/or complicated urls. Better safe than sorry. It will save you from having to restore it all, with or without a backup.

Backup, backup!

Do remember it is always good to back up the database before you do this, just in case you do not fill in the urls properly when you run things live. Sequel Pro is a great OSX database management tool for backups, changes and replacements of databases.
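With WP-CLI already on the server, the backup itself is a one-liner too. wp db export is a standard WP-CLI command; in this sketch the export line is commented out so it only builds the timestamped filename:

```shell
# Build a timestamped dump name, then export the database with WP-CLI.
backup_file="backup-$(date +%Y%m%d%H%M%S).sql"
echo "$backup_file"
# wp db export "$backup_file"   # run this from the WordPress site root
```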

There is more..

You can do way more with WP-CLI than just searching and replacing urls. You can do full WordPress installations, install themes and plugins, add fields, back up setups and more. I will be sure to write about some other applications as soon as I have the time. Stay tuned!

Keeping your Trellis Server Updated

When you manage client servers with Trellis you will every now and then have to update the server. This keeps Ubuntu, NGINX, MariaDB and PHP updated, and it also patches security holes and pulls in the latest Linux libraries. So keeping your Trellis server updated is pretty important. Fortunately with Trellis it is all pretty straightforward.

Local Server Update

When there is a new update of the Ubuntu Bento Box you will see that when you start your Vagrant Box from the command line:

==> default: Removing hosts
==> default: Checking if box 'bento/ubuntu-16.04' is up to date...
==> default: A newer version of the box 'bento/ubuntu-16.04' is available! You currently
==> default: have version '2.2.9'. The latest is version '2.3.0'. Run
==> default: `vagrant box update` to update.

I really love that. No need to ssh into your box to do updates unless you want specific tweaks. Just keep up with the latest Bento Box updates and you should be just fine in the majority of the cases.

You can do a local update using the following command:

vagrant box update

This will then download the latest Bento Box from the Hashicorp Atlas Server and update your Vagrant Box. Bento is a great box based on an encapsulated Hashicorp Packer template for Vagrant Boxes maintained by the Chef team. The repo is here.

When all goes well you will see a message like:

==> default: Successfully added box 'bento/ubuntu-16.04' (v2.3.0) for 'virtualbox'!

If something went wrong you will see a warning and/or errors. Often this means you have to do it all again, which from my location means spending another 20-30 minutes. So keep your fingers crossed at all times!

Remote Staging or Production Server Update

To update your live or production server or your staging server – if you use one – you will need to fire the following command:

ansible-playbook server.yml -e env=<environment>

This command is used for provisioning a server, but also for updating the Trellis server. It is an Ansible playbook, as Trellis is basically a set of playbooks to manage your server locally as well as remotely. So open the terminal, go to the trellis folder and type in

ansible-playbook server.yml -e env=production

NB server.yml can be viewed here
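Before touching production, a couple of variations of the same command can reduce risk. These are standard ansible-playbook flags, not Trellis-specific ones, and the `nginx` tag below is only an example – check server.yml for the tags your Trellis version actually defines:

```shell
# Dry run: resolve everything and report what would change, without changing it
ansible-playbook server.yml -e env=production --check

# Limit the run to one role via tags, e.g. only re-provision nginx
ansible-playbook server.yml -e env=production --tags nginx

# Full provisioning run against production
ansible-playbook server.yml -e env=production
```

A `--check` run first is a cheap way to see whether an update will touch more than you expect.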

The above command updates the production box, for example. Do remember the remote site will not be accessible, or not fully accessible, while you do this. So make sure you run it during slow traffic hours. Sometimes the update does not go well, due to connection issues, issues with the Let’s Encrypt certificates or other problems. Run it again and all should work out in the end. If things go well you will see the following:

PLAY RECAP *********************************************************************

imagewize.nl               : ok=102  changed=6    unreachable=0    failed=0   

localhost                  : ok=0    changed=0    unreachable=0    failed=0   

So as you can see it all went well and six tasks reported changes:

TASK [fail2ban : ensure fail2ban is configured] ********************************
changed: [imagewize.nl] => (item=jail.local)
ok: [imagewize.nl] => (item=fail2ban.local)

TASK [ferm : ensure iptables INPUT rules are added] ****************************
ok: [imagewize.nl] => (item={u'dport': [u'http', u'https'], u'type': u'dport_accept', u'filename': u'nginx_accept'})
changed: [imagewize.nl] => (item={u'dport': [u'ssh'], u'type': u'dport_accept', u'saddr': [u'']})
ok: [imagewize.nl] => (item={u'dport': [u'ssh'], u'seconds': 300, u'hits': 20, u'type': u'dport_limit'})

TASK [users : Setup users] *****************************************************
changed: [imagewize.nl] => (item={u'keys': [u'ssh-rsa 
key== user@imagewize.com', u'https://github.com/jasperf.keys'], u'name': u'web', u'groups': [u'www-data']})
changed: [imagewize.nl] => (item={u'keys': [u'ssh-rsa 
keyw== user@imagewize.com', u'https://github.com/jasperf.keys'], u'name': u'dhc-user', u'groups': [u'sudo']})

TASK [users : Add SSH keys] ****************************************************
changed: [imagewize.nl] => (item=({u'name': u'web', u'groups': [u'www-data']}, u'ssh-rsa A
key== user@imagewize.com'))
changed: [imagewize.nl] => (item=({u'name': u'web', u'groups': [u'www-data']}, u'https://github.com/jasperf.keys'))
changed: [imagewize.nl] => (item=({u'name': u'dhc-user', u'groups': [u'sudo']}, u'ssh-rsa 
key== user@imagewize.com'))
changed: [imagewize.nl] => (item=({u'name': u'dhc-user', u'groups': [u'sudo']}, u'https://github.com/jasperf.keys'))

TASK [mariadb : Add MariaDB MySQL deb and deb-src] *****************************
ok: [imagewize.nl] => (item=deb http://nyc2.mirrors.digitalocean.com/mariadb/repo/10.0/ubuntu trusty main)
ok: [imagewize.nl] => (item=deb-src http://nyc2.mirrors.digitalocean.com/mariadb/repo/10.0/ubuntu trusty main)
TASK [mariadb : Restart MariaDB MySQL Server] **********************************
changed: [imagewize.nl]

RUNNING HANDLER [fail2ban : restart fail2ban] **********************************
changed: [imagewize.nl]

In this case keys were changed, MariaDB was upgraded and Fail2Ban was configured anew.
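The PLAY RECAP line is also handy to script against. A minimal sketch, assuming the recap format shown above, that pulls the counters out with awk so a wrapper script can abort when something failed:

```shell
# Sample recap line, copied from the output above
recap='imagewize.nl               : ok=102  changed=6    unreachable=0    failed=0'

# Split on the "failed=" label; $2 then starts with the number, and +0 coerces it
failed=$(echo "$recap" | awk -F'failed=' '{print $2+0}')
echo "failed tasks: $failed"   # prints "failed tasks: 0"

# Same trick for the unreachable counter; abort a deploy script unless both are zero
unreachable=$(echo "$recap" | awk -F'unreachable=' '{print $2+0}')
[ "$failed" -eq 0 ] && [ "$unreachable" -eq 0 ] && echo "recap clean"
```

In a CI or cron context this saves you from eyeballing the recap after every provisioning run.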

Ansible Package Updates

Sometimes you need quicker updates or just want to take care of other package updates. You may want to patch security holes or urgently need the latest version of a package. There is a Roots Discourse thread on updates where this is mentioned here. Swalkinshaw mentions there that you can do three things for manual updates:

  • Add apt upgrade as a task for a server-wide upgrade – I did this before myself, but do test locally or on staging first!
  • Manually specify versions for the Ansible tasks run during provisioning – see the Ansible apt module documentation
  • Or add state=latest to any apt action
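The last two options can also be tried as ad-hoc calls to Ansible's apt module before you bake them into a task. In this sketch the `web` group and `hosts/production` inventory path follow the usual Trellis layout, and `nginx` with the `1.10.*` version is just a placeholder package:

```shell
# 1. Server-wide package upgrade (the "apt upgrade as a task" option)
ansible web -i hosts/production -m apt -a "upgrade=safe update_cache=yes" --become

# 2. Pin an explicit version of a single package
ansible web -i hosts/production -m apt -a "name=nginx=1.10.* state=present" --become

# 3. Track the latest available version of a package
ansible web -i hosts/production -m apt -a "name=nginx state=latest" --become
```

Running these ad hoc first shows you exactly what the module will change, after which the same arguments drop straight into a playbook task.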

Real Guess has some scripts for Ubuntu updates here that I may implement in the future. They use the aforementioned Ansible apt module. The examples in the Ansible documentation are also very useful.

Quick Security Updates

I do run updates manually every now and then, and I sometimes do them on the server to deal with security issues. Not as deterministic as I would like, but it does the trick. I will incorporate some of the commands into tasks as soon as I can and write another blog post on it.

Earlier I had this shown on logging into one of my servers at Digital Ocean:

Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-34-generic x86_64)

 * Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage

System information as of Thu Oct 6 06:49:26 UTC 2016

System load: 0.09 Processes: 119
 Usage of /: 20.8% of 24.58GB Users logged in: 0
 Memory usage: 48% IP address for eth0:
 Swap usage: 1%

Graph this data and manage this system at:

Get cloud support with Ubuntu Advantage Cloud Guest:

95 packages can be updated.
44 updates are security updates.

This was after a general provisioning. So to take care of the security packages I ran:

sudo unattended-upgrades -d

  • the -d flag gives you extra debug information

which works when the unattended-upgrades package is installed; it is on the Bento box. Running it updates all unattended security packages – SO Thread. Sometimes that includes regular packages submitted by security maintainers, or so it seems. When I logged in again I saw:
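You can also check the pending-update counts yourself instead of waiting for the login banner, and rehearse the upgrade before applying it. The apt-check path below is where stock Ubuntu 16.04 keeps the script that generates the MOTD numbers; adjust for other releases:

```shell
# Report pending updates the same way the MOTD banner does,
# e.g. "95 packages can be updated. 44 updates are security updates."
/usr/lib/update-notifier/apt-check --human-readable

# Rehearse first: --dry-run resolves and reports everything but installs nothing
sudo unattended-upgrades --dry-run -d

# Then apply the unattended security upgrades with debug output
sudo unattended-upgrades -d
```

The dry run is worth the extra minute on a production box: it surfaces held-back or conflicting packages before anything is touched.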

59 packages can be updated.
6 updates are security updates.

Packages that were updated were:


So quite a few security packages were updated, including SSL ones. New ones popped up and can be dealt with in the same way. This approach is also pretty safe and clean. It is better to build it into your Trellis workflow of course; I am working on another task for that. All in all, though, it is a good way to deal with security on your server in a semi-automated and clean fashion. There are ways to run this in the background too, but so far I have preferred doing it by hand, especially with the -d flag, to keep everything under control.

So there you have it – keeping your Trellis server updated in a nutshell. You got feedback or questions? Leave a comment below and feel free to retweet or post to Facebook.

PS WordPress core, plugin and theme updates are handled with

./deploy.sh production site.domain

following a composer update. Another blog post on that later on.
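For reference, that flow looks roughly like this, assuming the usual Roots layout of a project folder containing trellis/ and site/ (Bedrock) side by side – folder names and the site key are placeholders for your own setup:

```shell
# Update WordPress core and plugin versions via Composer in the Bedrock site
cd site
composer update

# Deploys pull from the repo, so commit and push the updated lockfile first
git commit -am "Update Composer dependencies" && git push

# Then run the Trellis deploy for that environment and site
cd ../trellis
./deploy.sh production site.domain
```

The deploy script is just a thin wrapper around the deploy.yml playbook, so the same rerun-on-failure advice from provisioning applies here too.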