  • How to avoid losing your cron jobs one day

    In my 14 years of working with Linux I’ve never done that… But there comes a time in every man’s life when he accidentally issues crontab -r instead of crontab -e.

    Yes, that removes the current crontab. Yes, the “e” and “r” keys are conveniently located next to each other. No, it doesn’t prompt for any sort of confirmation. And yes, as the old saying goes, *nix is very user-friendly – it’s just choosy about who its friends are.

    What to do:

    • Backups, obviously (see the sketch after this list);
    • If you’re messing with the crontab, chances are, you’ve already edited it recently. Use a terminal emulator with an “instant replay” feature, like iTerm2, to go back in time slightly;
    • Grab the commands from the syslog. On Debian, this is a good starting point:
      grep CRON /var/log/syslog | awk '{$1=$2=$3=$4=$5=$6=$7=""; print $0}' | sort | uniq
    • Add an alias to your favorite shell so that removal always prompts for confirmation: alias crontab="crontab -i"
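
    Expanding on the first point: the crontab can back itself up. A minimal sketch (the backup directory is an assumption and must exist beforehand):

      # Hypothetical entry: dump the current crontab to a dated file every day.
      # Note the escaped %: cron treats a bare % in a command as a newline.
      0 3 * * * crontab -l > "$HOME/.crontab-backups/crontab.$(date +\%F)"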

  • proxy_pass based on GET args in nginx

    A bit of a non-trivial task: as it turns out, the location directive will not match anything after the ? in the URL (i.e. the query string). Furthermore, attempting to work around this by putting the proxy_pass inside an if block will likely result in an error like this one:
    nginx: [emerg] "proxy_pass" cannot have URI part in location given by regular expression, or inside named location, or inside "if" statement, or inside "limit_except" block in /etc/nginx/conf.d/domain.com.conf:23

    There is a workaround; it’s not pretty, but it should do the trick (buying you time to hopefully refactor the URLs in your app into something more sensible). In the example below, I needed to match a couple of MD5 hashes and proxy those requests to host B, while proxying the rest of the requests to host A.

    location / {
        error_page 418 = @myredir;
        if ( $args ~ "c=[0-9a-f]{32}" ) { return 418; }

        proxy_pass http://host_a:80;
        proxy_set_header Host yourdomain.com;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_connect_timeout 120;
        proxy_send_timeout 120;
        proxy_read_timeout 180;
    }

    location @myredir {
        proxy_pass http://host_b:80;
        proxy_set_header Host yourdomain.com;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_connect_timeout 120;
        proxy_send_timeout 120;
        proxy_read_timeout 180;
    }

    Note that the error code we are “throwing” is actually part of an old April Fools’ RFC (RFC 2324, which gave us 418 I’m a teapot). You can use any other code that is not a valid HTTP response code.
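
    A quick way to smoke-test the routing from the shell (the hostname is a placeholder, and the hash below is just an example 32-character hex string):

    # Hypothetical check: the first request matches the c= pattern and should be
    # proxied to host_b; the second should go to host_a.
    curl -sI 'http://yourdomain.com/?c=d41d8cd98f00b204e9800998ecf8427e'
    curl -sI 'http://yourdomain.com/'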

  • Quick tip: emulating plain text files directly in nginx config

    Every once in a while I end up setting up another reverse proxy/rewrite engine with nginx. To keep such configs easily portable, you can do something like this:

    location = /robots.txt {
        default_type text/plain;
        return 200 "User-agent: *
    Disallow: /
    ";
    }

    With the example above, this eliminates the need to distribute an actual robots.txt file. If you intend to serve HTML instead, change the MIME type to text/html accordingly.
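
    To confirm the emulated file comes out as intended (the hostname is again a placeholder):

    # Expect a 200 with Content-Type: text/plain and the inline body from the config.
    curl -si 'http://yourdomain.com/robots.txt'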

  • Be careful with innodb_force_recovery

    Recently, I needed to import a large SQL dump into a server running MariaDB 10.1. I was surprised to see the following message pop up in the middle of the import:

    ERROR 1036 (HY000) at line 1111: Table 'clients' is read only

    These kinds of messages are typically encountered when DB files are copied directly and filesystem-level permissions are amiss. In my case though, I had a consistent SQL dump that just wouldn’t import. Attempting to do any inserts into the table using the command line client would fail with the same error message. It’s worth noting that the database was a mix of MyISAM and InnoDB tables, and the MyISAM tables imported fine.

    As it turns out, the culprit was the following line in my.cnf:

    innodb_force_recovery=3

    This line was inherited from a previous setup. Recent versions of MySQL switch the InnoDB engine into read-only mode when this value is set to 4 or higher. Apparently, MariaDB takes this one step further and switches to read-only at a value of 3.
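
    A quick way to check what the running server actually uses (it should be 0 in normal operation):

    # Prints the current value; if it is non-zero, remove the line from my.cnf
    # (or set it to 0) and restart the server.
    mysql -e "SHOW VARIABLES LIKE 'innodb_force_recovery'"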

  • Shell scripting: “while read” loops breaking early

    I stumbled upon this issue a couple of days ago while trying to debug a deployment script. The script would read entries from a CSV file and provision VMs one by one. The issue was that only the first entry from the file would get processed, after which the script would exit normally as if there were no more entries.

    As it turns out, if you are using a “while read” construct to read something from a file and the loop body contains a command that also reads from stdin… well, that command will eat up the rest of the lines from the file, naturally causing the while loop to end early.
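
    Here is a minimal repro of the effect, with cat standing in for any stdin-reading command:

    printf 'a\nb\nc\n' > /tmp/lines
    while read -r line; do
      echo "got: $line"
      cat > /dev/null   # silently consumes the remaining lines from stdin
    done < /tmp/lines
    # prints only "got: a"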

    In my case I had an ssh command in the loop. All it took to fix it was to stop ssh from reading stdin by adding the “-n” flag:

    while IFS="," read -r newhost newipaddr; do
      ssh -n "$newhost" "my remote commands"
    done < newhosts.csv
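
    An alternative sketch, in case the loop body legitimately needs stdin: feed the file to read through a separate file descriptor instead.

    # fd 3 carries the CSV, leaving stdin free for commands inside the loop.
    while IFS="," read -r newhost newipaddr <&3; do
      ssh "$newhost" "my remote commands"
    done 3< newhosts.csv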