Using Salt with services managed by Monit

September 29, 2015

I use Salt for configuration management on all my newer servers. I’m particularly fond of its watch requisite, and use it liberally to handle restarting all my Salt-managed services when needed.

This generally works without much fuss. However, when a service that’s managed by Salt is also being managed by Monit, there’s a chicken and egg problem: when Salt automatically restarts a service using its standard method, Monit thinks the service has died, and problems ensue.

I really wanted both my Salt service management goodness and Monit, so after a bit of thinking I came up with the idea of creating a pseudo-service. Salt manages the pseudo-service instead of the regular service, and the pseudo-service simply wraps the Monit commands for starting, stopping, and restarting the service. This allows Salt to trigger service actions via Monit.

This approach solved the issue beautifully without much code or effort. Below I’m including the RHEL/CentOS version of the pseudo-service – other platforms would require some minor adjustments, but the idea is the same.

As seen from the line defining the service name, I manage this script in Salt as a template, so that I can install it for multiple services.
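
For illustration, here’s a minimal sketch of the Salt state involved – the state IDs, paths, and watched file below are hypothetical stand-ins, not my exact configuration:

/etc/init.d/myapp:
  file.managed:
    - source: salt://monit/files/monit-wrapper.sh
    - template: jinja
    - mode: 755
    - context:
        service_name: myapp

myapp:
  service.running:
    - watch:
      - file: /etc/myapp/myapp.conf

When the watched config file changes, Salt restarts the myapp service through the wrapper, which delegates the restart to Monit rather than hitting the process directly.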

Hope this helps anybody who gets stuck on this same issue!

#!/usr/bin/env sh
#
# chkconfig: - 99 12
# description: Start apps, monit wrapper.
#
# This script uses the monit process to handle starting and stopping a
# service. This can be used in server configuration management software to
# manage the service programmatically, in the case where it's already being
# managed via monit.

SERVICE={{ service_name }}
PIDFILE=/var/run/${SERVICE}.pid

# Load the function library.
. /etc/init.d/functions

RETVAL=0

rh_status() {
  status -p ${PIDFILE} ${SERVICE}
}

rh_status_quiet() {
  rh_status >/dev/null 2>&1
}

start() {
  echo "Starting ${SERVICE}..."
  /usr/bin/monit start ${SERVICE}
}

stop() {
  echo "Stopping ${SERVICE}..."
  /usr/bin/monit stop ${SERVICE}
}

restart() {
  echo "Restarting ${SERVICE}..."
  /usr/bin/monit restart ${SERVICE}
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  restart)
    restart
    ;;
  status)
    rh_status_quiet
    # Pass the real service status back to the caller.
    RETVAL=$?
    ;;
  *)
    echo $"Usage: $0 {start|stop|restart|status}"
    RETVAL=1
esac

exit $RETVAL

Vagrant crash course for the busy developer

May 12, 2015

Today a friend remarked on his interest in learning Vagrant, and his lack of time to do so. It occurred to me that the script I’ve been building to quickly spin up and tear down Vagrant VMs could be a handy aid along the shortest path for developers to begin their journey into this amazing toolset.

So without further ado, the shortest path I know of for a busy dev to start playing with Vagrant:

  • Install VirtualBox and Vagrant

    …or for Homebrew Cask users, the even easier brew cask install virtualbox vagrant

  • Run the following from CLI:

    vagrant box add bento/centos-7.1 --provider virtualbox
    vagrant box add bento/debian-8.2 --provider virtualbox

    …or pick some other box you like here.

  • Download the script below, stick it in your PATH, and make it executable (see the snippet after this list)
  • Execute without args for the well-written help

    …or run quick-vagrant.sh -c to spin up your first box…

  • Enjoy :)
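
To make the download/install steps concrete, here’s roughly what that looks like – assuming you’ve saved the script as quick-vagrant.sh and that ~/bin is in your PATH:

mkdir -p ~/bin
mv quick-vagrant.sh ~/bin/
chmod +x ~/bin/quick-vagrant.sh

# No arguments prints the help.
quick-vagrant.sh

# Create your first box interactively.
quick-vagrant.sh -c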

I also made the script a bit like a cheat sheet for Vagrant commands – if you search it for ‘vagrant’, you’ll see that the first time any command is used, there’s a comment describing what it does. Or, here’s a more neatly formatted cheat sheet.

If you find this helpful getting up to speed, please pass it along to your busy developer friends! If you’d like to suggest improvements to the script, feel free to contact me.

#!/bin/bash
# Spins up a quick Vagrant box. Run without arguments for help.
vagrant_dir="${HOME}/vagrant"
box_dir="temp"
ssh_port="2202"
ssh_pubkey_path="${HOME}/.ssh/id_rsa.pub"
custom_script="${HOME}/bin/custom-server-tools.sh"
vagrant_public_box_url="https://app.vagrantup.com/boxes/search"
usage_short() {
  local program=`basename ${0}`
  echo "
Usage:

  # Help
  ${program} -h

  # Box management
  ${program} -u [-p <boxes-dir>]
  ${program} -l [-p <boxes-dir>]
  ${program} -r [-p <boxes-dir>]
  ${program} -d [-p <boxes-dir>]
  ${program} -c [-p <boxes-dir>] [-k <ssh-pubkey-file>] [-s <custom-script>] [box-dir] [ssh-port]
"
}
usage() {
  usage_short
  echo "This script provides support for developers who need to create, start, stop
and remove lots of Vagrant boxes with ease.

It handles the most common extra tasks around creating a Vagrant box to get it
to a state of immediate use for local development, and provides an interface
to quickly tear things down when no longer needed. It also includes some simple
switches for selectively starting, stopping, and restarting VMs.

Note that you have to have added boxes to your local Vagrant install in order
for them to be available for quick creation. That can be accomplished by
running:

  vagrant box add [box path]

Where box path is a full URL to a box, or a relative path to a box hosted in
the public catalog -- this is a very easy place to find all the common distros.

For example, to install this box:

  https://app.vagrantup.com/bento/boxes/debian-8.9

You would run:

  vagrant box add bento/debian-8.9

Their search page is a great place to start:

  ${vagrant_public_box_url}

The script only installs the most basic Vagrant config needed to get the box
running. From there, Vagrant-specific customizations can be made.

A created box has these additional janitorial tasks completed:

  - Installs vagrant-vbguest plugin on host machine (auto Guest Additions
    updates)
  - SELinux set to permissive if necessary.
  - Sensible local hostname configured.
  - Rsync and Vim installed.
  - Root SSH access configured with a handy output of client-side SSH config.
  - Optional custom script executed if SSH client-side config has been
    pre-configured (very handy for loading additional customizations to the
    VM).

Arguments:

  -h: This help message.
  -u: Bring a box up. A list will be provided from ${vagrant_dir}.
  -l: Halt a box. A list will be provided from ${vagrant_dir}.
  -r: Reload a box. A list will be provided from ${vagrant_dir}.
  -d: Delete a box. A list will be provided from ${vagrant_dir}.
  -c: Create a box. The box will be created in box-dir inside the
      ${vagrant_dir} directory.
      box-dir: Directory to create the box under ${vagrant_dir}. Default is
        '${box_dir}'.
      ssh-port: Host port for SSH access. Default '${ssh_port}'.
  -m: Select multiple boxes for the action (only works for up/reload/halt).
  -p <path>: Override the base directory, default is '${vagrant_dir}'.
  -k <filepath>: Path to SSH pubkey to insert into the box's root user
      authorized_keys file. Default is '${ssh_pubkey_path}'.
  -s <filepath>: Path to a custom script to execute if an SSH pubkey is
      installed on the VM. Default is '${custom_script}'. You must have an
      entry in .ssh/config where the Host name matches the box-dir name, or
      the script will not execute.

CAVEATS:

  - Most testing on latest releases of CentOS 6.x/7.x and Debian 7.x/8.x VMs,
    should work for any RHEL or Debian variants, YMMV.
  - Assumes 64-bit installations.
"
}
create_box() {
  local full_path=${vagrant_dir}/${box_dir}
  if [ -d "${full_path}" ]; then
    echo "${full_path} already exists..."
    _confirm_delete_box ${box_dir}
  fi
  local hostname="${box_dir}.local"
  local box_list=`vagrant box list | awk '{print $1}'`
  if [ -z "${box_list}" ]; then
    echo "
No local boxes found! Only locally installed boxes are available for quick
install. Run 'vagrant box add <box name>' to install a box locally. A great
list of boxes can be found here:

  ${vagrant_public_box_url}
"
    exit 1
  fi
  PS3="Select box to deploy: "
  select box in ${box_list}; do
    mkdir -p $full_path
    cd $full_path
    # All Vagrant boxes must have a configuration file named Vagrantfile in
    # the directory the box data will be saved.
    # If 'vagrant init' is executed, a default Vagrantfile will be created in
    # the directory where the command was executed.
    # Here, we roll our own because of the custom SSH port.
    cat > ${full_path}/Vagrantfile << EOF
Vagrant.configure(2) do |config|
  config.vm.box = "${box}"
  # Vagrant usually checks for versioned box updates, this disables the check.
  config.vm.box_check_update = false
  # Share SSH locally by default
  config.vm.network :forwarded_port,
    guest: 22,
    host: ${ssh_port},
    id: "ssh"
  # In case the vagrant-vbguest plugin is installed.
  config.respond_to?(:vbguest) && config.vbguest.auto_update = false
  # Uncomment this and edit as appropriate to add a shared folder.
  #config.vm.synced_folder "/full/path/on/host/", "/full/path/on/vm/", owner: "root", group: "root"
  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--rtcuseutc", "on"]
    # Set timesync parameters to keep the clocks better in sync.
    # Sync time every 10 seconds.
    vb.customize [ "guestproperty", "set", :id, "/VirtualBox/GuestAdd/VBoxService/--timesync-interval", 10000 ]
    # Adjustments if drift > 100 ms.
    vb.customize [ "guestproperty", "set", :id, "/VirtualBox/GuestAdd/VBoxService/--timesync-min-adjust", 100 ]
    # Sync time on restore.
    vb.customize [ "guestproperty", "set", :id, "/VirtualBox/GuestAdd/VBoxService/--timesync-set-on-restore", 1 ]
    # Sync time on start.
    vb.customize [ "guestproperty", "set", :id, "/VirtualBox/GuestAdd/VBoxService/--timesync-set-start", 1 ]
    # At 1 second drift, the time will be set and not "smoothly" adjusted.
    vb.customize [ "guestproperty", "set", :id, "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold", 1000 ]
  end
end
# vi: ft=ruby
EOF
    # Start up a configured Vagrant VM.
    vagrant up

    echo "Upgrading kernel (if necessary)..."
    vagrant ssh -- "test -f /usr/bin/yum && sudo yum -y update kernel*"
    vagrant ssh -- "test -f /usr/bin/apt-get && sudo apt-get -q update && sudo apt-get -q -y install linux-image-amd64"
    # No kernel updates for FreeBSD, freebsd-update can be run manually after
    # install.

    echo "Ensuring gcc/make/kernel-devel are installed..."
    vagrant ssh -- "test -f /usr/bin/yum && sudo yum -y install gcc make kernel-devel"
    vagrant ssh -- "test -f /usr/bin/apt-get && sudo apt-get -q -y install gcc make linux-headers-amd64"

    echo "Resetting SELinux (if necessary)..."
    # Execute an SSH command on the VM. The part after the double dash is
    # what gets passed to the VM for execution. By default, it's executed
    # as a non-privileged user named 'vagrant'. This user has sudo access.
    # If 'vagrant ssh' is run with no arguments, an SSH connection will be
    # opened to the box under the default user.
    vagrant ssh -- "test -f /etc/selinux/config && sudo sed -i -e 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config"

    echo "Setting hostname..."
    vagrant ssh -- "test -f /etc/sysconfig/network && sudo sed -i -e 's/^HOSTNAME=.*/HOSTNAME=${hostname}/g' /etc/sysconfig/network"
    vagrant ssh -- "test -f /etc/hostname && echo ${hostname} | sudo tee /etc/hostname"
    vagrant ssh -- "test -f /etc/rc.conf && su - root -c \"sed -i -e 's/^hostname=.*/hostname=${hostname}/g' /etc/rc.conf\""

    # Without these hostfile entries, you can get long delays while DNS tries
    # to query for the host.
    echo "Configuring /etc/hosts..."
    vagrant ssh -- "echo '127.0.0.1 ${hostname} ${box_dir}' | sudo tee -a /etc/hosts"

    echo "Activating vagrant-vbguest plugin..."
    sed -i.bak "s/config\.vbguest\.auto_update = false$/config.vbguest.auto_update = true/" ${full_path}/Vagrantfile
    rm ${full_path}/Vagrantfile.bak
    echo "Checking for vagrant-vbguest plugin on host..."
    vbguest_exists=`vagrant plugin list | grep vagrant-vbguest`
    if [ -z "${vbguest_exists}" ]; then
      echo "Installing vagrant-vbguest plugin on host..."
      vagrant plugin install vagrant-vbguest
    fi

    echo "Rebooting server..."
    # Restart the server. Shortcut for 'vagrant halt; vagrant up'.
    vagrant reload

    # Let's make sure there's a way to sync over files, and a basic editor in place.
    echo "Installing basic packages..."
    vagrant ssh -- "test -f /usr/bin/yum && sudo yum -y install rsync vim-enhanced"
    vagrant ssh -- "test -f /usr/bin/apt-get && sudo apt-get -y install rsync vim"
    # The fstab entry is necessary for bash to be able to function in FreeBSD.
    vagrant ssh -- "test -f /usr/sbin/pkg && su - root -c '/usr/bin/yes | pkg install rsync vim bash' && su - root -c 'echo \"fdesc /dev/fd fdescfs rw 0 0\" > /etc/fstab'"

    # If the script finds a readable file at ${ssh_pubkey_path}, then it will
    # copy it to the authorized_keys file for the root user on the VM.
    if [ -r ${ssh_pubkey_path} ]; then
      echo "Setting up root SSH access..."
      local pubkey=`cat ${ssh_pubkey_path}`
      vagrant ssh -- "sudo mkdir -m 700 /root/.ssh"
      vagrant ssh -- "echo '${pubkey}' | sudo tee -a /root/.ssh/authorized_keys"
      ssh_config_exists=`grep "^Host ${box_dir}$" ${HOME}/.ssh/config`
      # If the custom script is executable, and a Host entry matching
      # ${box_dir} is found in the SSH config, execute the custom script.
      if [ -x ${custom_script} ] && [ -n "${ssh_config_exists}" ]; then
        echo "Executing ${custom_script}..."
        ${custom_script} ${box_dir}
      fi
    fi
    break
  done

  echo "
SSH config.

Add the following to ${HOME}/.ssh/config for quick
root access to the server:

Host ${box_dir}
  Hostname localhost
  Port ${ssh_port}
  User root
  HostKeyAlias ${box_dir}
"
}
_delete_box() {
  local box_dir=${1}
  echo "Removing ${vagrant_dir}/${box_dir} virtual machine..."
  cd ${vagrant_dir}/${box_dir}
  # Delete the VM. --force overrides the 'Are you sure?' prompt.
  vagrant destroy --force
  cd ${vagrant_dir}
  # Bit of defensive programming here, in case for some freaky reason
  # ${box_dir} is empty, we don't want to wipe the entire vagrant dir.
  if [ -n "${box_dir}" ]; then
    rm -rf ${vagrant_dir}/${box_dir}
  fi
  echo "Removal complete."
}

_confirm_delete_box() {
  local box_dir=${1}
  echo -n "Are you sure you want to remove ${vagrant_dir}/${box_dir}? (y/N): "
  read KILL_VM
  if [ "${KILL_VM}" = "y" ]; then
    _delete_box ${box_dir}
  else
    echo "User cancelled"
    exit 0
  fi
}

_box_list() {
  local all_boxes=`ls -1 ${vagrant_dir} | tr -d "/"`
  echo "${all_boxes}"
}
_box_command() {
  local cmd="${1}"
  shift
  local box_list=("$@")
  for box_dir in "${box_list[@]}"; do
    echo "Performing command '${cmd}' for box '${box_dir}'"
    if [ -f "${vagrant_dir}/${box_dir}/Vagrantfile" ]; then
      cd ${vagrant_dir}/${box_dir}
      vagrant ${cmd}
    else
      echo "ERROR: ${vagrant_dir}/${box_dir} has no Vagrantfile"
    fi
  done
}

_check_valid_selection() {
  box_list=("$@")
  if [ ${#box_list[@]} -eq 0 ] || [ -z "${box_list[0]}" ]; then
    echo "ERROR: Invalid selection"
    return 1
  fi
}
multiselect() {
  local -n final_choices=${1}
  local action=${2}
  local choices=()
  local options=()
  rebuild_choices() {
    local selection_idx="${1}"
    local new_array=()
    local deleted=
    for i in "${choices[@]}"; do
      if [[ "${i}" = "${selection_idx}" ]]; then
        deleted="1"
      else
        new_array+=(${i})
      fi
    done
    if [[ -z "${deleted}" ]]; then
      new_array+=(${selection_idx})
    fi
    choices=("${new_array[@]}")
  }
  get_multiselect_choices() {
    get_choice_number() {
      local options_idx="${1}"
      local choice_num=" "
      local count=0
      for i in ${choices[@]}; do
        ((count++))
        if [[ "${i}" = "${options_idx}" ]]; then
          choice_num="*${count}"
          break
        fi
      done
      echo "${choice_num}"
    }
    menu() {
      for i in ${!options[@]}; do
        printf "%s %3d) %s\n" "$(get_choice_number $i)" $((i+1)) "${options[i]}"
      done
      if [[ "$msg" ]]; then
        echo "$msg"
      fi
    }
    prompt="Select boxes to ${action}, hit ENTER when all are selected: "
    while menu && read -rp "$prompt" num && [[ "$num" ]]; do
      [[ "$num" != *[![:digit:]]* ]] &&
        (( num > 0 && num <= ${#options[@]} )) ||
        { msg="Invalid option: $num"; continue; }
      ((num--)); msg=""
      rebuild_choices ${num}
    done
  }
  build_select_options() {
    for box_dir in $(_box_list); do
      options+=("${box_dir}")
    done
  }
  build_final_choices() {
    for i in ${choices[@]}; do
      final_choices+=("${options[${i}]}")
    done
  }
  build_select_options
  get_multiselect_choices
  build_final_choices
}
_get_selected_boxes() {
  local action="${1}"
  local -n arr=$2
  if [ "${multiselect}" = "1" ]; then
    multiselect arr "${action}"
  else
    PS3="Select box to ${action}: "
    select box_dir in `_box_list`; do
      arr=("${box_dir}")
      break
    done
  fi
}

up_box() {
  local box_list
  _get_selected_boxes "bring up" box_list
  _check_valid_selection "${box_list[@]}" && _box_command up "${box_list[@]}"
}

halt_box() {
  local box_list
  _get_selected_boxes "halt" box_list
  _check_valid_selection "${box_list[@]}" && _box_command halt "${box_list[@]}"
}

reload_box() {
  local box_list
  _get_selected_boxes "reload" box_list
  _check_valid_selection "${box_list[@]}" && _box_command reload "${box_list[@]}"
}

delete_box() {
  PS3="Select box to delete: "
  select box_dir in `_box_list`; do
    _check_valid_selection "${box_dir}" && _confirm_delete_box ${box_dir}
    break
  done
}
action=
multiselect=
while getopts ":hdulrmcp:k:s:" option; do
  case ${option} in
    h )
      usage
      exit 0
      ;;
    d )
      action="delete_box"
      ;;
    u )
      action="up_box"
      ;;
    l )
      action="halt_box"
      ;;
    r )
      action="reload_box"
      ;;
    c )
      action="create_box"
      ;;
    m )
      multiselect=1
      ;;
    p )
      vagrant_dir=${OPTARG}
      ;;
    k )
      ssh_pubkey_path=${OPTARG}
      ;;
    s )
      custom_script=${OPTARG}
      ;;
  esac
done
shift $((${OPTIND} - 1))
if [ "${action}" = "create_box" ]; then
if [ -n "${1}" ]; then
box_dir=${1}
shift 1
if [ -n "${1}" ]; then
ssh_port=${1}
shift 1
fi
fi
fi
if [ $# -gt 0 ]; then
usage_short
exit 1
elif [ -z "${action}" ]; then
usage_short
exit 0
else
CWD=`pwd`
${action}
cd ${CWD}
exit 0
fi

General node init script for RHEL/CentOS 5.x/6.x

March 16, 2015

I wanted to have a general RHEL/CentOS init script that I could drop onto servers to start node processes. The features I wanted were:

  • Ability to reuse it for multiple node processes on the same server.
  • Ability to run as non-root user.
  • The classic service start/stop/restart/status functionality.

I had previously written a node init script using forever, but I ended up not liking it. This script still isn’t perfect, but it seems more generally useful to me.

Drop the script in /etc/init.d/[name] and make it executable, put the custom configuration in /etc/sysconfig/[name], then use service [name] start to fire it up.
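
For example, a hypothetical /etc/sysconfig/myapp overriding a few of the defaults might look like this (all values below are made up for illustration):

# Custom settings for the 'myapp' init script.
APP=/var/www/myapp/server.js
APP_ARGS="--port 3000"
LOGFILE=/var/log/myapp.log
NODE_USER="myapp"
NODE_ENV="production"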

#!/bin/bash
#
# chkconfig: 35 90 12
# description: Start node apps.
#
# This grabs the name of the init script. The weird string match handles
# the case when the init script is called on boot as /etc/rcN/S90[name].
INIT_SCRIPT_BASENAME=`basename $0`
SERVICE=${INIT_SCRIPT_BASENAME#S90}
#############################################################################
# CONFIGURATION BEGIN
#############################################################################
# These options can all be overridden in /etc/sysconfig/[name of init script],
# e.g. if your init script is /etc/init.d/foo, then /etc/sysconfig/foo.
# This allows this one init script to be reused multiple times.

# Full path to the app file you want to launch.
APP=/full/path/to/app.js

# Name of the file to log all console output from the node process. You
# should be able to use /dev/null here if you just want to throw it away.
LOGFILE=/full/path/to/log/file

# Any arguments you want to pass to the app.
APP_ARGS=""

# Full path to the node executable. The 'which' program can usually locate
# this for you, but you can also hard code this if necessary.
NODE=`which node`

# User to run the process as.
NODE_USER="node"

# Setting for the NODE_ENV environment variable.
NODE_ENV="development"

# Name of pid/lock files -- these are sensible defaults.
PIDFILE=/var/run/${SERVICE}.pid
LOCKFILE=/var/lock/subsys/${SERVICE}

# How many times to check (one second between checks) whether the node
# process has started successfully.
START_CHECK_LOOPS=5

# How many seconds to wait for the process to exit when stopping it.
STOP_TIMEOUT=10

#############################################################################
# CONFIGURATION END
#############################################################################

DAEMON_ARGS="--check $SERVICE --pidfile $PIDFILE"

# Load the function library.
. /etc/init.d/functions

# Override defaults with custom settings.
if [ -f /etc/sysconfig/${SERVICE} ]; then
  . /etc/sysconfig/${SERVICE}
fi
RETVAL=0
rh_status() {
  status -p $PIDFILE $SERVICE
}

rh_status_quiet() {
  rh_status >/dev/null 2>&1
}

start() {
  # Trim the whitespace from the command, needed for pgrep below.
  local NODE_COMMAND="$(echo -e "$NODE $APP $APP_ARGS" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')"
  if [ "$NODE_USER" != "root" ]; then
    DAEMON_ARGS="$DAEMON_ARGS --user $NODE_USER"
  fi
  # Starts the node app as a background process.
  echo -n "Starting ${SERVICE}: "
  daemon $DAEMON_ARGS NODE_ENV="${NODE_ENV}" $NODE_COMMAND >> $LOGFILE 2>&1 &
  # Writing the pidfile on behalf of the node process requires avoiding a race
  # condition with the daemon call. Pausing here allows the call to complete
  # before the pidfile is written.
  sleep 1
  # Also, unfortunately, the $! bash variable doesn't contain the proper pid
  # needed. This strategy should be pretty safe, since running the same app
  # with the same arguments using this init script doesn't make much sense.
  # Certain special characters in $APP_ARGS might break this functionality, not
  # tested extensively.
  local pid=`pgrep -f -n "$NODE_COMMAND"`
  if [ -n "$pid" ]; then
    echo $pid > $PIDFILE
    local count=1
    while true; do
      rh_status_quiet
      RETVAL=$?
      if [ $RETVAL -eq 0 ]; then
        touch $LOCKFILE && success || failure
        break
      else
        if [ $count -gt $START_CHECK_LOOPS ]; then
          rm -f $PIDFILE
          failure
          break
        else
          ((count++))
        fi
      fi
      sleep 1
    done
  else
    failure
  fi
  echo
}

stop() {
  echo -n "Stopping ${SERVICE}: "
  killproc -p $PIDFILE -d $STOP_TIMEOUT $SERVICE -INT
  RETVAL=$?
  if [ $RETVAL = 0 ]; then
    cleanup && success || failure
  else
    failure
  fi
  echo
}

restart() {
  stop
  start
}

cleanup() {
  rm -f $PIDFILE && rm -f $LOCKFILE
  return $?
}
case "$1" in
start)
rh_status_quiet && exit 0
start
;;
stop)
rh_status_quiet
if [ $? -eq 0 ]; then
stop
else
cleanup && exit 0
fi
;;
restart)
restart
;;
status)
rh_status
;;
*)
echo $"Usage: $0 {start|stop|restart|status}"
RETVAL=1
esac
exit $RETVAL

Installing Jekyll on RHEL/CentOS 5.x and 6.x with modern Ruby and Python versions

January 20, 2015

Jekyll can be run fairly easily on older (5.x and 6.x) versions of RHEL/CentOS, even though the stock versions of the necessary software are too old. The key is to use some simple supporting software to install what you need in a clean way.

The first handy tool is Ruby Version Manager, or RVM. Here’s a handy cheat sheet – use that to install RVM and the latest version of Ruby, set the installed Ruby as the default, and install the Jekyll gem.
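
In practice, that part boils down to something like the following – the Ruby version here is only an example, install whatever the current stable release is:

# Install RVM, then a modern Ruby, then Jekyll.
curl -sSL https://get.rvm.io | bash -s stable
source ~/.rvm/scripts/rvm
rvm install 2.2
rvm use 2.2 --default
gem install jekyll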

Once that’s done, you may run into this error when trying to use Jekyll:

Liquid Exception: Failed to get header

This Jekyll issue indicates that a Python version greater than 2.6 and less than 3.x needs to be installed in order for syntax highlighting to work. Never fear, pyenv to the rescue!

Use the pyenv installer to get pyenv set up and running on your server, and have a look at the pyenv command reference to install an appropriate version of Python and set it as the default. Note: If you’re running RHEL/CentOS 5.x, you’ll have one more hurdle to cross before using pyenv, see this post for detailed instructions.
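
With pyenv in place, grabbing a suitable Python is something like this – the exact version is an example, anything in the 2.7.x range should do:

pyenv install 2.7.8
pyenv global 2.7.8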

If you’d also like to set up your Jekyll site to update from a git push, I’ve detailed one workable approach in this post – if you do use it, just make sure you put the necessary RVM/pyenv environment setup in the .bashrc file of the user the git push command runs as on the server.

Happy Jekyll’ing!

Installing a working pyenv on RHEL/CentOS 5.x

January 19, 2015

RHEL/CentOS 5.x has long since lost its freshness, but some of us are still running servers with it, and can do so for several more years before its end of life.

Perhaps, like me, you have a need to run a more modern version of Python than 5.x installs by default. I recently found pyenv, and it looked to fit my needs perfectly, as I didn’t want to mess with the system version or build custom RPMs.

Once I installed the build requirements, I used the handy pyenv-installer project to get it up and running, then ran the simple command to install Python 2.7.8. Unfortunately, things hit a bump with this error:

Error message:
subprocess.CalledProcessError: Command '['wget', 'https://pypi.python.org/packages/source/s/setuptools/setuptools-7.0.zip', '--quiet', '--output-document', '/tmp/python-build.20141210170309.3741/Python-2.7.8/setuptools-7.0.zip']' returned non-zero exit status 1

After a bit of digging, I found that this was actually due to a bug in version 1.11 of wget. As far as I could tell, this issue was not fixed upstream until the 1.12 release, and CentOS 5.x is frozen at 1.11.

I decided it was worth building a custom wget RPM package for my 5.x servers to get past this issue. After setting up my RPM build environment, I headed over to rpm.pbone.net to locate a suitable source RPM. wget-1.12-4.fc14.src.rpm ended up suiting my needs – the newer versions of wget RPMs had some build dependencies that were a bit awkward to fulfill, and 1.12 would solve my problem, so…

Per http://wiki.centos.org/HowTos/RebuildSRPM, the --nomd5 switch is needed when installing newer Fedora source RPMs:

rpm --nomd5 -ivv wget-1.12-4.fc14.src.rpm

From there it was simply a matter of building the RPM:

cd /usr/src/redhat/SPECS
rpmbuild -bb wget.spec

Then installing it, by finding the wget-1.12-4 RPM file in one of the subdirectories of /usr/src/redhat/RPMS, changing to that directory, and running:

yum --nogpgcheck localinstall wget-1.12-4.x86_64.rpm

After this, pyenv install [version] should work as advertised… Python for the modern age!

Automating sysctl deployments using sysctl.d on RHEL/CentOS 5.x, 6.x, 7.x

January 8, 2015

I’ve fallen in love with automated server deployments in the last year, with my primary weapon being Salt.

One of the corner cases I’ve run into is adding sysctl settings specific to a feature set. For example, when a server needs Redis installed, I want to add the following kernel optimization via sysctl:

    vm.overcommit_memory = 1

It’s sloppy to add this to /etc/sysctl.conf – too hard to maintain in a modular fashion. Wouldn’t it be nice if there were a place we could drop a file with that sysctl setting in it, which would be automatically read on boot? That would make adding and removing multiple sysctl settings a breeze to automate.

Well, it turns out that RHEL/CentOS does have this support via /etc/sysctl.d. While only RHEL/CentOS 7.x sports the directory out of the box, all three versions provide access to it via init scripts, and anything placed in /etc/sysctl.d will be read on boot, provided that the networking init script’s start action is called (it’s enabled by default).

Unfortunately, this is a bit of an odd placement for triggering a reload of the sysctl settings. I also wanted the ability to only reload the sysctl settings as part of a feature installation on a running server.

The path to get this feature turned out to be pretty short. /etc/init.d/functions contains an apply_sysctl function which handles all the dirty work of completely reloading all sysctl settings, including those placed in /etc/sysctl.d. This extremely short wrapper script does the job:

#! /bin/bash
# Source function library.
. /etc/init.d/functions
apply_sysctl

Armed with that script, I simply use Salt to automatically install it to /usr/local/bin on all servers, and call it any time a file in /etc/sysctl.d is added, removed, or modified.
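
For the curious, the Salt side of that can be sketched roughly like this – the state IDs and the redis.conf file name are hypothetical:

/etc/sysctl.d/redis.conf:
  file.managed:
    - contents: |
        vm.overcommit_memory = 1

apply-sysctl:
  cmd.wait:
    - name: /usr/local/bin/apply_sysctl.sh
    - watch:
      - file: /etc/sysctl.d/redis.conf

Any change to the managed file fires the cmd.wait state, which reloads the full sysctl configuration via the wrapper script.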

Nginx logs in JSON format

July 14, 2012

I’ve recently decided that it’s a good idea to output server logs in JSON format. To this end, today I took some time to figure out how to do this for Nginx. The log_format directive is the one you want: I simply added another named format to the http section of nginx.conf, which then allows the named format to be used in any other config file. Here’s what I whipped up – this is just the default main format ported to JSON:

log_format  json  '{'
                    '"remote_addr": "$remote_addr",'
                    '"remote_user": "$remote_user",'
                    '"time_local": "$time_local",'
                    '"request": "$request",'
                    '"status": $status,'
                    '"body_bytes_sent": $body_bytes_sent,'
                    '"http_referer": "$http_referer",'
                    '"http_user_agent": "$http_user_agent",'
                    '"http_x_forwarded_for": "$http_x_forwarded_for"'
                  '}';

I formatted it one row per parameter in the config file, as it’s easier for me to read, but Nginx will concatenate all those separate strings into one line in the log file. Once this is done, use the format in any of the places where it’s accepted, for example:

access_log /path/to/file/access.log json;

For more details on all this stuff, check out the online documentation for Nginx’s logging module.

Resend Postfix messages stuck in mail queue to another address

July 6, 2012

Update 2015-02-19: Based on suggestions from David Keegel, I’ve tweaked the script to properly hold/unhold messages, and include the sender’s email address in the message envelope.

Ever had somebody give you a bad email address, and the messages pile up in your Postfix mail queue? Even if you have the right address to send the stuck emails to, Postfix (at least on CentOS 5.x based on all my research) provides no easy way to:

  • Resend all those messages to the new address
  • Remove the old stuck messages

We needed this functionality badly for our business, where we unfortunately get bad email addresses given to us all the time. So, using my marginal bash skills and several hours of my time, I whipped up the script below to automate this process.

Caveat: I’ve tested this script extensively with Postfix on CentOS 5.x, but nowhere else, really. So test first, use at your own risk!

#!/bin/bash
# This script allows sending of messages stuck in the Postfix queue to another
# email address. Useful if the original email address is bad.

if [ $# -ne 2 ]; then
  echo "Usage: reroute-queued-email-messages.sh <oldaddress> <newaddress>"
  exit 1
fi

OLD_ADDRESS="$1"
NEW_ADDRESS="$2"

MAILQ=`which mailq`
POSTCAT=`which postcat`
POSTSUPER=`which postsuper`
SENDMAIL=`which sendmail`
TAIL=`which tail`
GREP=`which grep`
AWK=`which awk`
SED=`which sed`

# Pulls queue IDs for all messages sent to the original address.
function get_ids_by_address {
  $MAILQ | $TAIL -n +2 | $GREP -v '^ *(' | $AWK -v address="$1" 'BEGIN { RS = "" } { if ($8 == address) print $1 }'
}

MESSAGE_IDS=`get_ids_by_address $OLD_ADDRESS`

# Loop through each message ID, output the message, and pipe it through
# sendmail to the new address.
for message_id in $MESSAGE_IDS; do
  # Puts the message on hold.
  $POSTSUPER -h $message_id
  # Extract the sender.
  sender=`$POSTCAT -q $message_id | $GREP -m 1 '^From: ' | $SED '/^From: */!d; s///; q'`
  # sed gets rid of the leading queue metadata, grep -v gets rid of the
  # metadata markers.
  $POSTCAT -q $message_id | $SED -n '/MESSAGE CONTENTS/,$p' | $GREP -v "^\*\*\*.*\*\*\*$" | $SENDMAIL -f "$sender" $NEW_ADDRESS
  echo "Delivered message ID $message_id to $NEW_ADDRESS"
done

echo -n "Remove old messages from mail queue? [y/N]: "
read delete
if [ "$delete" = "y" ]; then
  for message_id in $MESSAGE_IDS; do
    # Deletes the message from the mail queue by queue ID.
    $POSTSUPER -d $message_id
    echo "Deleted message ID $message_id from queue"
  done
else
  for message_id in $MESSAGE_IDS; do
    # Takes the message off hold.
    $POSTSUPER -H $message_id
  done
  echo "The following messages were left in the mail queue:"
  echo "$MESSAGE_IDS"
fi

exit 0

db-query-assistant package released now available via npm

April 28, 2012

Today I’m happy to announce the official public release of db-query-assistant.

For those who haven’t read my initial post on the module, here is a quick summary:

  • High-level library for node database drivers.
  • Configurable connection pooling.
  • Issue multiple simultaneous queries, and get all results back in a callback when the last query completes.
  • Issue queries in series, getting the results for each previous query back before executing the next one.
  • Issue transactional queries, with automatic rollback on query failure.

This release includes a fairly comprehensive unit test suite for the core library, using the awesome mocha test framework. I decided against unit tests for the drivers, as I feel those would be better served by integration tests (which I may add in the future).

The module has been published to the npm registry, and a simple npm install db-query-assistant will now do the trick.

Sampler API 6.x 1.1 released

March 25, 2012

The 6.x-1.1 release of Sampler API is now out and ready for download. This has a number of important bugfixes, as well as the addition of locking support so the same metric cannot be run again while already running.

Most of the fixes came out of the work I’m doing to get metrics collection fully deployed on drupal.org.
