Script for generating STIR/SHAKEN certificates (including self-signed)

March 25, 2024

While working on my company’s implementation of STIR/SHAKEN, I ran into a very basic issue – how do I easily create test self-signed SHAKEN certificates?

While I did find a couple of posts on how to do this, I wanted a nice script that would accomplish the following:

  1. Generate either a self-signed certificate or a key/CSR pair for use with an official certificate authority
  2. Allow easy passing of the certificate subject information (Country, State, Locality, etc.)
  3. Create certificate filenames with a unique identifier, to prevent name collisions (by default, the identifier is a UNIX timestamp)

Below is the script I came up with:

Credit to the original source posts, here and here.
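
If you just want the bare-bones openssl flow behind the three goals above, here’s a minimal sketch (not the full script). The subject values and filenames are placeholders, and the TNAuthList extension that real SHAKEN certificates carry is omitted for brevity:

#!/usr/bin/env bash
# Minimal sketch only: generate an EC P-256 key, then either a self-signed
# certificate or a CSR for submission to an approved STI certificate authority.

ID="${ID:-$(date +%s)}"   # unique identifier, defaults to a UNIX timestamp
DAYS="${DAYS:-365}"
SUBJECT="${SUBJECT:-/C=US/ST=Some-State/L=Some-City/O=Example-Telecom/CN=SHAKEN-test}"

# SHAKEN certificates use an EC key on the P-256 curve.
openssl ecparam -name prime256v1 -genkey -noout -out "shaken-${ID}.key"

# Option 1: self-signed certificate for testing.
openssl req -new -x509 -key "shaken-${ID}.key" -subj "${SUBJECT}" \
  -days "${DAYS}" -out "shaken-${ID}.crt"

# Option 2: CSR to submit to a certificate authority instead.
openssl req -new -key "shaken-${ID}.key" -subj "${SUBJECT}" -out "shaken-${ID}.csr"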

FreeSWITCH Dbh 'fetch' convenience functions for Lua

July 1, 2020

freeswitch.Dbh is the method I’ve chosen for connecting to a database via a Lua script in FreeSWITCH.

The interface is simple to use, and provides all the basics needed to communicate with a database via a configured ODBC driver.

One thing I found lacking, however, was a set of ‘fetch’-style helper functions that:

  • Returned a multi-row result in a table of tables
  • Returned a single-row result in a table
  • Returned a single-row, single-column result directly.

Below are the helper functions I came up with. You might also be able to extend the dbh object directly with methods – I just chose to pass the dbh object into a standalone function call.
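
For the flavor of it, here’s a rough sketch of what such wrappers could look like (these aren’t the originals, just an illustration built on dbh:query() and its per-row callback):

-- Rough sketch, not the original helpers: fetch-style wrappers around dbh:query().

-- All rows: returns a table of row tables (iterates the full result set).
local function fetch_all(dbh, sql)
  local rows = {}
  dbh:query(sql, function(row)
    table.insert(rows, row)
  end)
  return rows
end

-- Single row: returns the first row as a table, or nil if there were no rows.
local function fetch_row(dbh, sql)
  local result
  dbh:query(sql, function(row)
    if not result then result = row end
  end)
  return result
end

-- Single value: returns one column from the first row, or nil.
local function fetch_value(dbh, sql, column)
  local row = fetch_row(dbh, sql)
  return row and row[column] or nil
end

-- Usage sketch (the DSN and credentials are placeholders):
-- local dbh = freeswitch.Dbh("odbc://my_dsn:db_user:db_pass")
-- local users = fetch_all(dbh, "SELECT id, name FROM users")
-- local name = fetch_value(dbh, "SELECT name FROM users WHERE id = 1", "name")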

Pacemaker NFSv4 resource agent for Debian

June 5, 2020

If you’re interested in running an NFSv4-only server in a Pacemaker cluster on Debian, you’ll probably find the default nfsserver RA in the Debian-packaged resource agents to be lacking.

It suffers from several issues:

  1. The default behavior is to use the NFS init script to handle starting/stopping/monitoring the NFS server. This is broken in the monitor case as the service is reported as active even if it’s been killed.

  2. Starting/stopping the service is dog slow. I’d venture to say at least half of the total cluster failover time in my setup with the default RA was consumed by starting/stopping the NFS daemon and all the related v2/v3 services.

So if you’re willing to go with NFSv4 only, here’s a much better approach:

  1. Follow the Debian NFS server setup, including the adjustments listed under ‘NFSv4 only’.
  2. Drop the below RA in a valid path for Pacemaker to find it, and make it executable.
  3. Configure your resource(s).
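
As a rough illustration of step 3, a crm shell configuration could look something like this (the resource name, the RA name ocf:local:nfsv4server, and the timeouts are all placeholders; substitute whatever you name the agent below):

# Illustrative only; adjust the names and timeouts to your environment.
crm configure primitive p_nfsserver ocf:local:nfsv4server \
  op start timeout=30s \
  op stop timeout=30s \
  op monitor interval=30s timeout=20s

# Typically you'd also group or order this with the exported filesystem and
# floating IP so the NFS server only runs where the data and address live.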

CAVEATS:

  • Does not work for v2/v3, and will not manage any of the services that are v2/v3 only.
  • Only tested on Debian Buster, YMMV.

Fencing/STONITH IAM configuration on Google Cloud Platform (GCP)

May 24, 2020

UPDATE 2024-01-28: The fence_gce agent now additionally requires the compute.zoneOperations.list permission in the list.instances IAM role. I have updated the code to reflect this change.

Having spent the better part of a day figuring this out, I thought it might be nice to save others some time.

My specific need was to configure STONITH for a Pacemaker cluster deployed on Google Cloud Platform, using fence_gce.

By far the trickiest part was figuring out Google’s extremely robust permissions system to meet this case in the most secure manner. The goal: to allow each node in a two-node cluster to fence the other node (by reset or poweroff), while restricting each node to only be able to fence its peer, and nothing else.

Below I’ll post some pseudo-code that should give you an idea about how to execute this for your particular needs. I deployed this solution using Terraform, but the concepts should be easily adaptable to other use cases.

In a nutshell:

  1. Create two custom IAM Roles (I called them fencing and list.instances):
    • fencing: permissions to reset or stop a server and get access to its state
    • list.instances: permissions to list the available instances in a project (this was a small security sacrifice; I’d prefer that a node couldn’t see other nodes this way, but the fencing agent seems to require it, and as of this writing there’s no clean way on GCP to restrict which nodes are listed via the API)
  2. For each instance in a cluster, create a Service Account, and configure the instance to run as that service account, granting all the default scopes plus the ‘compute-rw’ scope
  3. At the project level, add an IAM policy binding that binds all the instance service accounts to the list.instances role.
  4. At the instance level, add an IAM policy binding that binds the fencing role to the service account of the instance that will fence the instance in question. For example, if you have data1 and data2 instances, then for the data1 instance you’ll bind data2’s service account to the fencing role, and vice versa. This allows each node to fence its peer, but gives it no other fencing permissions, not even for itself.

Pseudo-code:
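
To make the steps above concrete, here’s a rough gcloud sketch of the same bindings (not the Terraform I actually used; project, zone, instance, and service account names are placeholders, and the permission lists should be double-checked against fence_gce’s requirements):

# Illustrative only; names, zone, and permission sets are placeholders.
PROJECT=my-project
ZONE=us-central1-a

# 1. Custom roles.
gcloud iam roles create fencing --project="$PROJECT" \
  --permissions=compute.instances.get,compute.instances.reset,compute.instances.stop
gcloud iam roles create list.instances --project="$PROJECT" \
  --permissions=compute.instances.list,compute.zoneOperations.list

# 2. One service account per node; each instance then runs as its own account
#    (e.g. created with --service-account=... --scopes=default,compute-rw).
gcloud iam service-accounts create data1-sa
gcloud iam service-accounts create data2-sa

# 3. Project-level binding: every node may list instances.
for sa in data1-sa data2-sa; do
  gcloud projects add-iam-policy-binding "$PROJECT" \
    --member="serviceAccount:${sa}@${PROJECT}.iam.gserviceaccount.com" \
    --role="projects/${PROJECT}/roles/list.instances"
done

# 4. Instance-level bindings: each node may fence only its peer.
gcloud compute instances add-iam-policy-binding data1 --zone="$ZONE" \
  --member="serviceAccount:data2-sa@${PROJECT}.iam.gserviceaccount.com" \
  --role="projects/${PROJECT}/roles/fencing"
gcloud compute instances add-iam-policy-binding data2 --zone="$ZONE" \
  --member="serviceAccount:data1-sa@${PROJECT}.iam.gserviceaccount.com" \
  --role="projects/${PROJECT}/roles/fencing"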

Bash script to play a sound file every X seconds

February 2, 2019

As part of my continued efforts to improve awareness of my physical body while I’m hacking away at a keyboard, I got the idea that it would be nice if a pleasant sound played every X seconds to remind me to stop slouching, relax, etc.

I wanted it to be simple, easy to start and stop, and easy to configure both the time interval between plays and the file being played.

Finding nothing of that ilk in some quick Google searches, I present the below for your usage and hopefully increased awareness :)

Run without arguments for the help.
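
For reference, the core loop is about as simple as it sounds. A minimal sketch, assuming a player like paplay is available (swap in aplay, mpg123, etc. as needed):

#!/usr/bin/env bash
# Minimal sketch: play a sound file every INTERVAL seconds until interrupted.
# Usage: ./chime.sh <interval-seconds> <sound-file>

INTERVAL="${1:?Usage: $0 <interval-seconds> <sound-file>}"
SOUND_FILE="${2:?Usage: $0 <interval-seconds> <sound-file>}"

while true; do
  sleep "${INTERVAL}"
  paplay "${SOUND_FILE}"
done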

Debian start a ramdisk on boot

March 13, 2017

I run some servers that use a ramdisk to increase performance for recording audio files. Since ramdisks evaporate on a reboot, and I have services that depend on the ramdisk to operate properly, I needed a way to:

  • Automatically create the ramdisk on boot
  • Ensure it’s set up prior to the services that depend on it
  • Perform cleanup on the ramdisk when the server is shut down

The set of scripts below accomplishes this nicely by leveraging systemd. Here’s a quick rundown of what each does:

  • ramdisk: The script is written as a System V init script, and would be placed at /etc/init/ramdisk. Does the work of:
    • Creating the ramdisk
    • Calling an optionally configured script to perform cleanup prior to tearing down the ramdisk
  • customize: This is an optional file to override the default settings of the first script. Place it at /etc/default/ramdisk with any customizations you like.
  • ramdisk.service: systemd service file that calls the start/stop functions of ramdisk. Place this anywhere systemd recognizes service scripts, and run systemctl daemon-reload.
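
To give a sense of the glue, a minimal ramdisk.service could look something like this sketch (the description and exact directives are illustrative, not the original file):

[Unit]
Description=Ramdisk for audio recordings

[Service]
Type=oneshot
# Stay 'active' after ExecStart exits so dependent services order correctly
# and ExecStop runs the cleanup at shutdown.
RemainAfterExit=yes
ExecStart=/etc/init/ramdisk start
ExecStop=/etc/init/ramdisk stop

[Install]
WantedBy=multi-user.target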

The nice thing about this setup through systemd is that any other systemd services that depend on the ramdisk being set up can simply include this in their service description:

Requires=ramdisk.service
After=ramdisk.service

ICE/CBP regulations summary for domestic travelers

February 28, 2017

The election of Donald Trump has roused me from my political slumber, to be sure…

I’m trying to stay informed, and also signed up for the excellent Daily Action alerts, which keep me plugged in to things I can do every day to make my voice heard.

Today’s alert was in reference to ICE/CBP’s request to see the ID of all passengers on a US domestic flight.

According to the expert cited in that article, it’s highly unlikely that these authorities have a legal right to demand you produce ID under these circumstances.

So today my little action of resistance was to print up a convenient half-page summary of the regulations the expert referred to in that article. Two copies per page, should be small enough to fold up and fit in a wallet, purse, or back pocket.

I’ll be carrying mine around, at least for a while, to remind myself that I don’t want to live in a police state.

HUGE DISCLAIMER:

What I’m sharing is for informational purposes only. I’m not an attorney, and did not deeply vet this information, but simply looked up the US codes referenced in the article and made them a bit easier to carry around.

FreeSWITCH Kickstart, an automated server configuration tool

April 6, 2016

Project background

Several years ago I began the journey to better systematize my server management workflow. At the time I was managing over 20 server instances, almost entirely ‘by hand’, meaning I did things like keeping checklists for the steps to rebuild critical servers, running software updates on each individually, etc. Then, I finally took the time to research server configuration management software, selected Salt as my tool, and completely re-oriented my relationship to dealing with software updates, new server builds, etc.

Managing a server this way means that you don’t change anything about the server directly, but instead write code that makes the changes for you. While this does involve a learning curve (it’s almost always faster to just run a command straight from the command line than write code to run the command for you), the benefits of this approach far outweigh the costs in the long run:

  • Automatically rebuild any server from scratch: As long as you have data backups, it’s no longer a big deal if your server tanks. Rebuilding becomes a glorified version of rebooting your computer when it’s acting weird
  • Consistency in dev -> staging -> production environments: You can be much more sure that the server you’re doing development on is consistent with the server that will run in production. With tools like Vagrant, combined with server configuration management, you can run virtually the identical server on your laptop that you do for your customers.
  • Clear documentation: This cannot be overstated. The exact steps to re-create your mission-critical infrastructure are written in code, which means both certainty about how it works and ease of transferring that knowledge to other maintainers.

Even though this approach is well known by hard core sysadmins, I suspect many people who could profit from it still have not taken the leap, either through ignorance or the barrier of the learning curve.

Fast forward to last year, and I’m working on CircleAnywhere, an online group meditation platform, using FreeSWITCH for some of the video conferencing portions. Naturally the server, being mission-critical to the project, was built using a Salt configuration for all the reasons mentioned above. It was a fair amount of work to hammer out the deployment, and it occurred to me that a lot of the heavy lifting I had done was the same heavy lifting that anyone rolling out a new FreeSWITCH server (whether for local development or in production) had to do. So I decided to do just a little bit more work, and repackage my efforts for the good of others. :)

FreeSWITCH Kickstart

Thus was born FreeSWITCH Kickstart, a Vagrant/Salt configuration for automatically deploying a FreeSWITCH server for either development or production. Installation locally via Vagrant or remotely involves only a few simple steps (the most difficult of which is acquiring valid SSL certificates for production builds).
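
For the local case, that boils down to roughly the following (the clone URL below is just a placeholder; use the real one from the project page):

# Rough sketch of a local development build; the URL is a placeholder.
git clone https://example.com/freeswitch-kickstart.git
cd freeswitch-kickstart
# Review and adjust the example configuration (Salt pillar data, etc.), then:
vagrant up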

The project strives to provide these three things:

  1. An easy way to get up and running with a local FreeSWITCH development environment
  2. A sane/convenient starting point for rolling a production server.
  3. A nice starter template for admins who want their FreeSWITCH server under Salt server configuration management.

A ton of stuff is done automagically for you – even if you don’t like/need it all, it’s still an awesome way to get going!

I’ve written up a fairly comprehensive features list, so you know what you’re getting.

I hope the community will benefit from this work, and doubly hope that others will offer refinements on this first effort. My intention is to maintain this project for the FreeSWITCH community, keeping it up to date with the installation procedure for the latest code and recommended operating system. If anyone would like to join me in maintainership of the project, I would certainly welcome it. :)

Jester, FreeSWITCH, Lua, and my quest for an awesome scripting toolkit

January 20, 2016

Back in 2010, in collaboration with Star2Star Communications, I wrote a Lua scripting library for FreeSWITCH called Jester. Its most marketable highlight was a profile that offered a complete drop-in replacement for Asterisk’s Comedian Mail.

My deeper motivation for writing it was to try to bring some sanity and consistency to the process of implementing more advanced voice workflows. This motivation arose from my own experience of the mishmash of dialplan XML and one-off Lua scripts that still to this day forms the foundation of my first major VoIP system. I began to believe that there must be a better way – if the goal was to do any number of fairly common things, like grab data from a database, send an email, talk to a webservice – can’t we basically standardize those into reusable units? So, Jester…

Fast forward to 2016, and my toolkit, through almost complete lack of maintenance, had become legacy code. In addition, the years had revealed both the strengths and weaknesses of my original efforts, and I wanted to do something about it.

In the last few weeks, I took the first big steps to revive and reshape Jester for a new and improved release. A lot of the hard work is now done:

  • Updated the entire codebase to be compatible with all 5.x versions of Lua, which means it will now run on anything from the terrifically old FreeSWITCH 1.0.7 to the very latest code.

  • Complete refactoring/update of the user documentation, now available online and nicely formatted.

  • Preserved the original Asterisk Comedian Mail replica, so it can still be used as a drop-in replacement for those moving from Asterisk to FreeSWITCH

  • Good progress on handling some of the original architectural issues

Going forward, I’d love to get some other people on board with the project, particularly those interested in helping to turn Jester into a solid, well-used library for the land beyond the dialplan. I think the biggest reason for the initial flaws was that I didn’t have any other eyes on my designs.

Please do contact me if you’re interested in teaming up.

New release of Luchia, Lua API for CouchDB

December 31, 2015

Many years ago, I wrote Luchia, a Lua API for Apache CouchDB.

The library worked great, and had fair unit/integration test coverage, but after many years it was no longer compatible with the newer versions of Lua.

So, I took a few days recently to revive it and roll an updated release. It’s now compatible with Lua versions 5.1, 5.2, and 5.3, and also sports 100% unit test coverage.

Installation via LuaRocks, Lua’s package manager, is a breeze; just run:

luarocks install luchia

You can check out a summary and API usage in the online documentation.

Happy Luchia’ing. :)
