While working on my company’s implementation of STIR/SHAKEN, I ran into a very basic issue – how do I easily create test self-signed SHAKEN certificates?
While I did find a couple of posts on how to do this, I wanted a nice script that would accomplish the following:
Generate either a self-signed certificate or a key/CSR pair for use with an official certificate authority
Allow easy passing of the certificate subject information (Country, State, Locality, etc.)
Create certificate filenames with a unique identifier, to prevent name collisions (by default, the identifier is a UNIX timestamp)
Below is the script I came up with:
Credit to the original source posts, here and here.
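For readers who want a starting point before adapting the full script, the three requirements can be sketched with openssl like so. This is a minimal sketch, not the script from the post – the subject values, filename scheme, and the `SELF_SIGN` toggle are my own illustrative assumptions:

```shell
#!/bin/sh
# Minimal sketch of the requirements above -- subject values, filenames, and
# the SELF_SIGN toggle are illustrative assumptions, not the author's script.
set -e
ID="${ID:-$(date +%s)}"   # unique suffix (UNIX timestamp) to avoid collisions
SUBJECT="/C=US/ST=California/L=San Jose/O=Example Telco/CN=SHAKEN test cert"

# SHAKEN signs PASSporTs with ES256, so generate an ECDSA P-256 key
openssl ecparam -name prime256v1 -genkey -noout -out "shaken-${ID}.key"

if [ "${SELF_SIGN:-yes}" = yes ]; then
    # Self-signed test certificate
    openssl req -new -x509 -days 365 -key "shaken-${ID}.key" \
        -subj "$SUBJECT" -out "shaken-${ID}.crt"
else
    # Key/CSR pair for submission to an official certificate authority
    openssl req -new -key "shaken-${ID}.key" \
        -subj "$SUBJECT" -out "shaken-${ID}.csr"
fi
```

Note that a real SHAKEN certificate also carries the TNAuthList extension (RFC 8226); adding that requires an extensions section in an openssl config file, which this sketch omits.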
The interface is simple to use, and provides all the basics needed to communicate with a database via a configured ODBC driver.
One thing I found lacking, however, was ‘fetch’-style helper functions that:
Returned a multi-row result in a table of tables
Returned a single-row result in a table
Returned a single-row, single-column result directly.
Below are the helper functions I came up with. You might also be able to extend the dbh object directly
with methods – I just chose to pass the dbh object into a standalone function call.
If you’re interested in running an NFSv4-only server in a Pacemaker cluster on Debian, you’ll probably find the default nfsserver RA in the Debian-packaged resource agents to be lacking.
It suffers from several issues:
The default behavior is to use the NFS init script to handle
starting/stopping/monitoring the NFS server. This is broken in the monitor
case as the service is reported as active even if it’s been killed.
Starting/stopping the service is dog slow. I’d venture to say at least
half of the total cluster failover time in my setup with the default RA was
consumed by starting/stopping the NFS daemon and all the related v2/v3
services.
So if you’re willing to go with NFSv4 only, here’s a much better approach:
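Setting the full script aside, the general shape of an NFSv4-only approach looks something like this (a sketch under assumptions, not the exact resource configuration from the post): disable the legacy protocol versions and let Pacemaker drive the systemd unit directly, so monitoring reflects the actual daemon state:

```shell
# Sketch only -- adapt to your cluster. Option and unit names below are the
# stock nfs-utils/systemd ones, not taken from the post's script.

# 1) In /etc/nfs.conf, serve NFSv4 only:
#      [nfsd]
#      vers3=n
# 2) Manage nfs-server through its systemd unit instead of the init script,
#    so a killed daemon is reported as failed by the monitor op:
crm configure primitive nfs-server systemd:nfs-server \
    op monitor interval=30s
```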
UPDATE 2024-01-28: The fence_gce agent now additionally requires the compute.zoneOperations.list permission in the list.instances IAM role. I have updated the code to reflect this change.
Having spent the better part of a day figuring this out, I thought it might be
nice to save others some time.
By far the trickiest part was figuring out how to use Google’s extremely granular permissions
system to handle this case in the most secure manner. The goal: to allow each node
in a two-node cluster to fence the other node (by reset or poweroff), while restricting
each node to only be able to fence its peer, and nothing else.
Below I’ll post some pseudo-code that should give you an idea of how to execute
this for your particular needs. I deployed this solution using
Terraform, but the concepts should be easily adaptable
to other use cases.
In a nutshell:
Create two custom IAM Roles (I called them fencing and list.instances):
fencing: permissions to reset or stop a server and get access to its state
list.instances: permissions to list the available instances in a project
(this was a small security sacrifice; I’d prefer that a node couldn’t see other
nodes this way, but the fencing agent seems to require it, and as of this
writing there’s no clean way on GCP to restrict which nodes the API lists)
For each instance in a cluster, create a Service Account, and configure the
instance to run as that service account, granting all the default scopes plus
the ‘compute-rw’ scope.
At the project level, add an IAM policy binding that binds all the instance
service accounts to the list.instances role.
At the instance level, add an IAM policy binding that binds the fencing
role to the instance service account for the instance that will fence the
instance in question. For example, if you have data1 and data2 instances,
then for the data1 instance you’ll bind data2’s service account to the
fencing role, and vice-versa. This allows each node to fence its peer,
but to have no other permissions for fencing, not even for itself.
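The author deployed this with Terraform, but as a hedged illustration, the same steps can be sketched with the gcloud CLI. The project ID, zone, and service-account names are placeholders, and the listing role ID uses an underscore in place of the post’s dot:

```shell
PROJECT=my-project      # placeholder project ID
ZONE=us-central1-a      # placeholder zone

# Custom roles: fencing (reset/stop/state) and instance listing
gcloud iam roles create fencing --project="$PROJECT" \
  --permissions=compute.instances.get,compute.instances.reset,compute.instances.stop
gcloud iam roles create list_instances --project="$PROJECT" \
  --permissions=compute.instances.list,compute.zoneOperations.list

# One service account per node
gcloud iam service-accounts create data1-sa
gcloud iam service-accounts create data2-sa

# Project level: every node's service account may list instances
gcloud projects add-iam-policy-binding "$PROJECT" \
  --member="serviceAccount:data1-sa@${PROJECT}.iam.gserviceaccount.com" \
  --role="projects/$PROJECT/roles/list_instances"

# Instance level: data2's service account may fence data1
# (repeat with the names swapped for the other direction)
gcloud compute instances add-iam-policy-binding data1 --zone="$ZONE" \
  --member="serviceAccount:data2-sa@${PROJECT}.iam.gserviceaccount.com" \
  --role="projects/$PROJECT/roles/fencing"
```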
As part of my continued efforts to improve awareness of my physical body
while I’m hacking away at a keyboard, I got the idea that it would be nice if
a pleasant sound played every X seconds to remind me to stop slouching, relax,
etc.
I wanted it to be simple, easy to start and stop, and easy to configure both
the time interval between plays and the file being played.
Finding nothing of that ilk in some quick Google searches, I present the below
for your usage and hopefully increased awareness :)
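The gist of such a script can be sketched in a few lines of POSIX shell. This is my own minimal sketch, not the script from the post – the interval, sound file, and player command are illustrative defaults, and the optional play count exists only to make it easy to test:

```shell
# Hedged sketch, not the author's script: play a sound every N seconds.
# Arguments: interval seconds, sound file, player command, optional play
# count (0 = run until killed). All defaults are illustrative assumptions.
remind() {
    interval="${1:-300}"
    sound="${2:-$HOME/chime.wav}"
    player="${3:-paplay}"          # or aplay / afplay, per platform
    count="${4:-0}"
    i=0
    while [ "$count" -eq 0 ] || [ "$i" -lt "$count" ]; do
        sleep "$interval"
        "$player" "$sound"
        i=$((i + 1))
    done
}
```

Run it in the background with e.g. `remind 600 ~/chime.wav paplay &` and stop it with `kill %1`.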
I run some servers that use a ramdisk
to increase performance for recording audio files. Since ramdisks evaporate on
a reboot, and I have services that depend on the ramdisk to operate properly,
I needed a way to:
Automatically create the ramdisk on boot
Ensure it’s set up prior to the services that depend on it
Perform cleanup on the ramdisk when the server is shut down
The set of scripts below accomplish this nicely by leveraging
systemd. Here’s a quick
rundown of what each does:
ramdisk: The script is written as a System V init script, and would be
placed at /etc/init.d/ramdisk. It does the work of:
Creating the ramdisk
Calling an optionally configured script to perform cleanup prior to tearing
down the ramdisk
customize: This is an optional file to override the default settings of
the first script. Place it at /etc/default/ramdisk with any
customizations you like.
ramdisk.service: systemd service file that calls the start/stop
functions of ramdisk. Place this anywhere systemd recognizes
service scripts, and run systemctl daemon-reload.
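The core of such an init script boils down to a tmpfs mount. As a hedged sketch (the mount point, size, and cleanup-hook variable are my assumptions – the kind of thing the customize file would override):

```shell
# Hedged sketch of the init script's core; mount point, size, and the
# cleanup-hook variable are illustrative assumptions, not the post's values.
RAMDISK_MOUNT="${RAMDISK_MOUNT:-/mnt/ramdisk}"
RAMDISK_SIZE="${RAMDISK_SIZE:-512m}"
RAMDISK_CLEANUP="${RAMDISK_CLEANUP:-}"   # optional pre-teardown script

start() {
    mkdir -p "$RAMDISK_MOUNT"
    mount -t tmpfs -o "size=$RAMDISK_SIZE" tmpfs "$RAMDISK_MOUNT"
}

stop() {
    # run the optional cleanup hook before tearing the ramdisk down
    if [ -n "$RAMDISK_CLEANUP" ] && [ -x "$RAMDISK_CLEANUP" ]; then
        "$RAMDISK_CLEANUP"
    fi
    umount "$RAMDISK_MOUNT"
}
```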
The nice thing about this setup through systemd is that any other systemd
services that depend on the ramdisk being set up can simply include this in
their service description:
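Given the ramdisk.service name above, that dependency stanza is the standard systemd Requires/After pair:

```ini
[Unit]
Requires=ramdisk.service
After=ramdisk.service
```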
The election of Donald Trump has roused me from my political slumber, to be
sure…
I’m trying to stay informed, and also signed up for the excellent
Daily Action alerts, which keeps me plugged in to
things I can do every day to make my voice heard.
According to the expert cited in that article, it’s highly unlikely that these
authorities have a legal right to demand you produce ID under these
circumstances.
So today my little action of resistance was to print up a convenient
half-page summary of the regulations
the expert referred to in that article. Two copies per page, should be small
enough to fold up and fit in a wallet, purse, or back pocket.
I’ll be carrying mine around, at least for a while, to remind myself that I
don’t want to live in a police state.
HUGE DISCLAIMER:
What I’m sharing is for informational purposes only. I’m not an attorney,
and did not deeply vet this information, but simply looked up the US codes
referenced in the article and made them a bit easier to carry around.
Several years ago I began the journey to better systematize my server management workflow. At the time I was managing over 20 server instances, almost entirely ‘by hand’, meaning I did things like keeping checklists for the steps to rebuild critical servers, running software updates on each individually, etc. Then, I finally took the time to research server configuration management software, selected Salt as my tool, and completely re-oriented my relationship to dealing with software updates, new server builds, etc.
Managing a server this way means that you don’t change anything about the server directly, but instead write code that makes the changes for you. While this does involve a learning curve (it’s almost always faster to just run a command straight from the command line than write code to run the command for you), the benefits of this approach far outweigh the costs in the long run:
Automatically rebuild any server from scratch: As long as you have data backups, it’s no longer a big deal if your server tanks. Rebuilding becomes a glorified version of rebooting your computer when it’s acting weird.
Consistency in dev -> staging -> production environments: You can be much more sure that the server you’re doing development on is consistent with the server that will run in production. With tools like Vagrant, combined with server configuration management, you can run virtually the identical server on your laptop that you do for your customers.
Clear documentation: This cannot be overstated. The exact steps to re-create your mission-critical infrastructure are written in code, which means both certainty about how it works and easy transfer of that knowledge to other maintainers.
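To make the “write code instead of running commands” idea concrete, here’s a hedged illustration (not from the author’s configuration) of a minimal Salt state declaring that a package be installed and its service kept running:

```yaml
# webserver.sls -- a declarative Salt state (nginx is just an example target)
nginx:
  pkg.installed: []
  service.running:
    - require:
      - pkg: nginx
```

Applying this state repeatedly is safe: Salt only makes the changes needed to reach the declared end state.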
Even though this approach is well known by hard core sysadmins, I suspect many people who could profit from it still have not taken the leap, either through ignorance or the barrier of the learning curve.
Fast forward to last year, and I’m working on CircleAnywhere, an online group meditation platform, using FreeSWITCH for some of the video conferencing portions. Naturally the server, being mission-critical to the project, was built using a Salt configuration for all the reasons mentioned above. It was a fair amount of work to hammer out the deployment, and it occurred to me that a lot of the heavy lifting I had done was the same heavy lifting that anyone rolling out a new FreeSWITCH server (whether for local development or in production) had to do. So I decided to do just a little bit more work, and repackage my efforts for the good of others. :)
FreeSWITCH Kickstart
Thus was born FreeSWITCH Kickstart, a Vagrant/Salt configuration for automatically deploying a FreeSWITCH server for either development or production. Installation locally via Vagrant or remotely involves only a few simple steps (the most difficult of which is acquiring valid SSL certificates for production builds).
The project strives to provide these three things:
An easy way to get up and running with a local FreeSWITCH development environment
A sane/convenient starting point for rolling a production server.
A nice starter template for admins who want their FreeSWITCH server under Salt server configuration management.
A ton of stuff is done automagically for you – even if you don’t like/need it all, it’s still an awesome way to get going!
I’ve written up a fairly comprehensive features list, so you know what you’re getting.
I hope the community will benefit from this work, and doubly hope that others will offer refinements on this first effort. My intention is to maintain this project for the FreeSWITCH community, keeping it up to date with the installation procedure for the latest code and recommended operating system. If anyone would like to join me in maintainership of the project, I would certainly welcome it. :)
My deeper motivation for writing it was to try to bring some sanity and consistency to the process of implementing more advanced voice workflows. This motivation arose from my own experience of the mishmash of dialplan XML and one-off Lua scripts that still to this day forms the foundation of my first major VoIP system. I began to believe that there must be a better way – if the goal was to do any number of fairly common things, like grab data from a database, send an email, talk to a webservice – can’t we basically standardize those into reusable units? So, Jester…
Fast forward to 2016, and my toolkit, through almost complete lack of maintenance, had become legacy code. In addition, the years had revealed both the strengths and weaknesses of my original efforts, and I wanted to do something about it.
In the last few weeks, I took the first big steps to revive and reshape Jester for a new and improved release. A lot of the hard work is now done:
Updated the entire codebase to be compatible with all 5.x versions of Lua, which means it will now run on anything from the terrifically old FreeSWITCH 1.0.7 to the very latest code.
Preserved the original Asterisk Comedian Mail replica, so it can still be used as a drop-in replacement for those moving from Asterisk to FreeSWITCH
Good progress on handling some of the original architectural issues
Going forward, I’d love to get some other people on board with the project, particularly those interested in helping to turn Jester into a solid, well-used library for the land beyond the dialplan. I think the biggest reason for the initial flaws was that I didn’t have any other eyes on my designs.
Please do contact me if you’re interested in teaming up.
The library worked great, and had fair unit/integration test coverage, but after many years it was no longer compatible with the newer versions of Lua.
So, I took a few days recently to revive it and roll an updated release. It’s now compatible with Lua versions 5.1, 5.2, and 5.3, and also sports 100% unit test coverage.
Installation via LuaRocks, Lua’s package manager, is a breeze, just do: