
Posts

Python logging with rich - writing to stderr - plain output when writing to file

Rich is a Python library for writing rich text (with color and style) to the terminal, and for displaying advanced content such as tables, markdown, and syntax-highlighted code. Rich provides `RichHandler`, a logging handler for Python's `logging` module which formats and colorizes text written by the module. However, `RichHandler` writes to stdout by default. More specifically, it writes to a rich `Console` object which, by default, writes to stdout. To make `RichHandler` write to stderr instead, you must pass in a `Console` object which has been configured to write to stderr:

```python
import logging

from rich.console import Console
from rich.logging import RichHandler

DATEFMT = "%Y-%m-%dT%H:%M:%SZ"
FORMAT = "%(message)s"

logging.basicConfig(
    level="NOTSET",
    format=FORMAT,
    datefmt=DATEFMT,
    handlers=[RichHandler(console=Console(stderr=True))],
)

logger = logging.getLogger(__name__)
logger.i...
```

Fix python import order on save in vim with ruff and ale

My IDE of choice is vim. I use various tools to perform linting and code formatting, and configure them all with ALE (the Asynchronous Lint Engine). After using several discrete tools (`black`, `isort`, `flake8`, etc.) I have settled on using Ruff to do my Python code formatting and linting. Here's the relevant fragment of my ALE config in my `.vimrc`:

```vim
" ALE config
let g:ale_fixers = {
\   'python': ['ruff', 'ruff_format'],
\}
let g:ale_linters = {
\   'python': ['ruff'],
\}
let g:ale_python_ruff_use_global = 1
```

One of the last remaining wrinkles I had was getting Ruff to automatically sort import statements. Sorting imports is performed by the Ruff linter, not the formatter, which is documented here. The fix on the command line is to add an option, like this:

```
ruff check --select I --fix
```

The difficulty I had was getting this to happen in the editor when the file was saved. It turns out, all I needed to do was ...

sudo, pipelines, and complex commands with quotes

We've all run into problems like this:

```
$ echo 12000 > /proc/sys/vm/dirty_writeback_centisecs
-bash: /proc/sys/vm/dirty_writeback_centisecs: Permission denied
```

The command fails because the target file is only writable by root. The fix seems obvious and easy:

```
$ sudo echo 12000 > /proc/sys/vm/dirty_writeback_centisecs
-bash: /proc/sys/vm/dirty_writeback_centisecs: Permission denied
```

Huh? It still fails. What gives? It fails because the shell sets up the redirection before running the command under sudo, and the shell is still running as the unprivileged user. The solution is to run the whole pipeline under sudo. There are several ways to do this:

```
echo 'echo 12000 > /proc/sys/vm/dirty_writeback_centisecs' | sudo sh
sudo sh -c 'echo 12000 > /proc/sys/vm/dirty_writeback_centisecs'
echo 12000 | sudo tee /proc/sys/vm/dirty_writeback_centisecs
```

This is fine for simple commands, but what if you have a complex command that already includes quotes and shell meta-characters? Here's what I us...
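For commands that already contain quotes, one approach (a sketch only — the post's own solution is truncated above) is to let Python's `shlex` module add the extra layer of quoting needed for `sudo sh -c`:

```python
import shlex

# A command containing quotes and shell metacharacters (hypothetical example).
cmd = """awk '{print "value:", $1}' /proc/loadavg > /tmp/load.txt"""

# shlex.quote wraps the string so it survives one extra level of shell
# parsing intact; the result can be appended to `sudo sh -c`.
quoted = shlex.quote(cmd)
print("sudo sh -c " + quoted)
```

The printed line can be pasted into a terminal: the outer shell strips the quoting that `shlex.quote` added, and the shell started by sudo sees the original command, quotes and all.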

Conditionally running cron tasks based on arbitrary conditions

Volcane recently asked in ##infra-talk on Freenode if anyone knew of "some little tool that can be used in a cronjob for example to noop the real task if say load avg is high or similar?" I came up with the idea of using nagios plugins. So, for example, to check the load average before running a task:

```
/usr/lib64/nagios/plugins/check_load -w 0.7,0.6,0.5 -c 0.9,0.8,0.7 >/dev/null && echo "Run the task here"
```

Substitute the values used for the -w and -c args as appropriate, or use a different plugin for different conditions.
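If nagios plugins aren't installed, the same guard can be sketched in a few lines of Python with `os.getloadavg()` (the threshold here is an arbitrary placeholder, analogous to check_load's first `-w` value):

```python
import os
import subprocess

# 1-, 5-, and 15-minute load averages, as reported by the kernel.
load1, load5, load15 = os.getloadavg()

# Arbitrary warning threshold -- tune to taste.
THRESHOLD = 0.7

if load1 < THRESHOLD:
    # Run the real task only when the machine is quiet.
    subprocess.run(["echo", "Run the task here"], check=True)
# else: noop -- silently skip the task when load is high.
```

Saved as a wrapper script, this can be called from a crontab entry in place of the task itself.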

Atomic Deployment of Puppet environments

In a previous post, I described how to express puppet environments, roles, and profiles as modules, and how to use r10k and librarian-puppet to deploy them. One possible problem with deploying to the puppet environment directory directly is that the librarian-puppet run can take some time, and there is a possibility that puppet may attempt to compile a catalogue in an incomplete or inconsistent environment. One way to overcome this is to deploy the environments into a new directory, create a symlink, and move the symlink atomically into place. This would look something like this:

```
cd /etc/puppet/envs
# create a new dir under /etc/puppet/envs - I use a timestamp in the name so I know when it was created
NEW_ENV_DIR=$(mktemp --directory "envs.$(date -Isec).XXX")
cd /etc/puppet
# use r10k to deploy the environments into the new dir
PUPPETFILE_DIR="envs/${NEW_ENV_DIR}" r10k puppetfile install
# loop over all the environments and use librarian-puppet to deploy all the roles/pro...
```
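The atomic symlink swap at the heart of this technique can be sketched in Python (function name and paths are illustrative, not from the post): create the new symlink under a temporary name in the same directory, then rename it over the old one — the rename is atomic on POSIX filesystems, so puppet always sees either the complete old environment or the complete new one.

```python
import os

def atomic_symlink_swap(target: str, link_name: str) -> None:
    """Repoint link_name at target atomically.

    A temporary symlink is created alongside link_name, then renamed
    over it. os.replace() maps to rename(2), which is atomic when the
    source and destination are on the same filesystem.
    """
    link_dir = os.path.dirname(link_name) or "."
    # Unique temporary name in the same directory (hence same filesystem).
    tmp = os.path.join(link_dir, f".{os.path.basename(link_name)}.tmp{os.getpid()}")
    os.symlink(target, tmp)
    os.replace(tmp, link_name)  # atomically replaces any existing symlink
```

Applied to this post, `target` would be the freshly populated `envs.<timestamp>.XXX` directory and `link_name` the directory configured as puppet's environment path.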

Recursive deployment of puppet environments with r10k and librarian-puppet

By treating roles and profiles as puppet modules, we can use r10k and librarian-puppet to manage the deployment of our puppet code into our puppet environments. I shall assume that puppet is configured to use directory environments and that the environment path is $confdir/environments (ie. the default location). I also assume that both r10k and librarian-puppet are installed and in the path. You should also understand and embrace the role-profile-module pattern, first described by Craig Dunn and subsequently by Adrian Thebo and Gary Larizza. Quoting Gary:

- Roles abstract profiles
- Profiles abstract component modules
- Hiera abstracts configuration data
- Component modules abstract resources
- Resources abstract the underlying OS implementation

I find the following points useful to clarify the purpose of each of the layers in this model: Roles, profiles, and component modules can all be implemented as...

Finding "old" nodes in puppetdb

We're using puppet + puppetdb in an EC2 environment where nodes come and go quite regularly. We have a custom autosign script that uses EC2 security info to validate the nodes before allowing the autosigning. This is all good, but it can leave a lot of "dead" nodes in puppet, eg. if a bunch of nodes are created by an autoscale policy and then terminated. To get rid of these zombie nodes from puppet/puppetdb we can just use:

```
puppet node deactivate <certname1> <certname2> ... <certnameN>
```

We can query puppetdb to get a list of nodes that have not sent puppet reports for, say, 24 hours. The puppetdb query we need is something like this:

```
'query=["
```

where $cutoff_date is a date in ISO8601 format, eg. 2015-03-05T13:39:45+0000. We can use date to generate the cutoff date with something like this:

```
cutoff_date=$(date -d '-1 day' -Isec)
```

We then plug this into the query string and send it with curl as follows:

```
curl --silent -G 'http://l...
```
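The cutoff-date calculation translates directly to Python's `datetime` module — a sketch of the same "24 hours ago, in ISO8601" computation performed above with `date -d '-1 day' -Isec`:

```python
from datetime import datetime, timedelta, timezone

# 24 hours ago, ISO8601 with a UTC offset, e.g. 2015-03-05T13:39:45+00:00
cutoff_date = (
    datetime.now(timezone.utc) - timedelta(days=1)
).isoformat(timespec="seconds")
print(cutoff_date)
```

The resulting string can be interpolated into the puppetdb query in place of $cutoff_date before sending it with curl.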

Test for undefined fact in puppet with strict_variables

One of my very early frustrations with puppet was that it allows variables to be used when they are undefined. Primarily this bit me by not catching typos in variable names, which were often very hard to track down. I was very pleased when Puppetlabs introduced a strict_variables mode which throws an error if a manifest attempts to use an undefined variable. I recently needed to check for the existence of a fact. Without strict_variables, this is straightforward:

```
if $::some_fact {
  # do stuff here
}
```

If the fact "some_fact" exists, the variable is a non-empty string and evaluates as true in boolean context. If the fact doesn't exist, the variable is an empty string, which evaluates as false in boolean context. But with strict_variables enforced, this throws an error:

```
Error: Undefined variable "::some_fact"; Undefined variable "some_fact" at line ...
```

The solution is to use the getvar function from stdlib:

```
if getvar('::some_fact'...
```