Facron 0.9 released

Usually I publish a new post in my “Knowing your system” series every Thursday, but this week I was busy working on facron. I plan to publish part three next Monday; it will be about source-based GNU/Linux distributions.

What’s in this release?

Since I last wrote about it, a few things have changed: the code is more robust and much cleaner, and several improvements have been made:

  • You can now pass arguments containing spaces by surrounding them with single or double quotes in the configuration
  • The fanotify flags handling has slightly changed: flags are now separated by commas
  • $@ is now the dirname of the file, $# the basename ($$ is still the full path)
  • A manual is now provided
  • A systemd service is now provided (supporting the reload action to reload the configuration)
  • You can now pass the --background argument to facron to launch it in the background on non-systemd systems.
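To illustrate the new syntax, here is a hypothetical configuration line — the path, flags, and command are examples of mine, not taken from the release; see man facron for the exact format:

```
# Watch /etc/passwd; on close-after-write or modification, log a message.
# Flags are comma-separated; quoted arguments may contain spaces;
# $@ expands to the dirname, $# to the basename, $$ to the full path.
/etc/passwd FAN_CLOSE_WRITE,FAN_MODIFY /usr/bin/logger "modified: $# in $@"
```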

How do I get it?

The release tarball is available here: https://github.com/Keruspe/facron/downloads.

You must have fanotify included in your kernel (most recent systems should have it by default).
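If you are unsure whether your kernel has it, you can check the kernel configuration for CONFIG_FANOTIFY; the snippet below is a quick sketch that tries the usual config file locations, which vary by distribution:

```shell
# Check whether the running kernel was built with fanotify support
# (CONFIG_FANOTIFY=y); try /proc/config.gz first, then /boot.
status="missing"
if zgrep -q 'CONFIG_FANOTIFY=y' /proc/config.gz 2>/dev/null; then
    status="enabled"
elif grep -q 'CONFIG_FANOTIFY=y' "/boot/config-$(uname -r)" 2>/dev/null; then
    status="enabled"
fi
echo "fanotify support: $status"
```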

Here are the steps you need to run in order to get it up and running:

wget https://github.com/downloads/Keruspe/facron/facron-0.9.tar.xz
tar xf facron-0.9.tar.xz
cd facron-0.9
./configure --sysconfdir=/etc --with-systemdsystemunitdir=/usr/lib/systemd/system
sudo make install

Then just create your configuration file as described in the previous post, or by following the manual (man facron).

When everything is ready, you just have to run

sudo systemctl start facron.service

Or for non-systemd systems

sudo facron --background
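The actual unit file ships with the tarball; as a rough idea of what such a service can look like, here is a minimal sketch — the paths and options are my assumptions, not the shipped file:

```
[Unit]
Description=facron, a fanotify-based cron daemon

[Service]
# Run in the foreground; systemd supervises the process itself.
ExecStart=/usr/bin/facron
# Map `systemctl reload` to the configuration-reload signal.
ExecReload=/bin/kill -USR1 $MAINPID

[Install]
WantedBy=multi-user.target
```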

If you edit the configuration file, you can reload it without restarting the daemon by running

sudo systemctl reload facron.service

Or for non-systemd systems

sudo kill -USR1 $(pidof facron)
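The reload mechanism relies on a standard Unix pattern: the daemon installs a handler for SIGUSR1 and re-reads its configuration when the signal arrives. A minimal shell sketch of that pattern (a toy illustration, not facron's actual code):

```shell
#!/bin/sh
# Toy daemon illustrating reload-on-SIGUSR1 (not facron's actual code).
reloaded=0
reload_config() {
    echo "re-reading configuration"
    reloaded=1
}
trap reload_config USR1
# A real daemon would loop here waiting for events; we signal ourselves
# once just to demonstrate that the handler runs.
kill -USR1 $$
echo "reloaded=$reloaded"
```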

I hope you’ll enjoy it. Feel free to propose new features and/or to contribute!

