If you’re in information technology, you’ll likely agree that logging is important. It helps you monitor a system, troubleshoot issues, and generally provides useful feedback about the system’s state. But it’s important to do logging right.
In this handbook, I’ll explain what the syslog protocol is and how it works. You’ll learn about syslog’s message formats, how to configure rsyslog to redirect messages to a centralized remote server both using TLS and over a local network, how to redirect data from applications to syslog, how to use Docker with syslog, and more.
Table of Contents
- Prerequisites
- Introduction
- What is syslog?
- How to Configure rsyslog to Redirect Messages to a Centralized Remote Server using TLS
- How to Configure rsyslog to Redirect Messages to a Centralized Remote Server Over a Local Network
- Other Possibilities for Log Forwarding
- How to Redirect Data from Applications to syslog
- Conclusion
Prerequisites
In this guide we will discuss syslog and its associated concepts. While I will explain most of the topics we come across, you should have foundational knowledge of the following:
- using the Linux terminal (such as navigating the directory tree, creating and editing files, changing file permissions, etc.)
- a basic understanding of networks (domain name, host, IP address, TLS/SSL, TLS certificate, private/public key, and so on).
Introduction
Every system/application might provide its logs in different formats. If you have to work with many such systems and maintain them, it’s important to deal with logs in a centralized, manageable, and scalable way.
First of all, it’s useful to gather all the logs from the applications on your machine into one place for later processing.
Having collected all the logs in one place, you can now move on to processing them. But what if your machine is just a single node out of a group of servers? In this case, local log processing gives you insights about this single node but certainly not all of them.
Now you may very well want to transfer all the gathered logs to a central server which parses all the records, discovers any issues and inconsistencies, fires alerts, and finally stores the logs for future forensic analysis.
Note the convenience of having a central point of access to all your logs. You don’t have to run around from machine to machine, searching for appropriate information and manually overlaying different log files.
So, to achieve the above, you can leverage the syslog protocol and use a very popular syslog daemon called rsyslog to collect all the logs and forward them to a remote server for further processing in a secure and reliable fashion.
And that’s exactly the example that I want to present in this tutorial to showcase a common and important use case of syslog. I’ll give those who are not familiar with it a first taste of the problems it can solve.
So we’ll explore this scenario, with examples of redirecting logs from host applications, Docker containers, and Node.js and Python clients, in this article.
But first, you need to understand the terminology around syslog as it’s often shrouded in myths, mystery, and filled to the brim with confusion. Well, maybe I’m being overly dramatic here, but you get my point: terminology is important.
What is syslog?
Nowadays there is a lot of uncertainty as to what the word syslog actually refers to, so let’s clear it up:
Syslog protocol
Syslog is a system logging protocol that is a standard for message logging. It defines a common message format to allow systems to send messages to a logging server, whether it is set up locally or over the network. The logging server will then handle further processing or transport of the messages.
As long as the format of your messages is compliant with this protocol, you just have to pass them to a logging server (or, put differently, a logging daemon which we will talk about shortly) and forget about them.
The transport of the messages, rotation, processing, and enrichment will, from that point on, be handled by the logging server and the infrastructure it connects to. Your application does not have to know or deal with any of that. Thus we get a decoupled architecture (log handling is separated from the application).
But the main point of the syslog protocol is, of course, standardization. First of all, it is much easier to parse the logs when all the applications adhere to some common standard that generates the logs in the same format (or more or less the same, but let’s not jump the gun just yet).
If your logs have a common format, it's first of all easy to filter the records by a particular time window or by the respective log levels (also referred to as severity levels: for example, info, warning, error, and so on).
Secondly, you may have a lot of different applications that implement log transport themselves. In that case, you'd have to spend quite some time skimming through the docs, figuring out how to configure file logging, log rotation, or log forwarding for every application, instead of just configuring it once in your syslog server and expecting all your applications to simply submit their logs to it.
Syslog daemons
Now that you understand that syslog is a protocol that specifies the common format of log messages produced by the applications, we can talk a bit about syslog daemons. They’re essentially logging servers or log shippers designed to take in log records, convert them to syslog format (if they are not already converted), and handle data transformations, enrichment, and transport to various destinations.
One of the earliest implementations of a syslog daemon for Linux was referred to simply as syslog (leading to much confusion) or sysklogd. Later, more modern and commonly used implementations such as rsyslog or syslog-ng emerged. These were also made for Linux specifically.
But if you are interested in a cross-platform syslog daemon, which can also be used on macOS, Android, or even Windows, you can take a look at nxlog.
In the later sections of this handbook, we will see multiple practical examples of working with syslog. For this we will use rsyslog, which is a lightweight and highly performant syslog daemon with a wide range of features. It typically comes preinstalled on many Linux distributions (both Debian- and RedHat-based).
Rsyslog, like many other syslog daemons, listens to the /dev/log Unix socket by default. It forwards the incoming data to the /var/log/syslog file on Debian-based distributions or to /var/log/messages on RedHat-based systems.
Some specific log messages are also stored in other files in /var/log, but the bottom line is that all of this can, of course, be configured to suit your needs.
Now you know about the true meaning of syslog and syslog daemons. But there is one important caveat. In many cases (and during the course of this guide as well), saying syslog colloquially refers to the syslog daemon as well as the infrastructure around it (Unix sockets, files in /var/log, as well as other daemons if the messages are forwarded across the network). So, saying "publish a message to syslog" means sending a message to the /dev/log Unix socket, where it will be intercepted and processed by a syslog daemon according to its settings.
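To make this concrete, here is a minimal check you can run on most Linux machines (assuming a syslog daemon such as rsyslog is running and writing to the usual files):
logger "hello from the shell"   # writes a message to the /dev/log socket
tail -n 3 /var/log/syslog       # or /var/log/messages on RedHat-based systems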
Syslog message formats
Syslog defines a certain format (structure) of the log records. Applications working with syslog should adhere to this format when logging to /dev/log. From there, syslog daemons will pick up the messages, and parse and process them according to their configuration.
There are two types of syslog formats: the original old BSD format which came from the early versions of BSD Unix systems and became a standard with RFC3164 specification, as well as a newer one from RFC5424.
RFC3164 format
This format consists of the following 3 parts: PRI, HEADER (TIMESTAMP, HOSTNAME), MSG (TAG, CONTENT). Here is a more concrete example (taken directly from RFC3164, by the way):
<34>Oct 11 22:14:15 mymachine su: 'su root' failed for lonvick on /dev/pts/8
Let’s see what’s going on here:
- <34> (PRI) – the priority of the log record, which consists of the facility code multiplied by 8 plus the severity level. We will talk about facilities and severity levels soon, but in the example above we get facility 4 (34 // 8 = 4) and a critical severity level (34 % 8 = 2). A short calculation follows this list.
- Oct 11 22:14:15 (TIMESTAMP) – a timestamp in local time without the year, milliseconds, or a timezone portion. It follows the format string "Mmm dd hh:mm:ss".
- mymachine (HOSTNAME) – the hostname, IPv4, or IPv6 address of the machine that the message originates from.
- su (TAG) – the name of the program or process that generated the message. Any non-alphanumeric character terminates the TAG field and is assumed to be the starting part of the next (CONTENT) field. In our case, it is a colon (":") character. But it could also have been just a space, or even square brackets with the PID (process ID) inside, such as "[123]".
- : 'su root' failed for lonvick on /dev/pts/8 (CONTENT) – the actual message of the log record.
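To make the PRI arithmetic explicit, here is the same encoding and decoding done with plain shell arithmetic:
echo $(( 4 * 8 + 2 ))   # encode: facility 4, severity 2 (critical) -> PRI 34
echo $(( 34 / 8 ))      # decode the facility: 4
echo $(( 34 % 8 ))      # decode the severity: 2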
As you can see, RFC3164 doesn’t provide a lot of structural information, and has some limitations and inconveniences such as a restricted timestamp or certain variability and uncertainty (for example, in the delimiters after the TAG field). Also, the RFC3164 format stipulates that only ASCII encoding is supported.
All the above is actually the result of RFC3164 not being a set-in-stone strict standard, but rather a best-effort generalization of some syslog implementations that already existed at the time.
RFC5424 format
RFC5424 presents an upgraded and more structured format which deals with some of the problems found in RFC3164.
It consists of the following parts: HEADER (PRI, VERSION, TIMESTAMP, HOSTNAME, APP-NAME, PROCID, MSGID), STRUCTURED DATA (SD-ELEMENTS (SD-ID, SD-PARAM)), MSG. Below is an example:
<34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 - BOM'su root' failed for lonvick on /dev/pts/8
- <34> (PRI) – priority of the log record: a combination of severity and facility, same as for RFC3164.
- 1 (VERSION) – version of the syslog protocol specification. This number is supposed to be incremented by any future specification that makes changes to the HEADER part.
- 2003-10-11T22:14:15.003Z (TIMESTAMP) – a timestamp with year, sub-second information, and timezone portions. It follows the ISO 8601 standard format as described in RFC3339 with some minor restrictions, like not using leap seconds, always requiring the "T" delimiter, and upper-casing every character in the timestamp. The NILVALUE ("-") is used if the syslog application cannot obtain system time (that is, it doesn't have access to the time on the server).
- mymachine.example.com (HOSTNAME) – FQDN, hostname, or the IP address of the log originator. The NILVALUE may also be used when the syslog application does not know the originating host name.
- su (APP-NAME) – device or application that produced the message. The NILVALUE may be used when the syslog application is not aware of the application name of the log producer.
- "-" (PROCID) – an implementation-dependent value often used to provide a process name or process ID of the application that generated the message. The NILVALUE should be used when this field is not provided.
- ID47 (MSGID) – a field used to identify the type of message. Should contain the NILVALUE when not used.
- "-" (STRUCTURED DATA) – provides sections with key-value pairs conveying additional metadata about the message. The NILVALUE should be used when structured data is not provided. Example: [exampleSection@32473 iut="3" eventSource="Application" eventID="1011"][exampleSection2@32473 class="high"]. In practice the STRUCTURED DATA part is rarely used, and the metadata information is usually put into the MSG part, which many applications structure as JSON.
- BOM'su root' failed for lonvick on /dev/pts/8 (MSG) – the actual message of the log record. The "BOM" at the beginning is an unprintable character which signifies that the rest of the payload is UTF-8 encoded. If this character is not present, then other encodings like ASCII can be assumed by the syslog daemons.
RFC5424 has a more convenient timestamp format and many more structured parts that you can use to attach all sorts of metadata to your log messages.
Also, the specification was made to be extendable with the VERSION field. Even though I am not aware of any particular syslog specification extensions that make use of the latter and increment the version, the possibility is always there.
Finally, the new format supports UTF8 encoding and not just ASCII.
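If you want to see both formats on the wire yourself, recent util-linux versions of the logger utility can submit a test message in either format. This is a hedged sketch: the --rfc3164/--rfc5424 options apply to remote submission, so it assumes a syslog daemon listening on UDP port 514 of the target host (here the local machine):
logger --server 127.0.0.1 --port 514 --udp --rfc3164 -t demo "a BSD-style test message"
logger --server 127.0.0.1 --port 514 --udp --rfc5424 -t demo "an RFC5424-style test message"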
Notice that if a message directed to /dev/log does not follow one of the described syslog formats, it will still be processed by daemons such as rsyslog. Rsyslog will try to parse such records according to either its defaults or custom templates.
Templates are a separate topic on their own that needs its own article, so we will not be focusing on them now.
By default, rsyslog will treat such messages as unstructured data and process them as they are. It will try filling in the gaps like a timestamp field, severity level, and so on to the best of its ability and in accordance with its default parameters (for example, the timestamp will become the current time, the facility will be "user", and the severity level will be "info").
When inspecting the log records in /var/log/messages or /var/log/syslog (depending on the system), you will probably see a different format from those described above. For rsyslog it looks like this:
Feb 19 10:01:43 mymachine systemd[1]: systemd-hostnamed.service: Deactivated successfully.
This is just the format that rsyslog uses to display ingested messages that it saved to disk, and not a standard syslog format. You can find this format in the rsyslog.conf file or in the official documentation under the name RSYSLOG_TraditionalFileFormat. But you can always configure how rsyslog outputs its messages yourself using templates.
One important aspect to understand is that rsyslog processes messages as they come. It receives them and immediately forwards them to the specified destinations or saves them locally to files, such as /var/log/messages. Once the messages are fully processed, rsyslog does not retain any metadata about them apart from what it stored to log files.
This means that if records in /var/log/messages are stored in the traditional rsyslog format presented above, they will not keep, for example, their initial PRI value. While PRI and other data are accessible to rsyslog internally when processing and routing messages, not all of this information is stored in the log files by default.
Syslog log levels
Syslog supports the following log levels, referred to as severity levels in syslog's terminology:
- 0 – Emergency (emerg): the system is unusable
- 1 – Alert (alert): action must be taken immediately
- 2 – Critical (crit): critical conditions
- 3 – Error (err): error conditions
- 4 – Warning (warning): warning conditions
- 5 – Notice (notice): normal but significant conditions
- 6 – Informational (info): informational messages
- 7 – Debug (debug): debug-level messages
These levels allow you to categorize messages by the severity (importance) criteria, with emergency (numerically the lowest code) being the most severe.
Syslog facilities
Syslog facilities represent the origin of a message. You can often use them for filtering and categorizing log records by the system that generated them.
Note that syslog facilities (as well as severity levels, actually) are not strictly normative, so different facilities and levels may be used by different operating systems and distributions. Many details here are historically rooted and not always utility-based.
Note that the syslog protocol specification defines only the codes for facilities. The keywords may be used by syslog daemons for readability.
- 0 – kern: kernel messages
- 1 – user: user-level messages
- 2 – mail: mail system
- 3 – daemon: system daemons
- 4 – auth: security/authorization messages
- 5 – syslog: messages generated internally by the syslog daemon
- 6 – lpr: line printer subsystem
- 7 – news: network news subsystem
- 8 – uucp: UUCP subsystem
- 9 – cron: clock (scheduling) daemon
- 10 – authpriv: security/authorization messages
- 11 – ftp: FTP daemon
- 12 – ntp: NTP subsystem
- 13 – security: log audit
- 14 – console: log alert
- 15 – clock daemon (a second slot used by some systems)
- 16–23 – local0 through local7: locally used facilities
Now that we have seen the list of all the existing facilities, pay attention to the ones such as “security”, “authpriv”, “log audit”, or “log alert”. It is possible for an application to log to different facilities depending on the nature of the message.
For example, an application might typically log to the “user” facility, but once it receives an important alert, it might log to facility 14 (log alert). Or in case of some authentication/authorization notice, it may direct it to “auth” facility, and so on.
If you have a custom application and are wondering which facility would be best suited for it, you can use the "user" facility (code 1) or the custom local facilities local0 through local7 (codes 16-23).
The ultimate difference between user and local facilities is that the former is a more general one, which aggregates the logs from different user applications. But be aware that other software might just as well use one of the local facilities on your machine.
How to Configure rsyslog to Redirect Messages to a Centralized Remote Server using TLS
Let’s now look at a practical example that I mentioned at the beginning. This might not appear to be the most basic use case – especially for those who are not familiar with syslog daemons – but it’s quite a common scenario. I hope it will help you learn a lot of useful things along the way.
Now I’ll walk you through the steps you’ll need to take to forward the syslog data from one server to another that will play the role of a centralized log aggregator. In this example, we will be sending logs as they flow in, using the TCP protocol with certificates for encryption and identity verification.
In the following examples, I assume that you have a centralized server for accepting the syslog data and one or more exporting servers that forward their syslog messages to that central accepting node. I’ll also assume that all the servers are discoverable by their respective domain names and are running Debian-based or RedHat-based Linux distributions.
So, let’s dive in and get started.
Update rsyslog
As rsyslog typically comes preinstalled on most common Linux distros, I won't cover the installation process here. Just make sure your rsyslogd is up-to-date enough to take advantage of its wide range of features.
Run the following command across all your servers:
rsyslogd -v
And ensure that the version in the output is 6 or higher.
If this is not the case, run the following commands to update your daemon:
For Debian-based distributions:
sudo apt-get update
sudo apt-get install --only-upgrade rsyslog
sudo systemctl restart rsyslog
For RedHat-based:
sudo yum update rsyslog
sudo systemctl restart rsyslog
Or use dnf instead of yum on CentOS 8/RHEL 8.
Install dependencies
To handle the secure forwarding of the messages over the network using TLS, we will need to install the rsyslog-gnutls module. If you prefer to compile rsyslog from source, you will have to specify a respective flag when building. But if you use package managers, you can simply run the following for every server:
For Debian-based distributions:
sudo apt-get update
sudo apt-get install rsyslog-gnutls
sudo systemctl restart rsyslog
For RedHat-based:
sudo yum install epel-release
sudo yum install rsyslog-gnutls
sudo systemctl restart rsyslog
Configure the exporting rsyslog server
Now, we will create an rsyslog configuration file for the nodes that are going to be exporting their logs to the central server. In order to do so, create the configuration file in the config directory of rsyslog:
sudo touch /etc/rsyslog.d/export-syslog.conf
Ensure that the file is readable by the syslog user on Debian-based distributions (chown syslog:adm /etc/rsyslog.d/export-syslog.conf or chmod 644 /etc/rsyslog.d/export-syslog.conf). Note that on RedHat-based distros like CentOS, rsyslog runs under root, so there shouldn't be any permissions issues.
Now open the created file and add the following configuration:
# Set certificate files
global(
DefaultNetstreamDriverCAFile="<path_to_your_ca.pem>"
DefaultNetstreamDriverCertFile="<path_to_your_cert.pem>"
DefaultNetstreamDriverKeyFile="<path_to_your_private_key.pem>"
)
# Set up the forwarding action for all messages
*.* action(
type="omfwd"
StreamDriver="gtls"
StreamDriverMode="1"
StreamDriverPermittedPeers="<domain_name_of_your_accepting_central_server>"
StreamDriverAuthMode="x509/name"
target="<domain_name_or_ip_of_your_accepting_central_server>" port="514" protocol="tcp"
action.resumeRetryCount="100" # you may change the queue and retry params as you see fit
queue.type="linkedList" queue.size="10000"
)
The above configuration will forward all the messages that are ingested by rsyslog to your remote server. In case you want to achieve more fine-grained control, refer to the subsection below.
Only forward logs generated by certain programs
If you want to forward messages for a certain program only, you can specify the following condition instead of *.* before action in the configuration above:
if $programname == '<your_program_name>' then
# ...right here goes your action and all the rest
If you want to specify more than one program name, add multiple conditions using or:
if ($programname == '<your_program_name1>' or $programname == '<your_program_name2>' or $programname == '<your_program_name3>') then
# ...right here goes your action and all the rest
For more information refer to the RainerScript documentation for rsyslog here.
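A quick way to sanity-check such a filter (once it is in place and rsyslog has been restarted) is to emit tagged test messages with logger, since the tag is what $programname is derived from:
logger -t <your_program_name> "this line should be forwarded"
logger -t some_other_program "this line should stay local only"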
Specify correct domain names and certificate paths in your configuration
Now, let's go back to our configuration file. It will use TLS (as you can see in StreamDriverMode="1") and forward all the data to target on port 514, which is a default port for syslog.
To make this configuration valid, you will need to replace <domain_name_of_your_accepting_central_server> and <domain_name_or_ip_of_your_accepting_central_server> with the respective domain name of your central accepting server (for example: my-central-server.company.com), as well as specify correct paths to the certificates in the global section.
Note that, since on Debian-like distros rsyslog typically runs under the syslog user, you will have to ensure that the certificates themselves and all the directories in their path are readable and accessible by this user (for directories this means that both "r" and "x" permission bits must be set).
On RedHat-based systems, on the other hand, rsyslog often runs as root, so there is no need to tweak the file permissions.
To check under which user your rsyslog runs, run the following:
sudo ps -aux | grep rsyslog
And look at the left side at the username executing rsyslogd.
If you don’t have SSL certificates yet, read the next two subsections about installing certs with Let’s Encrypt and providing access to rsyslog. If you already have all the needed certificates and permissions, you can safely skip these steps.
Install certbot certificates
First, you’ll need to install certbot. For Debian-based systems, run the following:
sudo apt-get install certbot
If you get an error that the package is not found, run sudo apt-get update and try again.
For RedHat-based systems:
sudo yum install epel-release
sudo yum install certbot
Ensure that no server is running on port 80 and then run certbot in standalone mode, specifying your domain name with the -d flag to get your SSL certificates:
sudo certbot certonly --standalone -d <your_domain_name>
# For example: sudo certbot certonly --standalone -d my-server1.mycompany.com
Follow the prompts of certbot, and in the end you will receive your SSL certificates, which will be stored at /etc/letsencrypt/live/<your_domain_name>/.
Confirm that there are no problems during the certificate renewal process like this:
sudo certbot renew --dry-run
Certificates will be automatically renewed by certbot, so you don’t have to worry about manually updating them every time. If you installed certbot as described above, it will use a systemd timer or create a cron job to handle renewals.
Give access to certificates to rsyslog
If you are running a Debian-based system then, as mentioned above, you have to grant the syslog user the necessary privileges to access certbot certificates and keys. This is because the /etc/letsencrypt/live directory with certbot-generated files is restricted to the root user only.
So, we will copy the certificates and keys over to the standard certs and keys locations. For Debian-based distributions, these are /etc/ssl/certs and /etc/ssl/private, respectively. Then we'll change the permissions of these files.
First, create a group that will have access to SSL certificates:
sudo groupadd sslcerts
Add the syslog user to this group:
sudo usermod -a -G sslcerts syslog
Grant ownership and permissions for the /etc/ssl/private directory to the created group:
sudo chown root:sslcerts /etc/ssl/private
sudo chmod 750 /etc/ssl/private
Now, we'll create a script that will copy the certificate files from Let's Encrypt's live directory to /etc/ssl. Run the following:
sudo touch /etc/letsencrypt/renewal-hooks/deploy/move-ssl-certs.sh
sudo chmod +x /etc/letsencrypt/renewal-hooks/deploy/move-ssl-certs.sh
Note that because the script lives in the /etc/letsencrypt/renewal-hooks/deploy directory, it will automatically run after every certificate renewal. This way, you won't have to worry about manually moving certificates and granting permissions every time they expire.
Open the created file and add the following content, replacing <your-domain-name> with the domain of your machine, which corresponds to the directory created by certbot in /etc/letsencrypt/live:
#!/bin/bash
# Define the source and destination directories
DOMAIN_NAME=<your-domain-name>
LE_LIVE_PATH="/etc/letsencrypt/live/$DOMAIN_NAME"
SSL_CERTS_PATH="/etc/ssl/certs"
SSL_PRIVATE_PATH="/etc/ssl/private"
# Copy the full chain and private key to the respective directories
cp "$LE_LIVE_PATH/fullchain.pem" "$SSL_CERTS_PATH/$DOMAIN_NAME-letsencrypt-fullchain.pem"
cp "$LE_LIVE_PATH/cert.pem" "$SSL_CERTS_PATH/$DOMAIN_NAME-letsencrypt-cert.pem"
cp "$LE_LIVE_PATH/privkey.pem" "$SSL_PRIVATE_PATH/$DOMAIN_NAME-letsencrypt-privkey.pem"
# Set ownership and permissions
chown root:sslcerts "$SSL_CERTS_PATH/$DOMAIN_NAME-letsencrypt-fullchain.pem"
chown root:sslcerts "$SSL_CERTS_PATH/$DOMAIN_NAME-letsencrypt-cert.pem"
chown root:sslcerts "$SSL_PRIVATE_PATH/$DOMAIN_NAME-letsencrypt-privkey.pem"
chmod 644 "$SSL_CERTS_PATH/$DOMAIN_NAME-letsencrypt-fullchain.pem"
chmod 644 "$SSL_CERTS_PATH/$DOMAIN_NAME-letsencrypt-cert.pem"
chmod 640 "$SSL_PRIVATE_PATH/$DOMAIN_NAME-letsencrypt-privkey.pem"
Now, execute the created script to actually copy the certificates to /etc/ssl and give permissions to the syslog user:
sudo /etc/letsencrypt/renewal-hooks/deploy/move-ssl-certs.sh
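Before wiring the new paths into rsyslog, you can quickly verify that the syslog user is actually able to read the copied private key (the path below assumes the script above):
# Should print the first line of the PEM file instead of "Permission denied"
sudo -u syslog head -n 1 /etc/ssl/private/<your-domain-name>-letsencrypt-privkey.pem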
Lastly, go into the rsyslog configuration file /etc/rsyslog.d/export-syslog.conf and change the certificate paths accordingly:
# Set certificate files
global(
DefaultNetstreamDriverCAFile="/etc/ssl/certs/<your_domain_name>-letsencrypt-fullchain.pem"
DefaultNetstreamDriverCertFile="/etc/ssl/certs/<your_domain_name>-letsencrypt-cert.pem"
DefaultNetstreamDriverKeyFile="/etc/ssl/private/<your_domain_name>-letsencrypt-privkey.pem"
)
Note that even though rsyslog typically runs as root on RedHat-based distributions, you may find that it’s not the case for your system.
If it's not, you can do the same permission manipulations as we did above. But the default recommended locations for SSL certificates and keys might differ: for CentOS they are /etc/pki/tls/certs and /etc/pki/tls/private. You can also always choose completely different locations if need be.
Configure accepting rsyslog server
Let’s now configure a central server that will accept the logs from the rest of the machines.
If you haven’t acquired SSL certificates for your server, refer to the section on installing certbot certificates.
If your server is Debian-based, refer to the section on giving access to certificates to rsyslog.
Now, similar to configuring the exporting server, create an rsyslog configuration file:
sudo touch /etc/rsyslog.d/import-syslog.conf
Open the file and add the following:
# Set certificate files
global( DefaultNetstreamDriverCAFile="/etc/ssl/certs/<your_domain_name>-letsencrypt-fullchain.pem"
DefaultNetstreamDriverCertFile="/etc/ssl/certs/<your_domain_name>-letsencrypt-cert.pem"
DefaultNetstreamDriverKeyFile="/etc/ssl/private/<your_domain_name>-letsencrypt-privkey.pem"
)
# TCP listener
module(
load="imtcp"
PermittedPeer=["<your_peer1>","<your_peer2>","<your_peer3>"]
StreamDriver.AuthMode="x509/name"
StreamDriver.Mode="1"
StreamDriver.Name="gtls"
)
# Start up listener at port 514
input(
type="imtcp"
port="514"
)
Note that you need to replace PermittedPeer=["<your_peer1>","<your_peer2>","<your_peer3>"] with an array of the domain names of your export servers, for example: PermittedPeer=["export-server1.company.com","export-server2.company.com","export-server3.company.com"].
Also don't forget to double-check and change your certificate paths in the global section as needed. Again, if you are on a RedHat-based system, you may simply reference the certificates in Let's Encrypt's live directory because of the root permissions.
Ensure firewall is not blocking your traffic
Make sure that the firewall on your central server doesn't block incoming traffic on port 514. For example, if you are using iptables:
To check whether the rule already exists:
sudo iptables -C INPUT -p tcp --dport 514 -j ACCEPT
If the previous command exits with an error, you can define an accepting rule with:
sudo iptables -A INPUT -p tcp --dport 514 -j ACCEPT # rules will apply immediately
sudo sh -c "iptables-save > /etc/iptables/rules.v4" # or use `iptables-save > /etc/sysconfig/iptables` for RedHat-based distributions
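If your central server uses firewalld instead of plain iptables (common on RedHat-based systems), the equivalent would be something along these lines:
sudo firewall-cmd --permanent --add-port=514/tcp
sudo firewall-cmd --reload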
Restart rsyslog
Now, after you’ve added the appropriate configurations to all your servers, you have to restart rsyslog on all of them, beginning with your central accepting node:
sudo systemctl restart rsyslog
You can check if there are any errors after the rsyslog restart by executing the following:
sudo systemctl status rsyslog
sudo journalctl -u rsyslog | tail -100
The first command above will display the status of rsyslog, and the second one will output the last 100 lines of rsyslog's logs. If you misconfigured something and your setup didn't work, you should find helpful information there.
Test the configuration
In order to test whether your syslog redirection worked, issue the following command on the central node to start watching for new data coming in to syslog:
For Debian-based systems:
tail -f /var/log/syslog
For RedHat-based:
tail -f /var/log/messages
After that, go to each of your export nodes and run:
logger "Hello, world!"
You should see a “Hello, world!” message from each export server popping up in the syslog of your accepting machine.
If everything worked, then congrats! You have now successfully set up and verified syslog redirection over the network.
Note: press Ctrl+C to exit from the tail -f command executed on the central node.
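If the message doesn't show up, a useful low-level check from an export node is to confirm that the central server's TLS listener is reachable and presents a certificate. openssl's s_client can do that. Keep in mind that, because the listener expects a client certificate in x509/name mode, the session may be closed right after the handshake, which is fine for this check:
openssl s_client -connect <domain_name_of_your_accepting_central_server>:514 </dev/null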
In a later section, we will consider the same scenario but without certificates in case all your servers are located in a trusted local network. After that, we will finally explore how to redirect actual data from different applications to syslog.
How to store remote logs in a separate file
But wait a second – before moving on, let’s consider a small useful modification to our setup.
Let's say you want to set up your central rsyslog in such a way that it will redirect remote traffic to a separate file instead of the typical /var/log/syslog or /var/log/messages.
To do this, make the following changes to your /etc/rsyslog.d/import-syslog.conf:
Add a ruleset property to the input object:
input(
type="imtcp"
port="514"
ruleset="remote"
)
Then add the following line at the bottom of the file:
ruleset(name="remote") {
if $hostname == '<your_remote_hostname>' then {
action(type="omfile" file="/var/log/remote-logs.log")
stop
}
}
Change <your_remote_hostname> accordingly. You can also define multiple hostnames with an or as we have seen before. Also feel free to change the path of the output file (that is, /var/log/remote-logs.log) to suit your needs.
After that, restart rsyslog.
Performance considerations
Rsyslog is a very light and performant tool for managing and forwarding your logs over the network. Still, running a TCP connection and a TLS handshake to validate certificates for every log message (or batch of messages) comes with its costs.
In the next section, you’ll learn how to perform TCP and UDP forwarding without TLS certificates. This will typically be a more performant way, but you should only use it in a trusted local network.
As for UDP, even though it’s more performant than TCP, you should use it only if potential data losses are acceptable.
If you don’t need a near real time log delivery, you might be better off storing all your logs in a single file (you can do this with rsyslog or by employing other tools or techniques). Then you can schedule a separate script, which will transfer this file to a central server when it reaches a certain size or when certain time intervals elapse.
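As a rough sketch of that batch approach (assuming password-less rsync/SSH access to the central server and a hypothetical /var/log/myapp.log file; adjust paths, user, and schedule to your needs), a cron entry could look like this:
# /etc/cron.d/ship-logs -- hypothetical batch log shipper, runs daily at 02:00
0 2 * * * root rsync -az /var/log/myapp.log central.example.com:/var/log/remote/$(hostname)-myapp.log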
In any case, before employing a particular solution, make sure you do a benchmark focusing on load testing your system to discover which approach works best for you.
How to Configure rsyslog to Redirect Messages to a Centralized Remote Server Over a Local Network
If your scenario doesn’t involve communications over an untrustworthy network, you might decide not to use certificates for forwarding your syslog records. I mean, TLS handshakes are costly after all!
The configuration we'll discuss now also gets quite a bit simpler. It involves fewer steps, since our main concern when setting up syslog forwarding with TLS was the SSL certificates and their file permissions.
Exporting server setup
To configure your exporting server to forward syslog data using TCP without encryption, login to every exporting server and create an rsyslog configuration file:
sudo touch /etc/rsyslog.d/export-syslog.conf
Open this file and add the following configuration, replacing <your_remote_server_hostname_or_ip> with the hostname or IP of your central node, which must be discoverable on your network:
*.* action(
type="omfwd"
target="<your_remote_server_hostname_or_ip>"
port="514"
protocol="tcp"
action.resumeRetryCount="100"
queue.type="linkedList"
queue.size="10000"
)
If you want to use the UDP protocol, you can simply change protocol="tcp" to protocol="udp".
In case you are now wondering whether we could use UDP to forward the traffic in our previous example with certificates, then the answer is no. This is because a TLS handshake works over TCP but not UDP. At least it was originally designed this way, and even though there might exist certain variations and protocol modifications in the wild, they are very tricky and definitely out of the scope of this handbook.
Note that there is an alternative simpler but less flexible syntax for writing the above configuration.
For forwarding over TCP:
*.* @@<your_remote_server_hostname>:514
For forwarding over UDP:
*.* @<your_remote_server_hostname>:514
I am showing you these syntax variations because you may encounter them in other articles. These variations replicate the syntax of sysklogd daemon (yes, one of the first syslog daemon implementations which rsyslog is a backwards compatible fork of).
Accepting server setup
Create an rsyslog configuration file on your accepting server:
sudo touch /etc/rsyslog.d/import-syslog.conf
Open the file and add the following contents:
module(load="imtcp") # Load the imtcp module
input(type="imtcp" port="514") # Listen on TCP port 514
module(load="imudp") # Load the imudp module for UDP
input(type="imudp" port="514") # Listen on UDP port 514
Legacy syntax alternatives of the config file for the receiving server would be the following:
For TCP:
$ModLoad imtcp # Load the imtcp module for TCP
$InputTCPServerRun 514 # Listen on TCP port 514
For UDP:
$ModLoad imudp # Load the imudp module for UDP
$UDPServerRun 514 # Listen on UDP port 514
Note, though, that even though you might sometimes encounter this legacy rsyslog syntax for receiving messages, it is not compatible with sysklogd.
To set up a listener in sysklogd, you would have to set a different special variable in the configuration file as described here. But going into details about sysklogd is outside the scope of this article.
Also, I don’t recommend that you use the old syntax (neither does the author of rsyslog). It is presented purely for educational purposes, so that you know what it is in case you encounter it.
You can read more about rsyslog’s configuration formats here.
In case you use a firewall, check that its settings allow incoming UDP or TCP connections on port 514.
Restart rsyslog and test
Go to every machine, starting with the accepting server, and restart rsyslog. Then check that there are no errors in its logs:
sudo systemctl restart rsyslog
sudo journalctl -u rsyslog | tail -100
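Then repeat the same test as in the TLS section: tail the log file on the central server and emit a message from an export node. With a recent util-linux logger you can also poke the listener directly, bypassing the local rsyslog entirely (this assumes the UDP input on port 514 is enabled):
tail -f /var/log/syslog            # on the central server (or /var/log/messages)
logger "Hello over the LAN"        # on an export node, goes through the local rsyslog
logger --server <your_remote_server_hostname_or_ip> --port 514 --udp "direct test"   # straight to the listener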
Other Possibilities for Log Forwarding
Rsyslog is a very powerful tool with a lot more functionality than we have covered so far. For example, it supports direct forwarding of the logs to Elasticsearch, which is a very performant log storage system. But that’s a separate topic which deserves its own article.
For now, I will just give you a taste of what an example rsyslog-to-Elasticsearch configuration might look like:
# Note that you will have to install "rsyslog-elasticsearch" using your package manager like apt or yum
module(load="omelasticsearch") # Load the Elasticsearch output module
# Define a template to construct a JSON message for every rsyslog record, since Elasticsearch works with JSON
template(name="plain-syslog"
type="list") {
constant(value="{")
constant(value=""@timestamp":"") property(name="timereported" dateFormat="rfc3339")
constant(value="","host":"") property(name="hostname")
constant(value="","severity":"") property(name="syslogseverity-text")
constant(value="","facility":"") property(name="syslogfacility-text")
constant(value="","syslogtag":"") property(name="syslogtag")
constant(value="","message":"") property(name="msg" format="json")
constant(value=""}n")
}
# Redirect all logs to syslog-index of Elasticsearch which listens on localhost:9200
*.* action(type="omelasticsearch"
server="localhost:9200"
searchIndex="syslog-index"
template="plain-syslog")
For more information refer to the docs.
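Once something like this is in place, a quick way to confirm that records are actually landing in Elasticsearch is to query the index it writes to (assuming Elasticsearch answers on localhost:9200 and the index name used above):
curl -s "http://localhost:9200/syslog-index/_search?pretty&size=3"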
How to Redirect Data from Applications to syslog
So far we have covered configuring a syslog daemon. But how do we actually push logs from real applications into syslog?
Ideally, it would be best if your application already had a syslog integration and could be configured to send the logs to syslog directly.
But what if this is not the case? Well, it certainly is a pity, because manually redirecting stdout and stderr to syslog might come with its challenges and inconveniences. But don't worry, it's not that complicated! At least sort of.
Let’s take a look at different scenarios:
Standalone host application and syslog
First of all, let’s consider that you already have an application running locally on your host machine (no containerization). There are multiple ways to redirect its logs in this case.
Instead of using general example commands and constantly repeating “change <you_app_blah_blah> in the command above accordingly” (which I am quite tired of at this point), I am going to install a real application and show the redirection with concrete examples.
The application of choice will be Mosquitto broker, since I am very fond of MQTT, but you can use whatever, as it’s just an example.
Oh, and by the way, Mosquitto does provide a direct integration with syslog. It just requires a small change (log_dest syslog) in its configuration file. But we will not be looking into this, since our assumption is that we are working with an application incapable of interfacing with syslog directly.
Here’s how to install the broker on Debian-based systems:
sudo apt-get update
sudo apt-get install mosquitto
And here’s RedHat-based installation:
sudo yum install epel-release
sudo yum install mosquitto
After the installation, Mosquitto might be automatically run in the background, so I stop it with sudo systemctl stop mosquitto.
Redirecting logs to syslog when running in foreground
You can run Mosquitto in the foreground and redirect all its logs to syslog using “info” level and local0 facility:
sudo mosquitto -c /etc/mosquitto/mosquitto.conf -v 2>&1 | sudo logger -t mosquitto -p local0.info
- The -c option specifies a non-default Mosquitto configuration file and may be omitted.
- -v enables verbose mode, which produces more output.
- The -t flag provided to the logger command is a TAG representing the program name.
Note that the default facility of the logger tool is user and the default severity level is notice.
Forwarding all the output into syslog with a common severity level is good and all, but it would make more sense to be able to distinguish at least between info and error messages.
To be able to distinguish those, you will have to write a custom bash script. Below you can see an example. Note that the strange-looking part "> >(…)" is a Bash process substitution feature.
#!/bin/bash
# Define your application command here
command="/usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf -v"
programname="mosquitto"
# Use process substitution to handle stdout and stderr separately
$command \
  2> >(while read line; do logger -p local0.error -t "$programname" "$line"; done) \
  > >(while read line; do logger -p local0.info -t "$programname" "$line"; done)
To run the above script, just save it to a file, give it execute permissions with sudo chmod +x /path/to/your_script.sh, and run it with sudo ./your_script.sh.
Something to be aware of is that starting Mosquitto is not the most suitable command for the example above, since it redirects all its logging output to stderr by default. So you will only see messages with “error” severity in the syslog log files.
Now, here is an example of a bash script in case you want to determine severity level by parsing each application’s log record from stdout or stderr and base a severity level off of some specific substrings in each record (for example, “ERROR”, “WARN”, “INFO”, and so on):
#!/bin/bash
# Define your application command here
command="/usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf -v"
programname="mosquitto"
# Execute command and pipe its stdout and stderr to a while loop
$command 2>&1 | while read line; do
# Determine the severity level based on the content of the line
if [[ "$line" == *"Error:"* ]]; then
logger -t "$programname" -p user.err "$line" # Forward error messages as errors
elif [[ "$line" == *"Warning"* ]]; then
logger -t "$programname" -p user.warning "$line" # Forward warning messages as warnings
else
logger -t "$programname" -p user.info "$line" # Forward all other messages as info
fi
done
Redirecting logs to syslog when running in background with systemctl
Many applications run as daemons (in the background). Oftentimes they can be started and managed using the systemctl process management tool or similar.
If you want to redirect the logs of an application that runs as a systemctl daemon to syslog, follow the example below.
Here are the steps you’ll need to perform when running Mosquitto broker in background:
Step 1: create a custom sh script:
sudo touch /usr/local/bin/mosquitto_with_logger.sh
Step 2: open the file and add the following content:
#!/bin/bash
/usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf -v 2>&1 | logger -t mosquitto
Step 3: add execute permissions to the script:
sudo chmod +x /usr/local/bin/mosquitto_with_logger.sh
Step 4: create a systemd unit file:
sudo touch /etc/systemd/system/mosquitto_syslog.service
Step 5: open the file and add the following:
[Unit]
Description=Mosquitto MQTT Broker with custom logging
After=network.target
[Service]
ExecStart=/usr/local/bin/mosquitto_with_logger.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target
Step 6: reload systemd, enable the custom service to be run on system startup, and finally start it:
sudo systemctl daemon-reload
sudo systemctl enable mosquitto_syslog.service
sudo systemctl start mosquitto_syslog.service
Redirecting logs from existing log files
In case your application only logs to a file and you want to redirect these logs to syslog, see the following rsyslog configuration file that you can place in /etc/rsyslog.d/ with a .conf file extension:
module(load="imfile" PollingInterval="10") # Load the imfile module
# For info logs
input(type="imfile"
File="/path/to/your/app-info.log"
Tag="myapp"
Severity="info"
Facility="local0")
# For error logs
input(type="imfile"
File="/path/to/your/app-error.log"
Tag="myapp"
Severity="error"
Facility="local0")
# you can put your actions that will forward the data here
# or rely on the actions from the original rsyslog.conf file that imports this file
The configuration above assumes that you have separate files for info and error logs.
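To check that imfile is picking the files up (assuming the paths above exist and rsyslog has been restarted), append a line to one of the monitored files and watch for it in your syslog output; with the polling interval above it may take up to 10 seconds to appear:
echo "imfile test line $(date)" | sudo tee -a /path/to/your/app-info.log
sudo tail -n 5 /var/log/syslog    # or /var/log/messages on RedHat-based systems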
In principle, you could also forward all the contents from a single file by either assigning a common severity or by trying to parse out each new line in the file and guess its intended log level. This would require us to use rsyslog’s rulesets similar to the following:
module(load="imfile" PollingInterval="10") # Load the imfile input module
# Template for formatting messages with original severity and facility
template(name="CustomFormat" type="string" string="<%PRI%>%TIMESTAMP% %HOSTNAME% %msg%n")
# Monitor a specific logfile
input(type="imfile"
File="/path/to/your/logfile.log"
Tag="myApp"
Ruleset="guessSeverity")
# Ruleset to parse log entries and guess severity
ruleset(name="guessSeverity") {
# Use property-based filters to check message content and route accordingly
if ($msg contains "Error:") then {
action(type="omfile"
File="/var/log/errors.log" # Specify the log file for error messages
Template="CustomFormat"
)
} else if ($msg contains "Warning:") then {
action(type="omfile"
File="/var/log/warnings.log" # Specify the log file for warning messages
Template="CustomFormat"
)
} else {
action(type="omfile"
File="/var/log/info.log" # Specify the default log file for other messages
Template="CustomFormat"
)
}
}
You should ensure that /path/to/your/logfile.log exists before applying the above configuration.
We used rulesets above, which are another nice feature of rsyslog. You can read more on this in the official documentation.
However, the above configuration explicitly sets the destination for processed messages, directing them to different files depending on their severity. If you want to forward the messages to the standard /var/log/messages or /var/log/syslog, you will have to specify that explicitly (and amend/add more templates to reflect the appropriate severity levels).
But what if you have many other rules in your main rsyslog config file? You don’t want to repeat them in the ruleset above and just want to parse out the severity level and pass the record on to rsyslog’s main configuration to handle the rest?
Unfortunately, I didn't find a nice way of doing this. There is just one hacky approach: resubmit your record back to rsyslog using the logger utility and the omprog module. I will show this approach anyway for completeness, and since it's a good way to see more rsyslog features. But be aware that it involves some overhead, since you'll basically be invoking rsyslog twice for every record.
To resubmit a record back to rsyslog, include the omprog module:
module(load="omprog")
And change the actions inside the if-else tree to the following:
action(type="omprog"
Template="CustomFormat" # Optional property to format the message
binary="/usr/bin/logger -t myApp -p local0.error"
)
By the way, don’t forget to make sure that the log files are accessible to the user under which rsyslog runs.
I recommend that you keep all the parsing and log redirection logic in rsyslog config files. But if you don't want to do so, and would rather avoid creating a separate rsyslog configuration for your specific use case, below you can find a bash script that does what we have done above.
The script tails a log file, parses each record to assign an appropriate severity level, and forwards these records to syslog:
#! /bin/bash
tail -F /path/to/log-file.log | while read line; do
if [[ "$line" == *"Error:"* ]]; then
logger -p local0.err "$line" # Forward error messages as errors
else
logger -p local0.info "$line" # Forward other messages as info
fi
done
If you want to test whether the above script works, just create log-file.log, run the script, and then issue echo "Error: this is a test error message" >> log-file.log in a separate terminal. After that you should see the error message in the rsyslog log file /var/log/messages or /var/log/syslog.
Running the script above directly will block your terminal until it completes. So for practical scenarios, you'll want to dispatch it to the background using, for example, setsid or other tools.
One important thing before we move on is that when testing the above scripts and configurations, be aware that your rsyslog might have a deduplication feature on. If this is the case, duplicate messages might not get processed. This is a legacy feature but chances are that it’s still in your configuration (mainly on Debian-based systems). Read more here.
In addition, you can drop unwanted messages.
Docker and syslog
By default, the logs written to stdout/stderr by applications running in Docker containers are stored in files under /var/lib/docker/containers (the exact path may depend on your system).
To access the logs for a particular container, you can use docker logs <container name or container id>. But what if you want to redirect the stdout and stderr of your containerized applications into syslog directly? Then there are again multiple options. Below I will be using a Mosquitto broker container as an example.
Configuring a single Docker container
If you are starting a container using a docker run command, refer to the example below:
docker run -it -d -p 1883:1883 -v /etc/mosquitto/mosquitto.conf:/mosquitto/config/mosquitto.conf --log-driver=syslog --log-opt syslog-address=udp://192.168.0.1:514 --log-opt tag="docker-{{.Name}}-{{.ID}}" eclipse-mosquitto:2
In this example:
- --log-driver=syslog specifies that the syslog driver should be used.
- --log-opt syslog-address=udp://192.168.0.1:514 specifies the protocol, address, and port of your syslog server. If you have a syslog server running locally and just want your logs to appear under /var/log on your local machine, then you can omit this option.
- --log-opt tag="docker-{{.Name}}-{{.ID}}" sets a custom TAG field for the logs from this container. {{.Name}} will resolve to the container name, while {{.ID}} resolves to the container ID. Note that you shouldn't use slashes ("/") here, as rsyslog will not parse them and will truncate the TAG parts which follow. But it will work with hyphens ("-"). This also implies that rsyslog tries its best to parse all the possible message formats, and it might not always be what you expect. You can read more here.
- The rest of the flags, like -it -d, -p, and -v, are container-specific flags which specify the mode of the container, mapped ports, volumes, and so on. You can read more about them in detail in this article.
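To confirm that a running container really ended up with the syslog driver, you can inspect its log configuration:
docker inspect --format '{{.HostConfig.LogConfig.Type}}' <container_name_or_id>   # should print: syslog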
Configuring a Docker service through docker-compose file
If you are using docker-compose instead of executing docker run directly, here is an example docker-compose.yml file:
version: '3'
services:
  mosquitto:
    image: eclipse-mosquitto:2
    logging:
      driver: syslog
      options:
        syslog-address: "udp://192.168.0.1:514"
        tag: "docker-{{.Name}}-{{.ID}}"
    ports:
      - 1883:1883
      - 8883:8883
      - 9001:9001
    volumes:
      - ./mosquitto/config:/mosquitto/config
Pay attention to the driver, syslog-address, and tag directives, which are similar to those in the docker run example.
Configuring a default for every container through the Docker daemon
If you don't want to specify log driver options in every docker-compose file or every time you use a docker run command, you can set the following configuration in /etc/docker/daemon.json, which will apply it globally:
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://192.168.0.1:514",
    "tag": "docker-{{.Name}}-{{.ID}}"
  }
}
After that, restart Docker with sudo systemctl restart docker or sudo service docker restart.
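You can verify that the daemon-wide default took effect with:
docker info --format '{{.LoggingDriver}}'   # should print: syslog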
Enabling applications inside Docker to log to syslog directly
If you have an application which is able to forward its logs to syslog directly (such as Mosquitto) and you want to use it in a container, then you will have to map the local /dev/log to /dev/log inside the container.
For that, you can use the volumes section of docker-compose.yml or the -v flag of the docker run command.
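For example, with the Mosquitto image used earlier and log_dest syslog set in its configuration file, a run command along these lines (a sketch, not a complete setup) lets the broker write straight to the host's syslog socket:
docker run -d -p 1883:1883 \
  -v /etc/mosquitto/mosquitto.conf:/mosquitto/config/mosquitto.conf \
  -v /dev/log:/dev/log \
  eclipse-mosquitto:2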
How to Use Logging Libraries for Your Programming Language to log to syslog
Now, what if you are developing an application or need to create some custom aggregation script which forwards messages from certain apps or devices to syslog?
To give a simple real world example, you might want to build a control console for your IoT devices.
Let’s say you have a bunch of devices that connect to an MQTT broker. Whenever those devices generate log messages, they publish them to a certain MQTT topic.
In this case you might want to create a custom script that subscribes to this topic, receives messages from it, and forwards them to syslog for further storage and processing. This way you will gather all your logs in one place, with the ability to further visualize and manage them with tools such as Splunk or the Elastic Stack, or to run any statistics or reports on them.
Below, I am going to show you how to fire messages from your Node.js or Python application to syslog. This will enable you to implement your custom applications that work with syslog.
Note that the example scenario above gave a practical use case for what we will see in this section. But we will not explore it further, since it would require a bit more time and effort and might lead us off of the main point of this guide.
But if you are interested in managing something like the above, you can easily extend the scripts I show below by using the MqttJS library and connecting to the broker with Node.js as described here or using Paho MQTT Python client as shown in this tutorial.
Node.js client
Unfortunately, there aren't many popular, well-maintained, battle-proven libraries for syslog logging with Node.js. But one good option you can use is a flexible general-purpose logging library called winston. It's quite usable in both small and larger scale projects.
When installing winston, you will also have to additionally install a custom transport called winston-syslog:
npm install winston winston-syslog
Here is a usage example:
const winston = require('winston');
require('winston-syslog').Syslog;
const logger = winston.createLogger({
levels: winston.config.syslog.levels,
format: winston.format.printf((info) => {
return `${info.message}`;
}),
transports: [
new winston.transports.Syslog({
app_name: 'MyNodeApp',
facility: 'local0',
type: 'RFC5424',
protocol: 'unix', // Use Unix socket
path: '/dev/log', // Path to the Unix socket for syslog
})
]
});
// Log messages of various severity levels
// When using the emerg level you might get some warnings in your terminal
// But don't panic - this is expected, since it's the most severe level
logger.emerg('This is an emerg message.');
logger.alert('This is an alert message.');
logger.crit('This is a crit message.');
logger.error('This is an error message.');
logger.warning('This is a warning message.');
logger.notice('This is a notice message.');
logger.info('This is an informational message.');
logger.debug('This is a debug message.');
logger.end();
Note that if you remove the format property from the object passed to createLogger, you will see a JSON payload consisting of a message and severity level for messages in syslog. That's the default format of records parsed by winston-syslog.
Python client
In the case of Python, you don't even have to install any third-party dependencies, as Python already comes with two quite capable libraries: syslog and logging. You can use either one.
The former is tailored to working with syslog specifically, while the latter can also handle other log transports (stdout, file, and so on). It can also often be seamlessly extended to work with syslog for existing projects.
Here is an example of using the “syslog” library:
import syslog
# Log a single info message
# Triggers an implicit call to openlog() with no parameters
syslog.syslog(syslog.LOG_INFO, "This is an info message.")
# You can also set the facility
syslog.openlog(ident="MyPythonApp", facility=syslog.LOG_LOCAL0)
# messages with different severity levels and LOG_LOCAL0 facility
syslog.syslog(syslog.LOG_EMERG, "This is an emerg message.")
syslog.syslog(syslog.LOG_ALERT, "This is an alert message.")
syslog.syslog(syslog.LOG_CRIT, "This is a critical message.")
syslog.syslog(syslog.LOG_ERR, "This is an error message.")
syslog.syslog(syslog.LOG_WARNING, "This is a warning message.")
syslog.syslog(syslog.LOG_NOTICE, "This is a notice message.")
syslog.syslog(syslog.LOG_INFO, "This is an informational message.")
syslog.syslog(syslog.LOG_DEBUG, "This is a debug message.")
# Close the log if necessary (usually handled automatically at program exit)
syslog.closelog()
And here is an example of using the "logging" library. Note that "logging" has a predefined set of log levels which doesn't fully align with the syslog severity levels (for example, levels like "crit", "emerg", and "notice" are missing by default). You can, however, extend it when needed - we'll keep it simple here, but see the short sketch after the example below. For more information, refer here:
import logging
import logging.handlers
# Create a logger
logger = logging.getLogger('MyPythonApp') # Set application name
logger.setLevel(logging.INFO) # Set the default log level
# Create a SysLogHandler
syslog_handler = logging.handlers.SysLogHandler(address='/dev/log', facility=logging.handlers.SysLogHandler.LOG_LOCAL0)
# Optional: format the log message
# Set a format that can be parsed by rsyslog.
# The format below is a simplification of RFC3164
# Note that the PRI value will be prepended to the message automatically
formatter = logging.Formatter("%(name)s: %(message)s")
syslog_handler.setFormatter(formatter)
# Add the SysLogHandler to the logger
logger.addHandler(syslog_handler)
# Log messages with standard logging levels
logger.debug('This is a debug message.')
logger.info('This is an informational message.')
logger.warning('This is a warning message.')
logger.error('This is an error message.')
logger.critical('This is a critical message.')
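As mentioned above, the standard logging module can be extended with the missing syslog levels. Below is a minimal sketch of one way to do it: it registers a custom NOTICE level (the numeric value 25 and the helper name are my own choices, not part of the standard library) and maps it onto the syslog "notice" priority via SysLogHandler's internal priority_map:
import logging
import logging.handlers

# Register a custom NOTICE level between INFO (20) and WARNING (30)
NOTICE = 25
logging.addLevelName(NOTICE, "NOTICE")

# Map the new level name to the syslog "notice" priority;
# without this entry, SysLogHandler falls back to "warning"
logging.handlers.SysLogHandler.priority_map["NOTICE"] = "notice"

# Convenience method so you can call logger.notice(...)
def notice(self, message, *args, **kwargs):
    if self.isEnabledFor(NOTICE):
        self._log(NOTICE, message, args, **kwargs)

logging.Logger.notice = notice

logger = logging.getLogger('MyPythonApp')
logger.setLevel(logging.DEBUG)
handler = logging.handlers.SysLogHandler(address='/dev/log', facility=logging.handlers.SysLogHandler.LOG_LOCAL0)
handler.setFormatter(logging.Formatter("%(name)s: %(message)s"))
logger.addHandler(handler)

logger.notice('This is a notice message.')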
Alternatively, you can use a third-party library such as loguru. The built-in libraries are quite powerful and sufficient for most use cases, but if you're already using a library like loguru in your project, you can extend it to work with syslog:
from loguru import logger
import logging
import logging.handlers
class SyslogHandler:
    def __init__(self, appname, address='/dev/log', facility=logging.handlers.SysLogHandler.LOG_USER):
        self.appname = appname
        self.handler = logging.handlers.SysLogHandler(address=address, facility=facility)
        self.loglevel_map = {
            "TRACE": logging.DEBUG,
            "DEBUG": logging.DEBUG,
            "INFO": logging.INFO,
            "SUCCESS": logging.INFO,
            "WARNING": logging.WARNING,
            "ERROR": logging.ERROR,
            "CRITICAL": logging.CRITICAL,
        }

    def write(self, message):
        # Extract the log level, message text, and other necessary information
        loglevel = self.loglevel_map.get(message.record["level"].name, logging.INFO)
        logmsg = f"{self.appname}: {message.record['message']}"
        # Create a log record that the standard logging system can understand
        record = logging.LogRecord(name=self.appname, level=loglevel, pathname="", lineno=0, msg=logmsg, args=(), exc_info=None)
        self.handler.emit(record)

    def flush(self):
        pass
# Configure Loguru to use the above defined SyslogHandler
appname = "MyPythonApp"
logger.add(SyslogHandler(appname), format="{message}")
# Now you can log messages and they will be directed to syslog
logger.info("This is an informational message sent to syslog.")
Conclusion
In this handbook, you learned all about syslog. We clarified the confusing terminology, explored its use cases, and saw a lot of usage examples.
The main points to take away are:
- Syslog is a protocol describing the common format of message exchange between applications and syslog daemons. The latter take on message parsing, enrichment, transport, and storage.
- People commonly (and colloquially) refer to the whole infrastructure of syslog daemons, their configuration, log storage files, and listening sockets as "syslog". "Redirect logs to syslog" means redirecting the logs to the /dev/log socket, where they will be picked up by a syslog daemon, processed, and saved according to its configuration.
- There are two standard syslog message formats: the obsolete RFC3164 and the newer RFC5424.
- Some well known syslog daemons include: sysklogd (Linux), rsyslog (Linux), syslog-ng (Linux), and nxlog (cross-platform).
- Rsyslog and other log daemons can forward logs from one server to another. You can use this to create a log collecting infrastructure with a central server processing all the logs coming from the rest of the nodes.
- Even though it incurs some overhead, it's important to forward logs over TLS whenever they travel across an untrusted network.
- The rsyslog daemon is a lightweight and powerful tool with many features. It can collect messages from different sources, including files and network. It can process this data using customizable templates and rulesets, and then either save it to disk or forward it elsewhere. Rsyslog can also directly integrate with Elasticsearch, among other capabilities.
- It is possible to forward the logs of an application to syslog even if it does not provide a native integration. You can do this for standalone host applications, containerized systems, or through an aggregation script written in a programming language of your choice.
- The output of standalone apps (stdout and stderr) can be captured and piped to the logger Linux utility. Docker provides a dedicated syslog logging driver, while many programming languages have dedicated logging libraries.
Thanks for reading, and happy logging!