Red Hat (RHEL) System Logs

Ship system log files from Red Hat Enterprise Linux (RHEL) to Logstash

Configure Filebeat to ship logs from Red Hat systems to Logstash and Elasticsearch.

Install Integration

Please click on the Install Integration button to configure your stack for this source.

Install Filebeat

To get started you will need to install Filebeat. To do this you have two main options:

  • Choose the AMD / Intel file (x86_64) or
  • Choose the ARM file (aarch64)

You can tell whether your machine has an ARM CPU architecture by opening a terminal and running the arch command. If it displays aarch64 you have ARM architecture; if it displays x86_64 you have AMD / Intel architecture.
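For example, assuming you are on the RHEL host you want to ship logs from:

# Print the machine hardware architecture
arch
# x86_64  -> download the AMD / Intel (x86_64) package
# aarch64 -> download the ARM (aarch64) package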

To successfully install Filebeat you will need root access.

If you have an x86_64 system, download and install Filebeat using the following commands:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.15.2-x86_64.rpm
sudo rpm -vi filebeat-8.15.2-x86_64.rpm    

If you have an aarch64 system, download and install Filebeat using the following commands:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.15.2-aarch64.rpm
sudo rpm -vi filebeat-8.15.2-aarch64.rpm
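Once the package is installed, you can optionally run a quick sanity check to confirm the package and binary are in place before moving on:

# Confirm the installed package and the Filebeat binary version
rpm -q filebeat
filebeat version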

Enable The System Module

There are several built-in Filebeat modules you can use. You will need to enable the system module.

Filebeat was installed from a package, so the filebeat command is already on your PATH. Run the following commands to list the available modules and enable the system module:

sudo filebeat modules list
sudo filebeat modules enable system
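Enabling a module simply renames its configuration file in the modules.d directory to drop the .disabled suffix, so you can confirm the system module is active by checking that the file below exists (path shown assumes the default RPM install location):

# The system module config should now exist without a .disabled suffix
ls -l /etc/filebeat/modules.d/system.yml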

Navigate to the modules.d folder (for RPM installations this is /etc/filebeat/modules.d), copy the snippet below and replace the contents of the system.yml module file:

# Module: system
# Docs: https://www.elastic.co/guide/en/beats/filebeat/8.15.2/filebeat-module-system.html
 
- module: system
  # Syslog
  syslog:
    enabled: true
 
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
 
  # Authorization logs
  auth:
    enabled: true
 
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

Update your configuration file

The configuration file below is pre-configured to send data to your Logit.io Stack via Logstash.

Copy the configuration file below and overwrite the contents of filebeat.yml (for RPM installations this file can be found at /etc/filebeat/filebeat.yml).

Filebeat modules offer the quickest way to begin working with standard log formats. If you opt to configure Filebeat manually rather than utilizing modules, you'll do so by listing inputs in the filebeat.inputs section of filebeat.yml. These inputs detail how Filebeat discovers and handles input data.

###################### Logit.io Filebeat Configuration ########################
# ============================== Filebeat inputs ==============================
filebeat.inputs:
- type: filestream
  enabled: true
  id: my_unique_id
  paths:
    # REQUIRED CHANGE TO YOUR LOGS PATH
    - /var/log/*.log
  fields:
    type: logfile
    
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
  #reload.period: 10s
 
# ================================== Outputs ===================================
# ------------------------------ Logstash Output -------------------------------
output.logstash:
    hosts: ["@logstash.host:@logstash.sslPort"]
    loadbalance: true
    ssl.enabled: true
 
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

Validate Configuration

sudo filebeat -e -c filebeat.yml --strict.perms=false

If the YAML file is invalid, Filebeat will print an 'error loading config file' message with details on how to correct the problem. If you have issues starting Filebeat, see "How To Diagnose No Data In Stack" below to troubleshoot.
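You can also ask Filebeat to check the configuration and the Logstash connection directly using its built-in test subcommands:

# Check that the configuration file parses correctly
sudo filebeat test config

# Check that Filebeat can reach the Logstash output defined in the configuration
sudo filebeat test output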

Start Filebeat

To start Filebeat, run:

sudo chown root /etc/filebeat/filebeat.yml
sudo chown root /etc/filebeat/modules.d/system.yml
sudo chown root /usr/share/filebeat/module/system/syslog/manifest.yml
sudo chown root /usr/share/filebeat/module/system/auth/manifest.yml
sudo filebeat -e

You'll be running Filebeat as root, so you need to change ownership of the configuration file and any configurations enabled in the modules.d directory, or run Filebeat with --strict.perms=false as shown above.

Read more about how to change ownership.
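Because Filebeat was installed from the RPM package, you can also run it as a systemd service rather than in the foreground, which is a common choice once you are happy the configuration works:

# Start Filebeat now and on every boot
sudo systemctl enable filebeat
sudo systemctl start filebeat

# Follow the service logs to confirm it started cleanly
sudo journalctl -u filebeat -f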

Check Logit.io for your logs

Data should now have been sent to your Stack.

View My Data

If you don't see your data, take a look at How to diagnose no data in Stack below for help diagnosing common issues.

How to diagnose no data in Stack

If you don't see data appearing in your stack after following this integration, take a look at the troubleshooting guide for steps to diagnose and resolve the problem or contact our support team and we'll be happy to assist.

(Optional) Update Logstash Pipelines

All Logit.io stacks come pre-configured with popular Logstash pipelines. We recommend adding system-specific filters if you don't already have them, to ensure the enhanced dashboards and modules work correctly.

Edit Pipelines

Edit your Logstash Pipelines by choosing Stack > Settings > Logstash Pipelines.

if [fileset][module] == "system" {
  if [fileset][name] == "auth" {
    grok {
      match => { "message" => ["%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} %{DATA:[system][auth][ssh][method]} for (invalid user )?%{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]} port %{NUMBER:[system][auth][ssh][port]} ssh2(: %{GREEDYDATA:[system][auth][ssh][signature]})?",
                "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} user %{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]}",
                "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: Did not receive identification string from %{IPORHOST:[system][auth][ssh][dropped_ip]}",
                "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sudo(?:\[%{POSINT:[system][auth][pid]}\])?: \s*%{DATA:[system][auth][user]} :( %{DATA:[system][auth][sudo][error]} ;)? TTY=%{DATA:[system][auth][sudo][tty]} ; PWD=%{DATA:[system][auth][sudo][pwd]} ; USER=%{DATA:[system][auth][sudo][user]} ; COMMAND=%{GREEDYDATA:[system][auth][sudo][command]}",
                "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} groupadd(?:\[%{POSINT:[system][auth][pid]}\])?: new group: name=%{DATA:system.auth.groupadd.name}, GID=%{NUMBER:system.auth.groupadd.gid}",
                "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} useradd(?:\[%{POSINT:[system][auth][pid]}\])?: new user: name=%{DATA:[system][auth][user][add][name]}, UID=%{NUMBER:[system][auth][user][add][uid]}, GID=%{NUMBER:[system][auth][user][add][gid]}, home=%{DATA:[system][auth][user][add][home]}, shell=%{DATA:[system][auth][user][add][shell]}$",
                "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} %{DATA:[system][auth][program]}(?:\[%{POSINT:[system][auth][pid]}\])?: %{GREEDYMULTILINE:[system][auth][message]}"] }
      pattern_definitions => {
        "GREEDYMULTILINE"=> "(.|\n)*"
      }
      remove_field => "message"
    }
    date {
      match => [ "[system][auth][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
    geoip {
      source => "[system][auth][ssh][ip]"
      target => "[system][auth][ssh][geoip]"
    }
  }
  else if [fileset][name] == "syslog" {
    grok {
      match => { "message" => ["%{SYSLOGTIMESTAMP:[system][syslog][timestamp]} %{SYSLOGHOST:[system][syslog][hostname]} %{DATA:[system][syslog][program]}(?:\[%{POSINT:[system][syslog][pid]}\])?: %{GREEDYMULTILINE:[system][syslog][message]}"] }
      pattern_definitions => { "GREEDYMULTILINE" => "(.|\n)*" }
      remove_field => "message"
    }
    date {
      match => [ "[system][syslog][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
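As a purely hypothetical illustration (the host, user and IP address below are made up), an sshd entry like the following would be matched by the first auth pattern above, populating [system][auth][ssh][event], [system][auth][ssh][method], [system][auth][user], [system][auth][ssh][ip] and [system][auth][ssh][port]:

Apr  3 12:34:56 rhel-host sshd[1234]: Accepted password for alice from 192.0.2.10 port 51234 ssh2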

Red Hat Logging Overview

Red Hat Enterprise Linux (RHEL) is the most popular commercial Linux distribution used in public cloud environments.

Red Hat Enterprise Linux is often compared to CentOS. The main difference between the two Linux distributions is that RHEL offers a much more comprehensive level of technical support to its users.

Red Hat Enterprise Linux generates a near-overwhelming number of log files under the /var/log/ directory, covering mail, cron, daemon, kernel & authentication events alongside the general system messages recorded in /var/log/messages.

Below are some of the most notable log directories that you'll commonly encounter.

If you are using custom-built kernels then you'll likely need to analyse the logs contained in /var/log/kern.log when it comes to troubleshooting your application.

The log messages found in /var/log/secure are relevant for monitoring the security of your Linux distribution as they contain authentication events, login attempts & authorisation log events.

You may also wish to consult /var/log/setroubleshoot/ to discover SELinux security context issues recorded in the log files created under this directory.
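If you want to inspect these files by hand before shipping them (for example, while verifying that Filebeat is picking up the right paths), a few quick commands such as the following can help; "Failed password" is just one common sshd failure message to search for:

# Follow general system messages as they arrive
sudo tail -f /var/log/messages

# Show recent failed SSH login attempts recorded in the secure log
sudo grep -i "failed password" /var/log/secure | tail -n 20

# List any SELinux troubleshooting reports
sudo ls -l /var/log/setroubleshoot/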

With over 25 different log directories, anyone would quickly find analysing their Red Hat system overwhelming without a log file analyser as part of a centralised log management solution.

Our highly available (HA) Red Hat log file analyser can be used to completely centralise and manage your log file data across Red Hat & any additional applications, servers & programming languages, giving you a single source of truth for monitoring across your organisation.

If you need any assistance with analysing your Red Hat logs we're here to help. Feel free to reach out by contacting the Logit.io support team via live chat & we'll be happy to help you start analysing your data.