CentOS System Log Files

Ship system log files from CentOS to Logstash

Configure Filebeat to ship logs from CentOS systems to Logstash and Elasticsearch.

Install Integration

Please click on the Install Integration button to configure your stack for this source.

Install Filebeat

To get started you will need to install Filebeat. To do this you have two main options:

  • Install the Filebeat RPM package from the Elastic downloads page
  • Or download the tar.gz archive and extract it manually

To successfully install Filebeat and set up the required service you will need root (or sudo) access.

If you have chosen the RPM package:

  • Download the package and install it with rpm.
  • The configuration files will be placed in /etc/filebeat and Filebeat can be managed as a systemd service.

If you have chosen the tar.gz archive:

  • Extract the contents of the archive into a folder of your choice and rename the extracted folder to filebeat if you prefer a shorter path.
  • Change directory into the extracted folder; the remaining steps in this guide are run from that folder.

Example download and install commands for both options are shown below.
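The exact commands depend on which option and which Filebeat version you choose. As a minimal sketch, assuming Filebeat 8.12.2 (the same version referenced by the system module documentation later in this guide) on a 64-bit x86_64 host:

# Option 1: download and install the RPM package
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.12.2-x86_64.rpm
sudo rpm -vi filebeat-8.12.2-x86_64.rpm

# Option 2: download and extract the tar.gz archive
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.12.2-linux-x86_64.tar.gz
tar xzvf filebeat-8.12.2-linux-x86_64.tar.gz
cd filebeat-8.12.2-linux-x86_64

# Confirm the install by printing the Filebeat version
# (use 'filebeat version' instead if you installed the RPM package)
./filebeat version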

Enable The System Module

There are several built-in Filebeat modules you can use. You will need to enable the system module.

Change directory to the location where Filebeat was extracted and run the following commands:

sudo ./filebeat modules list
sudo ./filebeat modules enable system
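Enabling the module simply renames modules.d/system.yml.disabled to modules.d/system.yml, so you can confirm it is active with a quick listing, for example:

ls modules.d/system.yml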

Navigate to the modules.d folder, copy the snippet below and replace the contents of the system.yml module file:

# Module: system
# Docs: https://www.elastic.co/guide/en/beats/filebeat/8.12.2/filebeat-module-system.html

- module: system
  # Syslog
  syslog:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

  # Authorization logs
  auth:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
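On CentOS the system module's default paths normally pick up /var/log/messages* for syslog and /var/log/secure* for authentication events, so var.paths can usually stay commented out. If your logs are written somewhere non-standard you can point the module at them explicitly; a minimal sketch using the default CentOS locations would look like this:

- module: system
  syslog:
    enabled: true
    # Explicit path; CentOS normally writes syslog messages here
    var.paths: ["/var/log/messages"]
  auth:
    enabled: true
    # Explicit path; CentOS normally writes authentication events here
    var.paths: ["/var/log/secure"]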

Update your configuration file

The configuration file below is pre-configured to send data to your Logit.io Stack via Logstash.

Copy the configuration file below and overwrite the contents of filebeat.yml (this file can be found in the folder where you installed Filebeat in the first step).

Filebeat modules offer the quickest way to begin working with standard log formats. If you opt to configure Filebeat manually rather than utilizing modules, you'll do so by listing inputs in the filebeat.inputs section of filebeat.yml. These inputs detail how Filebeat discovers and handles input data.

###################### Logit.io Filebeat Configuration ########################
# ============================== Filebeat inputs ==============================
filebeat.inputs:
- type: filestream
  enabled: true
  id: my_unique_id
  paths:
    # REQUIRED CHANGE TO YOUR LOGS PATH
    - /var/log/*.log
  fields:
    type: logfile
    
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
  #reload.period: 10s
 
# ================================== Outputs ===================================
# ------------------------------ Logstash Output -------------------------------
output.logstash:
    hosts: ["@logstash.host:@logstash.sslPort"]
    loadbalance: true
    ssl.enabled: true
 
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

Validate Configuration

sudo ./filebeat -e -c filebeat.yml --strict.perms=false

If the YAML file is invalid, Filebeat will print an "error loading config file" message with details on how to correct the problem. If you have issues starting Filebeat, see "How to diagnose no data in Stack" below to troubleshoot.
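Filebeat's built-in test subcommands are also useful at this point; for example, run the following from the Filebeat folder:

# Check that filebeat.yml and the enabled modules parse correctly
sudo ./filebeat test config -c filebeat.yml

# Check that Filebeat can reach the Logstash output over SSL
sudo ./filebeat test output -c filebeat.yml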

Start filebeat

To start Filebeat, run:

sudo chown root filebeat.yml 
sudo chown root modules.d/system.yml
sudo chown root module/system/syslog/manifest.yml
sudo chown root module/system/auth/manifest.yml
sudo ./filebeat -e

You'll be running filebeat as root, so you need to change ownership of the configuration file and any configurations enabled in the modules.d directory, or run filebeat with --strict.perms=false as shown above.

Read more about how to change ownership.
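If you installed Filebeat from the RPM package rather than the tar.gz archive, it is usually simpler to run it as a systemd service than to launch ./filebeat in the foreground; for example:

# RPM installs register a filebeat systemd unit
sudo systemctl enable filebeat
sudo systemctl start filebeat

# Confirm the service is running
sudo systemctl status filebeat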

Check Logit.io for your logs

Data should now have been sent to your Stack.

View My Data

If you don't see your data, take a look at How to diagnose no data in Stack below to troubleshoot common issues.

How to diagnose no data in Stack

If you don't see data appearing in your Stack after following the steps, visit the Help Centre guide for steps to diagnose no data appearing in your Stack or Chat to support now.
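Before reaching out it is often worth checking Filebeat's own log output for connection or certificate errors. If you are running ./filebeat -e in the foreground these messages are printed straight to the console; for a systemd-managed (RPM) install you can follow the service logs instead, for example:

# Follow Filebeat's service logs and watch for output/connection errors
sudo journalctl -u filebeat -f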

(Optional) Update Logstash Pipelines

All Logit.io stacks come pre-configured with popular Logstash Pipelines. We would recommend adding system-specific filters if you don't already have them, to ensure the enhanced dashboards and modules work correctly.

Edit Pipelines

Edit your Logstash Pipelines by choosing Stack > Settings > Logstash Pipelines.
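The snippet below is a Logstash filter conditional; depending on how your pipeline is laid out it needs to sit inside the filter { ... } section alongside any filters you already have, roughly like this:

filter {
  # ... your existing filters ...

  # paste the system module conditional shown below here
}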

if [fileset][module] == "system" {
  if [fileset][name] == "auth" {
    grok {
      match => { "message" => ["%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} %{DATA:[system][auth][ssh][method]} for (invalid user )?%{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]} port %{NUMBER:[system][auth][ssh][port]} ssh2(: %{GREEDYDATA:[system][auth][ssh][signature]})?",
                "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} user %{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]}",
                "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: Did not receive identification string from %{IPORHOST:[system][auth][ssh][dropped_ip]}",
                "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sudo(?:\[%{POSINT:[system][auth][pid]}\])?: \s*%{DATA:[system][auth][user]} :( %{DATA:[system][auth][sudo][error]} ;)? TTY=%{DATA:[system][auth][sudo][tty]} ; PWD=%{DATA:[system][auth][sudo][pwd]} ; USER=%{DATA:[system][auth][sudo][user]} ; COMMAND=%{GREEDYDATA:[system][auth][sudo][command]}",
                "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} groupadd(?:\[%{POSINT:[system][auth][pid]}\])?: new group: name=%{DATA:system.auth.groupadd.name}, GID=%{NUMBER:system.auth.groupadd.gid}",
                "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} useradd(?:\[%{POSINT:[system][auth][pid]}\])?: new user: name=%{DATA:[system][auth][user][add][name]}, UID=%{NUMBER:[system][auth][user][add][uid]}, GID=%{NUMBER:[system][auth][user][add][gid]}, home=%{DATA:[system][auth][user][add][home]}, shell=%{DATA:[system][auth][user][add][shell]}$",
                "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} %{DATA:[system][auth][program]}(?:\[%{POSINT:[system][auth][pid]}\])?: %{GREEDYMULTILINE:[system][auth][message]}"] }
      pattern_definitions => {
        "GREEDYMULTILINE"=> "(.|\n)*"
      }
      remove_field => "message"
    }
    date {
      match => [ "[system][auth][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
    geoip {
      source => "[system][auth][ssh][ip]"
      target => "[system][auth][ssh][geoip]"
    }
  }
  else if [fileset][name] == "syslog" {
    grok {
      match => { "message" => ["%{SYSLOGTIMESTAMP:[system][syslog][timestamp]} %{SYSLOGHOST:[system][syslog][hostname]} %{DATA:[system][syslog][program]}(?:\[%{POSINT:[system][syslog][pid]}\])?: %{GREEDYMULTILINE:[system][syslog][message]}"] }
      pattern_definitions => { "GREEDYMULTILINE" => "(.|\n)*" }
      remove_field => "message"
    }
    date {
      match => [ "[system][syslog][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

CentOS Overview

CentOS is a Linux distribution & computing platform that is often compared with Red Hat Enterprise Linux (RHEL).

The main difference between the two is that CentOS lacks the high level of technical support that is supplied as part of the RHEL package.

CentOS is built off of the Red Hat Enterprise Linux's open source code base, explaining their commonalities and relative compatibility.

Due to this similarity their library versions are identical. This means that binaries that work on RHEL will work on CentOS. If you're using their administration tools you might notice incompatibilities between the two distributions, as minor patches are updated at different rates. Large patches & major releases for CentOS are released sporadically & infrequently compared to other Linux distributions (such as Ubuntu & Debian).

CentOS users are typically individuals & businesses that don't require strong levels of support, certification & training to use this enterprise class Linux distribution successfully.

Our built-in CentOS log file analyser is included in our log management platform and is built upon the open source tools Elasticsearch, Logstash & Kibana to ease the processing of large amounts of Linux server data for troubleshooting & root cause analysis.

Logit.io can be used to centralise your Linux log data & alerts on errors to monitor your operating system (OS). The platform can also be used to view logs within Kibana for detailed visualisations & reporting.

If you need any assistance with analysing your CentOS logs we're here to help. Feel free to reach out by contacting the Logit.io support team via live chat & we'll be happy to help you get started.