March 12, 2024
This is a writeup on how to set up security alerts in Slack using the Graylog indexer with the Auditbeat and Filebeat agents.
All examples use Fedora-derived hosts (CentOS 7 and Rocky 9).
It shows how to send log events to the Graylog indexer using the Filebeat and Auditbeat agents.
I initially tried to do this with rsyslog and auditd but found the setup too complicated, especially with rsyslog's archaic syntax.
Also, Auditbeat parses the logs in a structured way, so you get meaningful data in Graylog once it's indexed, versus the unstructured logs from auditd + rsyslog (i.e., you don't need Graylog extractors to get meaningful fields and data; Filebeat and Auditbeat do this for you automatically).
This example shows how to set up alerts that notify your Slack channel on:
- SSH brute force attempts
- Sudo elevation by a non-sysadmin
- checksum changes to key files like /etc/passwd, /etc/shadow
- a new package being installed or removed on a host
---
# Prereqs
to run Graylog, set up an EC2 instance (or a physical host) with a minimum of 16 GB RAM; here I'm using an 8-CPU, 32 GB RAM instance
- create a new EBS volume to hold all Graylog data and configs, and mount it at "/graylog" (I'm using a 500 GB XFS volume for about 60 endpoints)
- install Docker Engine (and Docker Compose)
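for the data volume, the mount boils down to an fstab entry like this (the device name is an assumption; check lsblk for yours):

```text
# /etc/fstab: dedicated Graylog data volume
/dev/nvme1n1  /graylog  xfs  defaults,nofail  0 2
```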
# Part 1 — Graylog install
install and configure Graylog (I'm running it as a Docker Compose stack)
copy this docker-compose.yaml to /graylog:
```yaml
version: "3.8"

services:
  mongodb:
    image: "mongo:5.0"
    volumes:
      - ./mongo:/data/db
    networks:
      - graylog
    restart: "on-failure"

  opensearch:
    image: "opensearchproject/opensearch:2.4.0"
    environment:
      - "OPENSEARCH_JAVA_OPTS=-Xms1g -Xmx1g"
      - "bootstrap.memory_lock=true"
      - "discovery.type=single-node"
      - "action.auto_create_index=false"
      - "processbuffer_processors=10"
      - "plugins.security.ssl.http.enabled=false"
      - "plugins.security.disabled=true"
    ulimits:
      memlock:
        hard: -1
        soft: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - ./opensearch:/usr/share/opensearch/data
    networks:
      - graylog
    restart: "on-failure"

  graylog:
    hostname: "server"
    image: "${GRAYLOG_IMAGE:-graylog/graylog:5.1.6}"
    depends_on:
      opensearch:
        condition: "service_started"
      mongodb:
        condition: "service_started"
    entrypoint: "/usr/bin/tini -- wait-for-it opensearch:9200 -- /docker-entrypoint.sh"
    environment:
      GRAYLOG_NODE_ID_FILE: "/usr/share/graylog/data/config/node-id"
      GRAYLOG_PASSWORD_SECRET: "${GRAYLOG_PASSWORD_SECRET:?Please configure GRAYLOG_PASSWORD_SECRET in the .env file}"
      GRAYLOG_ROOT_PASSWORD_SHA2: "${GRAYLOG_ROOT_PASSWORD_SHA2:?Please configure GRAYLOG_ROOT_PASSWORD_SHA2 in the .env file}"
      GRAYLOG_HTTP_BIND_ADDRESS: "0.0.0.0:9000"
      GRAYLOG_HTTP_EXTERNAL_URI: "http://localhost:9000/"
      GRAYLOG_ELASTICSEARCH_HOSTS: "http://opensearch:9200"
      GRAYLOG_MONGODB_URI: "mongodb://mongodb:27017/graylog"
      GRAYLOG_SERVER_JAVA_OPTS: "-Xms16g -Xmx16g"
      GRAYLOG_TIMEZONE: "America/New_York"
      TZ: "America/New_York"
    networks:
      - graylog
    ports:
      - "5044:5044/tcp"   # Beats
      - "1514:1514/udp"   # Syslog
      - "1514:1514/tcp"   # Syslog
      - "5555:5555/tcp"   # RAW TCP
      - "5555:5555/udp"   # RAW UDP
      - "9000:9000/tcp"   # Server API
      - "12201:12201/tcp" # GELF TCP
      - "12201:12201/udp" # GELF UDP
    volumes:
      - ./graylog:/usr/share/graylog/data/data
      - ./journal:/usr/share/graylog/data/journal
    restart: "on-failure"

volumes:
  mongo:
  opensearch:
  graylog:
  journal:

networks:
  graylog:
    driver: bridge
```
generate the root password hash and create the .env file

```shell
cd /graylog
echo -n "battery-h@rse-stapler7" | shasum -a 256
67d9b149741194a86dae5abd84ecf0b978ce6836b3df825a6b393f20a4aaeb9c  -
```

copy the password and its hash into the .env file:

```
GRAYLOG_PASSWORD_SECRET="battery-h@rse-stapler7"
GRAYLOG_ROOT_PASSWORD_SHA2=67d9b149741194a86dae5abd84ecf0b978ce6836b3df825a6b393f20a4aaeb9c
```

secure the .env file:

```shell
chmod 600 .env
chown root:root .env
```
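note that shasum comes from the perl Digest::SHA package; on a minimal install it may be missing, in which case sha256sum from coreutils produces the identical digest:

```shell
# same digest as `shasum -a 256`, different tool
echo -n "battery-h@rse-stapler7" | sha256sum
```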
add a systemd unit for graylog

```shell
vi /etc/systemd/system/graylog.service
```

```ini
[Unit]
Description=Graylog service with docker compose
PartOf=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/graylog
ExecStart=/bin/docker compose up -d --remove-orphans
ExecStop=/bin/docker compose down

[Install]
WantedBy=multi-user.target
```
reload systemd and enable the unit (systemctl daemon-reload && systemctl enable --now graylog)
add a backup cron to back up your Graylog configs (searches, alerts, etc.); this dumps the Mongo data to /graylog/mongo/graylog.archive (note: no -it flags, since cron has no TTY)

```
0 2 * * 5 /bin/docker exec graylog-mongodb-1 mongodump --db=graylog --archive=./data/db/graylog.archive
```

to restore the Mongo data:

```shell
docker exec -i graylog-mongodb-1 mongorestore --archive=./data/db/graylog.archive
```
---
# Part 2 — Filebeat and Auditbeat
on each endpoint, install Filebeat and Auditbeat
here I'm using the 8.8.2 RPMs for both Filebeat and Auditbeat
## Configure Filebeat
configure Filebeat to send syslog data to your Graylog input (/etc/filebeat/filebeat.yml)
if you want to index a custom log path, add it as a filestream input
you can also add processors to drop events that match certain criteria, e.g., drop any event that doesn't contain the string "error"
```yaml
#filebeat.inputs:
#- type: filestream
#  id: <hostname>
#  enabled: true
#  paths:
#    - /path/to/custom/log

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ['^DBG']

# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ['^ERR', '^WARN']

#exclude_files: ['/var/log/some_log_to_exclude']

# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
#  level: debug
#  review: 1

### Multiline options

# Multiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation

# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#multiline.pattern: ^\[

# Defines if the pattern set under pattern should be negated or not. Default is false.
#multiline.negate: false

# Match can be set to "after" or "before". It is used to define if lines should be appended
# to a pattern that was (not) matched before or after, or as long as a pattern is not
# matched based on negate.
# Note: "after" is the equivalent to "previous" and "before" to "next" in Logstash
#multiline.match: after

#============================= Filebeat modules ===============================

#filebeat.config.modules:
  # Glob pattern for configuration loading
  #path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  #reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

filebeat.modules:
  - module: system
    enabled: true
    syslog.enabled: true
    auth.enabled: true

## Drop meaningless events
processors:
  - drop_event.when:
      or:
        - contains.message: "INFO"
        - contains.message: "DEBUG"
        - contains.message: "audit: audit_lost="

#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: error

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

output.logstash:
  hosts: ["<graylog hostname or IP>:5044"]  # Beats input on Graylog
```
once configured, restart the Filebeat service (systemctl restart filebeat); you can verify the config and the connection to Graylog first with filebeat test config and filebeat test output
## Configure Auditbeat
edit /etc/auditbeat/auditbeat.yml
here I'm adding specific audit rules to audit syscalls, modifications of key files like /etc/passwd and /etc/shadow, and login/logout events
all the data gets sent to the Beats TCP input on the Graylog side
```yaml
auditbeat.modules:

- module: auditd
  # Load audit rules from separate files. Same format as audit.rules(7).
  #audit_rule_files: [ '${path.config}/audit.rules.d/*.conf' ]
  audit_rules: |
    ## Log all user commands
    -a exit,always -F arch=b64 -S execve -k commands
    -a exit,always -F arch=b32 -S execve -k commands

    ## Collect information on kernel module loading and unloading
    -w /usr/sbin/insmod -p x -k modules
    -w /usr/sbin/rmmod -p x -k modules
    -w /usr/sbin/modprobe -p x -k modules
    -a always,exit -F arch=b64 -S init_module -S delete_module -k modules

    ## Record attempts to alter logon and logout events
    -w /var/log/tallylog -p wa -k logins
    -w /var/log/lastlog -p wa -k logins

    ## Record attempts to alter time-date
    -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time-change
    -a always,exit -F arch=b32 -S adjtimex -S settimeofday -S stime -k time-change
    -a always,exit -F arch=b64 -S clock_settime -k time-change
    -a always,exit -F arch=b32 -S clock_settime -k time-change
    -w /etc/localtime -p wa -k time-change

    ## Record modification of User/Group information
    -w /etc/group -p wa -k identity
    -w /etc/passwd -p wa -k identity
    -w /etc/gshadow -p wa -k identity
    -w /etc/shadow -p wa -k identity
    -w /etc/security/opasswd -p wa -k identity

    ## Record network changes
    -a always,exit -F arch=b64 -S sethostname -S setdomainname -k system-locale
    -a always,exit -F arch=b32 -S sethostname -S setdomainname -k system-locale
    -w /etc/issue -p wa -k system-locale
    -w /etc/issue.net -p wa -k system-locale
    -w /etc/hosts -p wa -k system-locale
    -w /etc/sysconfig/network -p wa -k system-locale
    -w /etc/sysconfig/network-scripts/ -p wa -k system-locale

    ## Record login/logout events
    -w /var/run/faillock/ -p wa -k logins
    -w /var/run/utmp -p wa -k session
    -w /var/log/wtmp -p wa -k logins
    -w /var/log/btmp -p wa -k logins

    ## Record changes to file permissions and attributes
    -a always,exit -F arch=b64 -S chmod -S fchmod -S fchmodat -F auid>=1000 -F auid!=4294967295 -k perm_mod
    -a always,exit -F arch=b32 -S chmod -S fchmod -S fchmodat -F auid>=1000 -F auid!=4294967295 -k perm_mod
    -a always,exit -F arch=b64 -S chown -S fchown -S fchownat -S lchown -F auid>=1000 -F auid!=4294967295 -k perm_mod
    -a always,exit -F arch=b32 -S chown -S fchown -S fchownat -S lchown -F auid>=1000 -F auid!=4294967295 -k perm_mod
    -a always,exit -F arch=b64 -S setxattr -S lsetxattr -S fsetxattr -S removexattr -S lremovexattr -S fremovexattr -F auid>=1000 -F auid!=4294967295 -k perm_mod
    -a always,exit -F arch=b32 -S setxattr -S lsetxattr -S fsetxattr -S removexattr -S lremovexattr -S fremovexattr -F auid>=1000 -F auid!=4294967295 -k perm_mod

    ## Record unauthorized file access attempts
    -a always,exit -F arch=b64 -S creat -S open -S openat -S truncate -S ftruncate -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -k access
    -a always,exit -F arch=b32 -S creat -S open -S openat -S truncate -S ftruncate -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -k access
    -a always,exit -F arch=b64 -S creat -S open -S openat -S truncate -S ftruncate -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -k access
    -a always,exit -F arch=b32 -S creat -S open -S openat -S truncate -S ftruncate -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -k access

    ## Unauthorized access attempts.
    -a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -k access
    -a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access

    ## Record mount attempts
    -a always,exit -F arch=b64 -S mount -F auid>=1000 -F auid!=4294967295 -k mounts
    -a always,exit -F arch=b32 -S mount -F auid>=1000 -F auid!=4294967295 -k mounts

    ## Record file deletions and renaming
    -a always,exit -F arch=b64 -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete
    -a always,exit -F arch=b32 -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete

    ## Record changes to system administration scope
    -w /etc/sudoers -p wa -k scope
    -w /etc/sudoers.d/ -p wa -k scope

    ## Record sudoers actions
    -w /var/log/sudo.log -p wa -k actions

- module: file_integrity
  paths:
  - /bin
  - /usr/bin
  - /sbin
  - /usr/sbin
  - /etc

- module: system
  datasets:
    - host     # General host information, e.g. uptime, IPs
    - login    # User logins, logouts, and system boots
    - package  # Installed, updated, and removed packages
    - process  # Started and stopped processes
    #- socket  # Opened and closed sockets
    - user     # User information

  # How often datasets send state updates with the
  # current state of the system (e.g. all currently
  # running processes, all open sockets).
  state.period: 12h

  # Enabled by default. Auditbeat will read password fields in
  # /etc/passwd and /etc/shadow and store a hash locally to
  # detect any changes.
  user.detect_password_changes: true

  # File patterns of the login record files.
  login.wtmp_file_pattern: /var/log/wtmp*
  login.btmp_file_pattern: /var/log/btmp*

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
setup.template.enabled: true
setup.template.overwrite: true

#================================ General =====================================
setup.dashboards.enabled: false

output.logstash:
  hosts: ["<graylog IP or hostname>:5044"]  # Beats input on Graylog

#================================ Processors ==================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - drop_event.when:
      or:
        - equals.user.name: netdata
        - equals.user.name: monit
        - equals.process.name: monit
        - equals.process.name: pmdaproc
  #- add_host_metadata: ~

#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: error
```
under the processors section, I'm dropping any event that matches certain field values
for example, I don't want any events generated by a custom process called monit, or any events from netdata (since it's a monitoring agent and very chatty); this cuts down on TCP connections and Graylog disk storage
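the same pattern extends to any other noisy account or process you run; as a sketch (the second service name here is hypothetical):

```yaml
processors:
  - drop_event.when:
      or:
        - equals.user.name: netdata
        - equals.process.name: backup-agent   # hypothetical chatty service
```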
restart Auditbeat; if it complains that the audit daemon is already running, stop the auditd service first (auditd typically refuses systemctl stop, so use service auditd stop)
---
# Part 3 — Configure Graylog
## Configure Input
in Graylog, go to System > Inputs and launch a new "Beats" input
give it a port; here I'm using 5044
```
bind_address: 0.0.0.0
charset_name: UTF-8
no_beats_prefix: false
number_worker_threads: 8
override_source: <empty>
port: 5044
recv_buffer_size: 1048576
tcp_keepalive: false
tls_cert_file: <empty>
tls_client_auth: disabled
tls_client_auth_cert_file: <empty>
tls_enable: false
tls_key_file: <empty>
tls_key_password: ********
```
sanity check your input to see if events are coming in from your endpoints' Beats agents
go to Search and press the Play button, and check that log messages are coming in
once you see data, move on to the next step
---
## Indexes
once data is in, let's create index sets so we can use Graylog's rollover policies; we want to remove all syslog data after X amount of time, otherwise your EBS volume will fill up very quickly
we will create 2 new index sets, filebeat-index and auditbeat-index
the filebeat index gets a shorter rotation since we don't care about keeping raw log messages; it's mostly used for alerting
the auditbeat index will be stored for 6 months in case we need to check audit logs
go to System > Indices > Create index set, configure the rollover parameters
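as a sketch (the exact values are assumptions; tune them to your retention needs and disk size):

```text
filebeat-index:  rotation strategy = Index Time, period = P7D,  retention = Delete, max indices = 4   (~1 month)
auditbeat-index: rotation strategy = Index Time, period = P30D, retention = Delete, max indices = 6   (~6 months)
```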
---
## Streams
to break the data apart logically, create a stream (which is basically a filter for your log data)
Filebeat stream
create a first stream called “filebeat” that will receive every message arriving on the Beats input
go to Streams > create new stream
Title = filebeat
index set = filebeat-index
check on “remove matches from default stream”
add a new stream rule that tells Graylog to route an event to the “filebeat” stream if its Input matches the “Beats” input
to get the ID of your Beats input, go to System > Inputs
you should see events coming into your “filebeat” stream now
Auditbeat stream
do the same for Auditbeat: create a new stream, and for the stream rule add
beats_type must match exactly auditbeat
make sure the auditbeat stream sends events to auditbeat-index
SUDO stream
this stream will show all sudo elevation attempts on your endpoints
one way to use this is to alert when a user or service that is not supposed to have root access tries to elevate to root
create new stream
Title: sudo
description: sudo elevation attempts
index-set: syslog-index
check on “remove from default stream”
add a Stream Rule
application_name must match exactly sudo
start the stream and do a search on it, you should see all sudo events coming in
the message we are looking for is the sudo elevation line:

```
mreider : TTY=pts/7 ; PWD=/home/mreider ; USER=root ; COMMAND=/bin/su
```

this contains lots of info: who is sudoing, what command they are running, where they are sudoing from, and what user they are sudoing to
to parse this message in a more structured manner, create an extractor to pull specific fields out of it (see the Extractors section at the end)
---
## Configure Slack alerts
> Package install or removal alert
to set up an alert whenever a package is installed or removed, you will parse the auditbeat stream for specific event types
install a package on a test server and test the search query from the Search console; you should see hits for the package that was installed
create a new Slack notification called “package_installed_or_removed” (under Alerts > Notifications) pointing at your Slack webhook URL
Filter & Aggregation search:

```
auditbeat_event_action: package_installed OR auditbeat_event_action: package_removed
```
custom message:

```
[Graylog]
${if backlog}
${foreach backlog message}
${message.message}
source: ${message.source}
${end}
${end}
Timestamp: ${event.timestamp}
```
create a new Event Definition “Package installation or removal” with stream = auditbeat
on the Notifications tab, add your notification from the previous step
generate some events by installing or removing a yum package;
your Slack channel should show an alert on any package install or removal
---
> File Integrity alert
This event will alert on any change to core Linux security files, including deletion, modification, and creation.
create a new Event Definition that matches this query:

```
auditbeat_event_module: file_integrity AND auditbeat_event_type: change AND auditbeat_file_path: ("/etc/passwd" OR "/etc/passwd-" OR "/etc/shadow" OR "/etc/shadow-" OR "/etc/gshadow" OR "/etc/gshadow-" OR "/etc/group" OR "/etc/sudoers")
```

this searches for all key security files that were modified in any way
stream: auditbeat
notification:

```
${if backlog}
${foreach backlog message}
${message.timestamp} :: ${message.source}
---------------------------------
CORE SECURITY FILE was changed!
file changed: ${message.fields.auditbeat_file_path}
mtime: ${message.fields.auditbeat_file_mtime}
event type: ${message.fields.auditbeat_event_action}
file mode: ${message.fields.auditbeat_file_mode}
UID: ${message.fields.auditbeat_file_uid}
GID: ${message.fields.auditbeat_file_gid}
---------------------------------
${end}
${end}
```
this will produce a Slack alert with the file path, mtime, and ownership fields filled in
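under the hood the file_integrity module hashes the watched files; here's a quick local illustration (throwaway temp file, sample lines are made up) of what flips the change event:

```shell
# any content change flips the file's hash, which is what file_integrity detects
f=$(mktemp)
echo "root:x:0:0:root:/root:/bin/bash" > "$f"
h1=$(sha256sum "$f" | awk '{print $1}')
echo "intruder:x:0:0::/root:/bin/bash" >> "$f"   # simulate tampering
h2=$(sha256sum "$f" | awk '{print $1}')
[ "$h1" != "$h2" ] && echo "changed"
rm -f "$f"
```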
> SSH brute force alert
this event produces an alert when anyone repeatedly fails to ssh into a server
create a new Event Definition
search query:

```
message:"Invalid user" AND message: "from" AND NOT "error"
```
and the notification:

```
[Graylog]
SSH Brute Force attempt !!
${if backlog}
${foreach backlog message}
Message: ${message.message}
Source: ${message.source}
${end}
${end}
```
each failed attempt then shows up in Slack with the offending message and source host
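to see what the query keys on, here is a typical sshd failure line (the exact format is an assumption; check your own /var/log/secure) matched with the same two terms:

```shell
# the event definition matches on "Invalid user" and "from"
line='sshd[4242]: Invalid user admin from 203.0.113.5 port 51121'
echo "$line" | grep -q 'Invalid user' && echo "$line" | grep -q 'from' && echo "match"
```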
---
> Sudo elevation alert
this alert fires when any user who is not authorized elevates to root; that usually means there is a hole in your sudoers config or permissions.
create a new Event Definition with this search, replacing user1 and user2 with your administrator account names:

```
auditbeat_process_executable: "/usr/bin/su" AND auditbeat_auditd_data_op: "PAM:session_open" AND auditbeat_user_effective_name: root AND NOT auditbeat_user_audit_name: ("user1" OR "user2")
```
Notification syntax:

```
${if backlog}
${foreach backlog message}
${message.timestamp} :: ${message.source}
---------------------------------
SUDO ELEVATION BY NON-SYSADMIN!!!
${message.fields.auditbeat_user_audit_name} > ${message.fields.auditbeat_user_effective_name}
${end}
${end}
```
the final alert shows who elevated and to which user
---
You should now have a good overview of your entire infra, with meaningful alerts on security and any other metric you deem valuable.
You can extend this to cover any use case based on the events you're getting from your Beats agents.
# Optional: Extractors
you can use field extractors to generate new fields from your log events based on some condition. This is useful if you're getting events in raw text form and need to extract certain fields to use in your queries.
for example, when ingesting sudo events, you can generate custom fields from the unstructured message
create a new sudo_from_user extractor (type Regex) that will take the sudo message and generate a new field called “sudo_from_user”
the extractor uses a regex to match the message and pull out a value:

```
^\s*([^:\s]+)\s*:
```

it will only attempt to extract if the message contains the string “COMMAND=”
save the extractor; you should now see your new field “sudo_from_user” show up in events
create an additional extractor for sudo_to_user
regex = USER=([^;\s]+)
store as field = sudo_to_user
create a 3rd extractor to get sudo_cmd
regex = COMMAND=([^;\s]+.*)
you should now have 3 custom fields on all your sudo events
- sudo_from_user
- sudo_to_user
- sudo_cmd
to update an extractor, go to System > Inputs > Manage extractors
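you can sanity-check the three regexes locally against the sample message with GNU grep's PCRE mode (-P); since grep -o can't print capture groups, the equivalent \K form is used here:

```shell
# run the three extractor regexes against the sample sudo message
msg='mreider : TTY=pts/7 ; PWD=/home/mreider ; USER=root ; COMMAND=/bin/su'
echo "$msg" | grep -oP '^\s*\K[^:\s]+(?=\s*:)'   # sudo_from_user -> mreider
echo "$msg" | grep -oP 'USER=\K[^;\s]+'          # sudo_to_user   -> root
echo "$msg" | grep -oP 'COMMAND=\K[^;\s]+.*'     # sudo_cmd       -> /bin/su
```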