Sentrilite CLI: A command-line tool to collect logs and events from Sentrilite agents running on multiple nodes.

Sentrilite-CLI Architecture Workflow

The CLI workflow involves the following components and data flow:

  • AWS, Azure, GCP: Public cloud environments with Sentrilite agents deployed
  • On-Prem: On-premises infrastructure with Sentrilite agent
  • Linux Servers: Individual Linux servers with Sentrilite agents
  • Kafka: Distributed streaming platform for real-time event streaming
  • Fluentd: Unified logging layer for data collection and forwarding
  • SIEM: Security Information and Event Management systems for security monitoring
  • Prometheus: Open-source monitoring and alerting toolkit for metrics collection
  • Grafana: Open-source analytics and visualization platform for metrics and logs
  • Sentrilite Command Line Agent: CLI tool that collects eBPF kernel traces, events, and alerts from all Sentrilite agents via WebSocket, with support for streaming to Kafka/Fluentd and integration with SIEM systems
  • Data Flow: Sentrilite agents stream telemetry data to the CLI agent, which can forward data to Kafka, Fluentd, or SIEM systems for further processing and analysis

Installation

Using Make

make 

The binary will be created as sentrilite-cli.

Install to System

make install

This installs the binary to /usr/local/bin/sentrilite-cli.

Usage

./sentrilite-cli --nodes <node_file> [--log <output>] [--siem <siem_type>] [--stream <stream_type>]

Parameters

  • --nodes (required): Path to CSV file containing node information in format: node_ip,group
  • --log (optional): Output destination:
    • stdout - Output to console
    • dir:<path> - Output to directory (e.g., dir:/var/log/sentrilite)
    • s3://bucket/path - Output to S3 bucket (e.g., s3://my-bucket/logs/)
    • If not specified, defaults to ./log directory with files named node_ip_group.json
  • --siem (optional): SIEM integration type:
    • splunk - Format for Splunk HEC
    • wazuh - Format for Wazuh
    • elastic - Format for Elasticsearch
  • --stream (optional): Stream events to external systems:
    • fluentd - Stream to Fluentd
    • kafka - Stream to Kafka

Node File Format

Create a CSV file with node information:

10.0.0.1,production
10.0.0.2,production
10.0.0.3,staging
10.0.0.4,development
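
For reference, here is a minimal Go sketch of parsing this file into (node_ip, group) pairs. The type and function names are illustrative, not the CLI's actual implementation:

package main

import (
    "encoding/csv"
    "fmt"
    "os"
)

// node pairs an agent address with its group label, mirroring the
// node_ip,group layout shown above.
type node struct {
    IP    string
    Group string
}

func loadNodes(path string) ([]node, error) {
    f, err := os.Open(path)
    if err != nil {
        return nil, err
    }
    defer f.Close()

    records, err := csv.NewReader(f).ReadAll()
    if err != nil {
        return nil, err
    }

    nodes := make([]node, 0, len(records))
    for _, rec := range records {
        if len(rec) < 2 {
            return nil, fmt.Errorf("malformed line: %v", rec)
        }
        nodes = append(nodes, node{IP: rec[0], Group: rec[1]})
    }
    return nodes, nil
}

func main() {
    nodes, err := loadNodes("nodes.csv")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    for _, n := range nodes {
        fmt.Printf("%s (%s)\n", n.IP, n.Group)
    }
}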

Output Formats

Raw JSON Log Format

{
    "timestamp": "2025-11-08T20:57:54Z",
    "node_ip": "10.0.0.1",
    "group": "aws",
    "data": {
        "arg1": "init",
        "cmd": "/proc/self/fd/6",
        "comm": "runc",
        "cpu": 0,
        "ip": "10.0.0.1",
        "msg_type": 4,
        "msg_type_str": "EXECVE",
        "pid": 2402803,
        "ppid": 0,
        "risk_level": 3,
        "tags": [
            "privilege-escalation",
            "privileged-user-activity"
        ],
        "timestamp": 1762635474.9498317,
        "triggered_rule": {
            "match_key": "user",
            "match_values": [
                "root"
            ],
            "risk_level": 3,
            "tags": [
                "privileged-user-activity"
            ]
        },
        "uid": 0,
        "user": "root"
    }
}

Raw JSON Alert Format

{
    "arg1": "init",
    "cmd": "/proc/self/fd/6",
    "comm": "runc",
    "cpu": 1,
    "ip": "127.0.0.1",
    "msg_type": 4,
    "msg_type_str": "EXECVE",
    "pid": 2406918,
    "ppid": 0,
    "risk_level": 3,
    "tags": [
        "privilege-escalation",
        "privileged-user-activity"
    ],
    "timestamp": 1762636194.94153,
    "triggered_rule": {
        "match_key": "user",
        "match_values": [
            "root"
        ],
        "risk_level": 3,
        "tags": [
            "privileged-user-activity"
        ]
    },
    "uid": 0,
    "user": "root"
}
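
If you post-process these files with your own Go tooling, entries can be unmarshalled with structs along these lines. This is a sketch derived from the samples above; field coverage is partial and only fields visible in the samples are included:

package main

import (
    "encoding/json"
    "fmt"
)

// TriggeredRule mirrors the "triggered_rule" object in the samples above.
type TriggeredRule struct {
    MatchKey    string   `json:"match_key"`
    MatchValues []string `json:"match_values"`
    RiskLevel   int      `json:"risk_level"`
    Tags        []string `json:"tags"`
}

// Event covers the per-event fields shared by the log "data" object and
// the raw alert format.
type Event struct {
    Arg1          string         `json:"arg1"`
    Cmd           string         `json:"cmd"`
    Comm          string         `json:"comm"`
    CPU           int            `json:"cpu"`
    IP            string         `json:"ip"`
    MsgType       int            `json:"msg_type"`
    MsgTypeStr    string         `json:"msg_type_str"`
    PID           int            `json:"pid"`
    PPID          int            `json:"ppid"`
    RiskLevel     int            `json:"risk_level"`
    Tags          []string       `json:"tags"`
    Timestamp     float64        `json:"timestamp"`
    TriggeredRule *TriggeredRule `json:"triggered_rule,omitempty"`
    UID           int            `json:"uid"`
    User          string         `json:"user"`
}

// LogEntry is the envelope used by the log format: node metadata plus the event.
type LogEntry struct {
    Timestamp string `json:"timestamp"`
    NodeIP    string `json:"node_ip"`
    Group     string `json:"group"`
    Data      Event  `json:"data"`
}

func main() {
    raw := []byte(`{"timestamp":"2025-11-08T20:57:54Z","node_ip":"10.0.0.1","group":"aws","data":{"comm":"runc","pid":2402803,"user":"root"}}`)
    var entry LogEntry
    if err := json.Unmarshal(raw, &entry); err != nil {
        panic(err)
    }
    fmt.Println(entry.NodeIP, entry.Data.Comm, entry.Data.User)
}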

Usage Examples

Default output (JSON files in ./log directory)

./sentrilite-cli --nodes nodes.csv

Output to stdout

./sentrilite-cli --nodes nodes.csv --log stdout

Output to custom directory

./sentrilite-cli --nodes nodes.csv --log dir:/var/log/sentrilite

Output to S3

./sentrilite-cli --nodes nodes.csv --log s3://my-bucket/sentrilite-logs/

With Splunk integration

./sentrilite-cli --nodes nodes.csv --siem splunk

With Wazuh integration

./sentrilite-cli --nodes nodes.csv --siem wazuh

Stream to Fluentd

./sentrilite-cli --nodes nodes.csv --stream fluentd

Stream to Kafka

./sentrilite-cli --nodes nodes.csv --stream kafka

Combine streaming with log output

./sentrilite-cli --nodes nodes.csv --log dir:/var/log/sentrilite --stream fluentd

SIEM Configuration

When using --siem, create a configuration file named <siem_type>.conf in the current directory.

Splunk Configuration (splunk.conf)

{
  "hec_endpoint": "https://splunk.example.com:8088/services/collector",
  "hec_token": "your-hec-token",
  "index": "sentrilite",
  "source": "sentrilite-agent",
  "sourcetype": "json"
}
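
For context, here is a minimal Go sketch of forwarding one event to Splunk HEC using the fields from splunk.conf. The HEC envelope (an "event" payload plus index/source/sourcetype metadata, authenticated with an "Authorization: Splunk <token>" header) is standard Splunk behavior; the helper names below are illustrative and not necessarily how the CLI builds its requests:

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
)

// hecEvent is the standard Splunk HEC envelope; "event" carries the payload.
type hecEvent struct {
    Event      any    `json:"event"`
    Index      string `json:"index,omitempty"`
    Source     string `json:"source,omitempty"`
    Sourcetype string `json:"sourcetype,omitempty"`
}

func sendToSplunk(endpoint, token string, evt hecEvent) error {
    body, err := json.Marshal(evt)
    if err != nil {
        return err
    }
    req, err := http.NewRequest(http.MethodPost, endpoint, bytes.NewReader(body))
    if err != nil {
        return err
    }
    // HEC authenticates with a "Splunk <token>" Authorization header.
    req.Header.Set("Authorization", "Splunk "+token)
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode >= 300 {
        return fmt.Errorf("splunk HEC returned %s", resp.Status)
    }
    return nil
}

func main() {
    evt := hecEvent{
        Event:      map[string]any{"comm": "runc", "user": "root", "risk_level": 3},
        Index:      "sentrilite",
        Source:     "sentrilite-agent",
        Sourcetype: "json",
    }
    if err := sendToSplunk("https://splunk.example.com:8088/services/collector", "your-hec-token", evt); err != nil {
        fmt.Println(err)
    }
}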

Wazuh Configuration (wazuh.conf)

{
  "manager_host": "wazuh-manager.example.com",
  "manager_port": 1514,
  "protocol": "tcp"
}

Elasticsearch Configuration (elastic.conf)

{
  "endpoint": "https://elasticsearch.example.com:9200",
  "index": "sentrilite-logs",
  "username": "elastic",
  "password": "your-password"
}
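
Similarly, a hedged sketch of indexing one document through the Elasticsearch document API using the elastic.conf fields; the exact index naming and batching the CLI uses may differ:

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
)

// indexDocument posts one JSON document to the Elasticsearch index API
// using the fields from elastic.conf (endpoint, index, username, password).
func indexDocument(endpoint, index, username, password string, doc map[string]any) error {
    body, err := json.Marshal(doc)
    if err != nil {
        return err
    }
    url := fmt.Sprintf("%s/%s/_doc", endpoint, index)
    req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
    if err != nil {
        return err
    }
    req.SetBasicAuth(username, password)
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode >= 300 {
        return fmt.Errorf("elasticsearch returned %s", resp.Status)
    }
    return nil
}

func main() {
    doc := map[string]any{"comm": "runc", "user": "root", "risk_level": 3}
    if err := indexDocument("https://elasticsearch.example.com:9200", "sentrilite-logs",
        "elastic", "your-password", doc); err != nil {
        fmt.Println(err)
    }
}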

Stream Configuration

When using --stream, create a configuration file named <stream_type>.conf in the current directory.

Fluentd Configuration (fluentd.conf)

{
  "host": "fluentd.example.com",
  "port": 24224,
  "tag": "sentrilite.events"
}
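
The Note below describes Fluentd streaming as JSON messages with tag, timestamp, and record fields sent over TCP. A minimal Go sketch of that pattern follows (newline-delimited JSON; the CLI's actual framing may differ, and the Fluentd input must be configured to accept it):

package main

import (
    "encoding/json"
    "fmt"
    "net"
    "time"
)

// fluentdMessage follows the shape described in the Note below:
// a JSON object with tag, timestamp, and record fields sent over TCP.
type fluentdMessage struct {
    Tag       string         `json:"tag"`
    Timestamp int64          `json:"timestamp"`
    Record    map[string]any `json:"record"`
}

func streamToFluentd(host string, port int, tag string, record map[string]any) error {
    conn, err := net.DialTimeout("tcp", fmt.Sprintf("%s:%d", host, port), 5*time.Second)
    if err != nil {
        return err
    }
    defer conn.Close()

    msg := fluentdMessage{Tag: tag, Timestamp: time.Now().Unix(), Record: record}
    // One JSON object per line; Encode appends the trailing newline.
    return json.NewEncoder(conn).Encode(msg)
}

func main() {
    record := map[string]any{"comm": "runc", "user": "root", "risk_level": 3}
    if err := streamToFluentd("fluentd.example.com", 24224, "sentrilite.events", record); err != nil {
        fmt.Println(err)
    }
}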

Kafka Configuration (kafka.conf)

{
  "brokers": ["kafka1.example.com:9092", "kafka2.example.com:9092"],
  "topic": "sentrilite-events"
}
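
The Note below mentions that a full Kafka producer based on github.com/IBM/sarama is planned. As a sketch only (not the current behavior), publishing buffered events to the configured topic with sarama could look like this:

package main

import (
    "encoding/json"
    "log"

    "github.com/IBM/sarama"
)

func main() {
    brokers := []string{"kafka1.example.com:9092", "kafka2.example.com:9092"}
    topic := "sentrilite-events"

    cfg := sarama.NewConfig()
    cfg.Producer.Return.Successes = true // required by SyncProducer

    producer, err := sarama.NewSyncProducer(brokers, cfg)
    if err != nil {
        log.Fatalf("connect to kafka: %v", err)
    }
    defer producer.Close()

    // A buffered batch of events, flushed as individual messages.
    events := []map[string]any{
        {"comm": "runc", "user": "root", "risk_level": 3},
    }
    for _, evt := range events {
        value, err := json.Marshal(evt)
        if err != nil {
            log.Printf("marshal: %v", err)
            continue
        }
        _, _, err = producer.SendMessage(&sarama.ProducerMessage{
            Topic: topic,
            Value: sarama.ByteEncoder(value),
        })
        if err != nil {
            log.Printf("send: %v", err)
        }
    }
}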

Note:

  • Kafka streaming currently buffers messages and logs them. Full Kafka producer implementation using github.com/IBM/sarama is planned for future releases.
  • Fluentd streaming uses TCP connection and sends JSON-formatted messages with tag, timestamp, and record fields.
  • Streaming works alongside regular log output - you can use both --log and --stream options simultaneously.

Requirements

  • Go 1.21 or higher
  • Access to nodes on port 8765 (WebSocket)
  • For S3 output: AWS credentials configured (via environment variables or IAM role)
  • For SIEM integrations: Appropriate configuration files

Environment Variables

AWS (for S3 output)

export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
export AWS_REGION=us-east-1

Or use IAM roles if running on EC2.
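
For illustration, here is a sketch of how batched entries could be uploaded with the AWS SDK for Go v2, which picks up the credentials above (or an IAM role) automatically. The object key layout is an assumption, not necessarily what the CLI writes:

package main

import (
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "log"
    "time"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

// uploadBatch writes one batch of log entries as a single JSON-lines object.
// The key naming here is illustrative only.
func uploadBatch(ctx context.Context, client *s3.Client, bucket, prefix string, entries []map[string]any) error {
    var buf bytes.Buffer
    enc := json.NewEncoder(&buf)
    for _, e := range entries {
        if err := enc.Encode(e); err != nil {
            return err
        }
    }
    key := fmt.Sprintf("%s/%s.json", prefix, time.Now().UTC().Format("20060102T150405"))
    _, err := client.PutObject(ctx, &s3.PutObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
        Body:   bytes.NewReader(buf.Bytes()),
    })
    return err
}

func main() {
    ctx := context.Background()
    // Credentials and region come from the environment variables or IAM role.
    cfg, err := config.LoadDefaultConfig(ctx)
    if err != nil {
        log.Fatal(err)
    }
    client := s3.NewFromConfig(cfg)

    entries := []map[string]any{{"comm": "runc", "user": "root"}}
    if err := uploadBatch(ctx, client, "my-bucket", "sentrilite-logs", entries); err != nil {
        log.Fatal(err)
    }
}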

Running as a Daemon

Use the provided run.sh script to run the CLI as a daemon:

# Start daemon
./run.sh start --nodes nodes.csv

# Start with custom log output
./run.sh start --nodes nodes.csv --log stdout

# Start with SIEM integration
./run.sh start --nodes nodes.csv --siem splunk

# Start with streaming to Fluentd
./run.sh start --nodes nodes.csv --stream fluentd

# Start with streaming to Kafka
./run.sh start --nodes nodes.csv --stream kafka

# Start with both log output and streaming
./run.sh start --nodes nodes.csv --log dir:/var/log/sentrilite --stream fluentd

# Stop daemon
./run.sh stop

# Restart daemon
./run.sh restart --nodes nodes.csv

# Check status
./run.sh status

The daemon will:

  • Run in the background
  • Write logs to sentrilite-cli.log
  • Store PID in sentrilite-cli.pid
  • Create ./alerts directory for alert files
  • Create ./log directory for log files (if using default output)

Alerts Collection

The CLI uses a separate WebSocket connection for alerts collection. This dedicated connection:

  • Queries each node for alerts every minute using the get_alerts command (the polling pattern is sketched after this list)
  • Can wait up to 5 minutes for responses or until the server closes the connection
  • Collects all alerts before saving to prevent data loss
  • Deduplicates alerts - only adds alerts that don't already exist in the file
  • Stores alert data in a persistent file:
./alerts/<node_ip>.<group>.alerts.json
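
A minimal Go sketch of this polling pattern, assuming a gorilla/websocket client and that get_alerts is sent as a plain text frame; the agent's actual message framing and response format may differ:

package main

import (
    "fmt"
    "log"
    "time"

    "github.com/gorilla/websocket"
)

// pollAlerts connects to one node's WebSocket endpoint, issues get_alerts
// once a minute, and reads responses until the server closes the connection
// or the 5-minute deadline passes.
func pollAlerts(nodeIP string) error {
    url := fmt.Sprintf("ws://%s:8765", nodeIP)
    conn, _, err := websocket.DefaultDialer.Dial(url, nil)
    if err != nil {
        return err
    }
    defer conn.Close()

    ticker := time.NewTicker(1 * time.Minute)
    defer ticker.Stop()

    for range ticker.C {
        if err := conn.WriteMessage(websocket.TextMessage, []byte("get_alerts")); err != nil {
            return err
        }
        // Wait up to 5 minutes for the response batch.
        conn.SetReadDeadline(time.Now().Add(5 * time.Minute))
        _, payload, err := conn.ReadMessage()
        if err != nil {
            return err // server closed the connection or the deadline expired
        }
        log.Printf("alerts from %s: %d bytes", nodeIP, len(payload))
    }
    return nil
}

func main() {
    if err := pollAlerts("10.0.0.1"); err != nil {
        log.Fatal(err)
    }
}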

Each node has a single persistent alerts file that accumulates all unique alerts over time. The deduplication logic compares alerts based on the following fields (see the sketch after this list):

  • PID
  • Command (cmd/comm)
  • Timestamp (exact match required; the same alert at different times is saved separately)
  • K8s pod UID (if present)
  • Risk level, tags, user, IP address, and other fields
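
A sketch of that deduplication in Go: the fingerprint below hashes a subset of fields, and field names such as k8s_pod_uid are assumptions that may not match the actual payload keys:

package main

import (
    "crypto/sha256"
    "encoding/json"
    "fmt"
)

// alertKey builds a fingerprint from the fields listed above. Two alerts with
// the same fingerprint are treated as duplicates; this field set is illustrative.
func alertKey(alert map[string]any) string {
    fields := []string{"pid", "cmd", "comm", "timestamp", "k8s_pod_uid", "risk_level", "tags", "user", "ip"}
    key := make(map[string]any, len(fields))
    for _, f := range fields {
        if v, ok := alert[f]; ok {
            key[f] = v
        }
    }
    b, _ := json.Marshal(key) // map keys are sorted, so the encoding is deterministic
    return fmt.Sprintf("%x", sha256.Sum256(b))
}

// appendUnique adds only alerts whose fingerprint has not been seen before.
func appendUnique(existing, incoming []map[string]any) []map[string]any {
    seen := make(map[string]bool, len(existing))
    for _, a := range existing {
        seen[alertKey(a)] = true
    }
    for _, a := range incoming {
        if k := alertKey(a); !seen[k] {
            seen[k] = true
            existing = append(existing, a)
        }
    }
    return existing
}

func main() {
    a := map[string]any{"pid": 2406918, "comm": "runc", "user": "root", "timestamp": 1762636194.94153}
    merged := appendUnique(nil, []map[string]any{a, a})
    fmt.Println(len(merged)) // 1: the duplicate is dropped
}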

Note: When using --stream, alerts are still collected via the separate connection, but events are streamed in real-time to the configured stream destination.

Notes

  • The CLI connects to all nodes concurrently
  • Each node has two separate WebSocket connections:
    • Main connection: For regular event streaming (logs, events, traces)
    • Alerts connection: Dedicated connection for alerts collection with longer timeouts
  • Each node connection runs in a separate goroutine
  • Log files are created as <node_ip>.<group>.<timestamp>.json in the specified directory
  • Alert files are created as <node_ip>.<group>.alerts.json in the ./alerts directory (persistent, deduplicated)
  • Log file timestamp format: YYYYMMDDTHHMMSS (e.g., 20250115T143022)
  • Alerts are queried every minute automatically via the dedicated alerts connection
  • The alerts connection can wait up to 5 minutes for responses or until the server closes it
  • For S3 output, logs are buffered and uploaded in batches of 100 entries
  • For Kafka streaming, messages are buffered and flushed in batches of 100 entries
  • WebSocket connections are maintained until interrupted (Ctrl+C) or daemon is stopped
  • When using --stream, events are streamed in real-time while still being saved to log files (if --log is specified)

License

This project is licensed under the MIT License.
