Sentrilite CLI: A command-line tool to collect logs and events from Sentrilite agents running on multiple nodes.
- AWS, Azure, GCP: Public cloud environments with Sentrilite agents deployed
- On-Prem: On-premises infrastructure with Sentrilite agent
- Linux Servers: Individual Linux servers with Sentrilite agents
- Kafka: Distributed streaming platform for real-time event streaming
- Fluentd: Unified logging layer for data collection and forwarding
- SIEM: Security Information and Event Management systems for security monitoring
- Prometheus: Open-source monitoring and alerting toolkit for metrics collection
- Grafana: Open-source analytics and visualization platform for metrics and logs
- Sentrilite Command Line Agent: CLI tool that collects eBPF kernel traces, events, and alerts from all Sentrilite agents via WebSocket, with support for streaming to Kafka/Fluentd and integration with SIEM systems
- Data Flow: Sentrilite agents stream telemetry data to the CLI agent, which can forward data to Kafka, Fluentd, or SIEM systems for further processing and analysis
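As an illustration of this data flow, here is a minimal sketch of a collector connecting to one agent's WebSocket endpoint and printing the events it streams. It assumes the `github.com/gorilla/websocket` package and an agent listening on port 8765 (the port listed under Requirements below); the URL path and message framing are illustrative assumptions, not the CLI's actual implementation.

```go
package main

import (
	"fmt"
	"log"

	"github.com/gorilla/websocket"
)

func main() {
	// Hypothetical agent address; 8765 is the WebSocket port the CLI uses.
	// The URL path here is an assumption for illustration.
	url := "ws://10.0.0.1:8765/"

	conn, _, err := websocket.DefaultDialer.Dial(url, nil)
	if err != nil {
		log.Fatalf("dial %s: %v", url, err)
	}
	defer conn.Close()

	// Print each JSON event as the agent streams it.
	for {
		_, msg, err := conn.ReadMessage()
		if err != nil {
			log.Fatalf("read: %v", err)
		}
		fmt.Println(string(msg))
	}
}
```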
To build the CLI, run:

```bash
make
```

The binary will be created as `sentrilite-cli`.

To install it system-wide, run:

```bash
make install
```

This installs the binary to `/usr/local/bin/sentrilite-cli`.
Run the CLI with:

```bash
./sentrilite-cli --nodes <node_file> [--log <output>] [--siem <siem_type>] [--stream <stream_type>]
```

Options:
- `--nodes` (required): Path to a CSV file containing node information in the format `node_ip,group`
- `--log` (optional): Output destination:
  - `stdout` - Output to console
  - `dir:<path>` - Output to a directory (e.g., `dir:/var/log/sentrilite`)
  - `s3://bucket/path` - Output to an S3 bucket (e.g., `s3://my-bucket/logs/`)
  - If not specified, defaults to the `./log` directory with files named `node_ip_group.json`
- `--siem` (optional): SIEM integration type:
  - `splunk` - Format for Splunk HEC
  - `wazuh` - Format for Wazuh
  - `elastic` - Format for Elasticsearch
- `--stream` (optional): Stream events to external systems:
  - `fluentd` - Stream to Fluentd
  - `kafka` - Stream to Kafka
Create a CSV file with node information:

```
10.0.0.1,production
10.0.0.2,production
10.0.0.3,staging
10.0.0.4,development
```

Collected entries are written as JSON objects, for example:

```json
{
"timestamp": "2025-11-08T20:57:54Z",
"node_ip": "10.0.0.1",
"group": "aws",
"data": {
"arg1": "init",
"cmd": "/proc/self/fd/6",
"comm": "runc",
"cpu": 0,
"ip": "10.0.0.1",
"msg_type": 4,
"msg_type_str": "EXECVE",
"pid": 2402803,
"ppid": 0,
"risk_level": 3,
"tags": [
"privilege-escalation",
"privileged-user-activity"
],
"timestamp": 1762635474.9498317,
"triggered_rule": {
"match_key": "user",
"match_values": [
"root"
],
"risk_level": 3,
"tags": [
"privileged-user-activity"
]
},
"uid": 0,
"user": "root"
}
}
```

A second example, in the raw event format without the node wrapper:

```json
{
"arg1": "init",
"cmd": "/proc/self/fd/6",
"comm": "runc",
"cpu": 1,
"ip": "127.0.0.1",
"msg_type": 4,
"msg_type_str": "EXECVE",
"pid": 2406918,
"ppid": 0,
"risk_level": 3,
"tags": [
"privilege-escalation",
"privileged-user-activity"
],
"timestamp": 1762636194.94153,
"triggered_rule": {
"match_key": "user",
"match_values": [
"root"
],
"risk_level": 3,
"tags": [
"privileged-user-activity"
]
},
"uid": 0,
"user": "root"
}
```

Example commands:

```bash
# Default log output to ./log
./sentrilite-cli --nodes nodes.csv

# Output to the console
./sentrilite-cli --nodes nodes.csv --log stdout

# Output to a directory
./sentrilite-cli --nodes nodes.csv --log dir:/var/log/sentrilite

# Output to an S3 bucket
./sentrilite-cli --nodes nodes.csv --log s3://my-bucket/sentrilite-logs/

# SIEM integration
./sentrilite-cli --nodes nodes.csv --siem splunk
./sentrilite-cli --nodes nodes.csv --siem wazuh

# Streaming
./sentrilite-cli --nodes nodes.csv --stream fluentd
./sentrilite-cli --nodes nodes.csv --stream kafka

# Log output and streaming together
./sentrilite-cli --nodes nodes.csv --log dir:/var/log/sentrilite --stream fluentd
```

When using `--siem`, create a configuration file named `<siem_type>.conf` in the current directory.

Splunk (`splunk.conf`):

```json
{
"hec_endpoint": "https://splunk.example.com:8088/services/collector",
"hec_token": "your-hec-token",
"index": "sentrilite",
"source": "sentrilite-agent",
"sourcetype": "json"
}
```

Wazuh (`wazuh.conf`):

```json
{
"manager_host": "wazuh-manager.example.com",
"manager_port": 1514,
"protocol": "tcp"
}
```

Elasticsearch (`elastic.conf`):

```json
{
"endpoint": "https://elasticsearch.example.com:9200",
"index": "sentrilite-logs",
"username": "elastic",
"password": "your-password"
}
```

When using `--stream`, create a configuration file named `<stream_type>.conf` in the current directory.

Fluentd (`fluentd.conf`):

```json
{
"host": "fluentd.example.com",
"port": 24224,
"tag": "sentrilite.events"
}
```

Kafka (`kafka.conf`):

```json
{
"brokers": ["kafka1.example.com:9092", "kafka2.example.com:9092"],
"topic": "sentrilite-events"
}
```
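For illustration, the sketch below shows one way a `splunk.conf` like the example above could be consumed: load the JSON config and post an event to the Splunk HTTP Event Collector. The struct, function names, and sample event are hypothetical; only the HEC envelope (`event`, `index`, `source`, `sourcetype`) and the `Authorization: Splunk <token>` header follow Splunk's documented HEC format.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// splunkConf mirrors the fields of the splunk.conf example above.
type splunkConf struct {
	HECEndpoint string `json:"hec_endpoint"`
	HECToken    string `json:"hec_token"`
	Index       string `json:"index"`
	Source      string `json:"source"`
	Sourcetype  string `json:"sourcetype"`
}

// sendToSplunk wraps an event in the Splunk HEC envelope and posts it.
func sendToSplunk(conf splunkConf, event map[string]any) error {
	payload, err := json.Marshal(map[string]any{
		"event":      event,
		"index":      conf.Index,
		"source":     conf.Source,
		"sourcetype": conf.Sourcetype,
	})
	if err != nil {
		return err
	}
	req, err := http.NewRequest(http.MethodPost, conf.HECEndpoint, bytes.NewReader(payload))
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Splunk "+conf.HECToken)
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("splunk HEC returned %s", resp.Status)
	}
	return nil
}

func main() {
	raw, err := os.ReadFile("splunk.conf")
	if err != nil {
		panic(err)
	}
	var conf splunkConf
	if err := json.Unmarshal(raw, &conf); err != nil {
		panic(err)
	}
	// Hypothetical sample event for illustration only.
	if err := sendToSplunk(conf, map[string]any{"comm": "runc", "user": "root"}); err != nil {
		panic(err)
	}
}
```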
Note:
- Kafka streaming currently buffers messages and logs them. A full Kafka producer implementation using `github.com/IBM/sarama` is planned for future releases.
- Fluentd streaming uses a TCP connection and sends JSON-formatted messages with tag, timestamp, and record fields (see the sketch below).
- Streaming works alongside regular log output: you can use both `--log` and `--stream` simultaneously.
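As referenced in the Fluentd note above, here is a minimal sketch of sending one JSON message with `tag`, `timestamp`, and `record` fields over TCP, using the values from the `fluentd.conf` example. The exact wire shape your Fluentd input plugin expects is an assumption here (for instance, `in_forward` expects `[tag, time, record]` arrays), so treat this as an outline rather than the CLI's actual encoder.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net"
	"time"
)

// fluentdConf mirrors the fields of the fluentd.conf example above.
type fluentdConf struct {
	Host string `json:"host"`
	Port int    `json:"port"`
	Tag  string `json:"tag"`
}

func main() {
	conf := fluentdConf{Host: "fluentd.example.com", Port: 24224, Tag: "sentrilite.events"}

	conn, err := net.Dial("tcp", fmt.Sprintf("%s:%d", conf.Host, conf.Port))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// One message with tag, timestamp, and record fields, as described in the note.
	// The exact wire shape expected by the Fluentd input is an assumption.
	msg := map[string]any{
		"tag":       conf.Tag,
		"timestamp": time.Now().Unix(),
		"record":    map[string]any{"comm": "runc", "user": "root", "risk_level": 3},
	}
	if err := json.NewEncoder(conn).Encode(msg); err != nil {
		panic(err)
	}
}
```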
Requirements:
- Go 1.21 or higher
- Access to nodes on port 8765 (WebSocket)
- For S3 output: AWS credentials configured (via environment variables or IAM role)
- For SIEM integrations: Appropriate configuration files
For S3 output, configure AWS credentials:

```bash
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
export AWS_REGION=us-east-1
```

Or use IAM roles if running on EC2.
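For illustration, here is a minimal sketch of buffered S3 uploads with the AWS SDK for Go v2, using the 100-entry batch size mentioned in the notes further down. The bucket name, key naming scheme, and the stand-in event loop are assumptions, not the CLI's actual implementation.

```go
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()

	// Credentials come from the environment variables or IAM role described above.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		panic(err)
	}
	client := s3.NewFromConfig(cfg)

	// Buffer entries and upload once 100 have accumulated.
	var batch []json.RawMessage
	flush := func() error {
		if len(batch) == 0 {
			return nil
		}
		body, err := json.Marshal(batch)
		if err != nil {
			return err
		}
		// Bucket, prefix, and key naming are illustrative assumptions.
		key := fmt.Sprintf("sentrilite-logs/10.0.0.1_production_%s.json",
			time.Now().UTC().Format("20060102T150405"))
		if _, err := client.PutObject(ctx, &s3.PutObjectInput{
			Bucket: aws.String("my-bucket"),
			Key:    aws.String(key),
			Body:   bytes.NewReader(body),
		}); err != nil {
			return err
		}
		batch = batch[:0]
		return nil
	}

	for i := 0; i < 250; i++ { // stand-in for the live event stream
		batch = append(batch, json.RawMessage(`{"comm":"runc","user":"root"}`))
		if len(batch) == 100 {
			if err := flush(); err != nil {
				panic(err)
			}
		}
	}
	if err := flush(); err != nil {
		panic(err)
	}
}
```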
Use the provided `run.sh` script to run the CLI as a daemon:

```bash
# Start daemon
./run.sh start --nodes nodes.csv
# Start with custom log output
./run.sh start --nodes nodes.csv --log stdout
# Start with SIEM integration
./run.sh start --nodes nodes.csv --siem splunk
# Start with streaming to Fluentd
./run.sh start --nodes nodes.csv --stream fluentd
# Start with streaming to Kafka
./run.sh start --nodes nodes.csv --stream kafka
# Start with both log output and streaming
./run.sh start --nodes nodes.csv --log dir:/var/log/sentrilite --stream fluentd
# Stop daemon
./run.sh stop
# Restart daemon
./run.sh restart --nodes nodes.csv
# Check status
./run.sh status
```

The daemon will:
- Run in the background
- Write logs to `sentrilite-cli.log`
- Store its PID in `sentrilite-cli.pid`
- Create an `./alerts` directory for alert files
- Create a `./log` directory for log files (if using the default output)
The CLI uses a separate WebSocket connection for alerts collection. This dedicated connection:
- Queries each node for alerts every minute using the `get_alerts` command
- Can wait up to 5 minutes for responses, or until the server closes the connection
- Collects all alerts before saving, to prevent data loss
- Deduplicates alerts, adding only alerts that do not already exist in the file
- Stores alert data in a persistent file: `./alerts/<node_ip>.<group>.alerts.json`
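A minimal sketch of this polling loop, assuming `github.com/gorilla/websocket`: it reconnects for each poll to keep the example short (the CLI itself keeps a dedicated connection), and the URL path and the framing of `get_alerts` as a plain text message are assumptions.

```go
package main

import (
	"log"
	"time"

	"github.com/gorilla/websocket"
)

// pollAlerts sends the get_alerts command to one node and reads responses
// until the 5-minute deadline passes or the server closes the connection.
func pollAlerts(addr string) {
	// The URL path and command framing are assumptions for illustration.
	conn, _, err := websocket.DefaultDialer.Dial("ws://"+addr+"/", nil)
	if err != nil {
		log.Printf("dial %s: %v", addr, err)
		return
	}
	defer conn.Close()

	if err := conn.WriteMessage(websocket.TextMessage, []byte("get_alerts")); err != nil {
		log.Printf("send get_alerts: %v", err)
		return
	}

	_ = conn.SetReadDeadline(time.Now().Add(5 * time.Minute))
	for {
		_, msg, err := conn.ReadMessage()
		if err != nil {
			return // deadline exceeded or server closed the connection
		}
		log.Printf("alert from %s: %s", addr, msg)
	}
}

func main() {
	ticker := time.NewTicker(1 * time.Minute)
	defer ticker.Stop()
	for range ticker.C {
		pollAlerts("10.0.0.1:8765")
	}
}
```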
Each node has a single persistent alerts file that accumulates all unique alerts over time. The deduplication logic compares alerts based on:
- PID
- Command (cmd/comm)
- Timestamp (exact match required; the same alert at different times is saved separately)
- K8s pod UID (if present)
- Risk level, tags, user, IP address, and other fields
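A sketch of how such a deduplication key might be built from these fields. Field names follow the JSON examples earlier in this document; `k8s_pod_uid` and the key layout are assumptions for illustration.

```go
package main

import "fmt"

// Event holds the subset of alert fields used for deduplication, per the list above.
type Event struct {
	PID       int      `json:"pid"`
	Cmd       string   `json:"cmd"`
	Comm      string   `json:"comm"`
	Timestamp float64  `json:"timestamp"`
	PodUID    string   `json:"k8s_pod_uid,omitempty"` // assumed field name for the pod UID
	RiskLevel int      `json:"risk_level"`
	Tags      []string `json:"tags"`
	User      string   `json:"user"`
	IP        string   `json:"ip"`
}

// dedupKey builds a comparable key from the fields above; an alert is appended
// to the node's alerts file only if its key has not been seen before.
func dedupKey(e Event) string {
	return fmt.Sprintf("%d|%s|%s|%v|%s|%d|%v|%s|%s",
		e.PID, e.Cmd, e.Comm, e.Timestamp, e.PodUID,
		e.RiskLevel, e.Tags, e.User, e.IP)
}

func main() {
	seen := map[string]bool{}
	incoming := []Event{
		{PID: 2402803, Cmd: "/proc/self/fd/6", Comm: "runc", Timestamp: 1762635474.9498317,
			User: "root", IP: "10.0.0.1", RiskLevel: 3, Tags: []string{"privilege-escalation"}},
		{PID: 2402803, Cmd: "/proc/self/fd/6", Comm: "runc", Timestamp: 1762635474.9498317,
			User: "root", IP: "10.0.0.1", RiskLevel: 3, Tags: []string{"privilege-escalation"}},
	}
	for _, e := range incoming {
		k := dedupKey(e)
		if seen[k] {
			continue // duplicate: already stored in the alerts file
		}
		seen[k] = true
		fmt.Println("new alert:", k)
	}
}
```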
Note: When using `--stream`, alerts are still collected via the separate connection, while events are streamed in real time to the configured stream destination.
Implementation details:
- The CLI connects to all nodes concurrently
- Each node has two separate WebSocket connections:
- Main connection: For regular event streaming (logs, events, traces)
- Alerts connection: Dedicated connection for alerts collection with longer timeouts
- Each node connection runs in a separate goroutine (see the sketch at the end of this list)
- Log files are created as `<node_ip>.<group>.<timestamp>.json` in the specified directory
- Alert files are created as `<node_ip>.<group>.alerts.json` in the `./alerts` directory (persistent, deduplicated)
- Log file timestamps use the format `YYYYMMDDTHHMMSS` (e.g., `20250115T143022`)
- Alerts are queried automatically every minute via the dedicated alerts connection
- The alerts connection can wait up to 5 minutes for responses or until the server closes it
- For S3 output, logs are buffered and uploaded in batches of 100 entries
- For Kafka streaming, messages are buffered and flushed in batches of 100 entries
- WebSocket connections are maintained until interrupted (Ctrl+C) or daemon is stopped
- When using `--stream`, events are streamed in real time while still being saved to log files (if `--log` is specified)
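As referenced above, here is a minimal sketch of the concurrent fan-out: read `node_ip,group` pairs from the nodes CSV and start one goroutine per node. `collectNode` is a hypothetical stand-in for the real per-node connection logic.

```go
package main

import (
	"encoding/csv"
	"log"
	"os"
	"sync"
)

// collectNode stands in for the per-node work: in the real CLI this opens the
// main and alerts WebSocket connections and streams events until interrupted.
func collectNode(ip, group string) {
	log.Printf("collecting from %s (group %s)", ip, group)
}

func main() {
	// Read the node_ip,group pairs from the nodes CSV file.
	f, err := os.Open("nodes.csv")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	rows, err := csv.NewReader(f).ReadAll()
	if err != nil {
		log.Fatal(err)
	}

	// One goroutine per node, so all nodes are collected concurrently.
	var wg sync.WaitGroup
	for _, row := range rows {
		if len(row) < 2 {
			continue
		}
		wg.Add(1)
		go func(ip, group string) {
			defer wg.Done()
			collectNode(ip, group)
		}(row[0], row[1])
	}
	wg.Wait()
}
```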
This project is licensed under the MIT License.
