# Evidence Format
When PandoCore detects anomalous behavior, it generates structured evidence records. These records are designed for easy integration with SIEMs, log aggregators, and security analytics platforms.
## Evidence Output Channels
Evidence is emitted through multiple channels simultaneously:
| Channel | Format | Persistence |
|---|---|---|
| stdout | JSON (one record per line) | Captured by any log aggregator |
| Kubernetes Events | K8s Event object | Survives pod deletion (namespace-scoped) |
| Evidence Webhook | Full Evidence JSON (POST) | Delivered to external endpoint (portal, SIEM, etc.) |
| Slack Alerts | Slack Block Kit message | Delivered to Slack channel (alert + enforce modes) |
| Prometheus Metrics | Counter/Gauge metrics | Time-series data in your metrics stack |
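Because the stdout channel emits one JSON record per line, a log-pipeline filter can pick evidence records out of mixed container output. A minimal sketch in Python — the `version` and `drift_score` field checks follow the schema below; the surrounding log line and the `parse_evidence_lines` helper are illustrative:

```python
import json

def parse_evidence_lines(lines):
    """Extract PandoCore evidence records from mixed stdout log lines.

    Non-JSON lines and JSON lacking the evidence fields are skipped.
    """
    records = []
    for line in lines:
        line = line.strip()
        if not line.startswith("{"):
            continue  # plain log line, not a JSON record
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # truncated or malformed JSON
        if rec.get("version") == "2.0" and "drift_score" in rec:
            records.append(rec)
    return records

# Mixed stdout capture: a plain log line plus one evidence record
sample = [
    "2026-03-16T14:30:00Z INFO baseline established",
    '{"version": "2.0", "id": "550e8400-e29b-41d4-a716-446655440000", '
    '"drift_score": 0.67, "trip_reason": "cumulative_threshold_exceeded"}',
]
high = [r for r in parse_evidence_lines(sample) if r["drift_score"] >= 0.5]
```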
## Evidence Webhook
When PANDO_EVIDENCE_WEBHOOK_URL is configured, the full Evidence JSON record is POSTed to the specified endpoint before pod deletion. This is the primary data path for the PandoCore Portal.
The webhook includes these HTTP headers:
| Header | Description |
|---|---|
| `Content-Type` | `application/json` |
| `X-Pando-Evidence-ID` | UUID of the evidence record |
| `X-Pando-Event-Type` | `collapse` |
| `X-Pando-Schema-Version` | Evidence schema version (e.g., `2.0`) |
The request body is the complete Evidence JSON record shown below. Delivery is retried up to 3 times (configurable via PANDO_EVIDENCE_WEBHOOK_MAX_RETRIES) with a default 5-second timeout per attempt.
This webhook can be pointed at any endpoint that accepts JSON POSTs — use it to integrate with SIEMs, incident management platforms, or custom pipelines. The PandoCore Portal uses this channel by default (auto-configured by the Helm chart).
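On the receiving side, an endpoint only needs to accept JSON POSTs — and, because delivery is retried, deduplicate on the evidence ID. A sketch of the ingest logic under those assumptions (header names are the documented ones; the in-memory dedup set and `handle_evidence_post` helper are illustrative, not part of PandoCore):

```python
import json

seen_ids = set()  # illustrative in-memory dedup store; use durable storage in practice

def handle_evidence_post(headers, body):
    """Validate one webhook delivery and return an HTTP status code.

    Retries mean the same evidence record can arrive more than once,
    so duplicates are acknowledged without being re-processed.
    """
    if headers.get("Content-Type") != "application/json":
        return 415
    try:
        record = json.loads(body)
    except json.JSONDecodeError:
        return 400
    evidence_id = headers.get("X-Pando-Evidence-ID") or record.get("id")
    if evidence_id in seen_ids:
        return 200  # duplicate delivery from a retry; ack it
    seen_ids.add(evidence_id)
    # forward `record` to your SIEM or incident pipeline here
    return 200
```

Returning a non-2xx status causes the sender to retry, so validation failures (`415`, `400`) should be reserved for genuinely malformed deliveries.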
## Slack Alerts
When PANDO_SLACK_WEBHOOK_URL is configured and the operating mode is alert or enforce, PandoCore sends formatted Slack messages on each detection event.
Slack alerts are color-coded by severity:
| Severity | Color | Criteria |
|---|---|---|
| Critical | Red (#dc2626) | Pod was terminated (enforce mode) |
| High | Orange (#ea580c) | Drift score ≥ 0.6 |
| Warning | Yellow (#d97706) | Drift score ≥ 0.3 |
| Info | Green (#16a34a) | Low drift detection |
Each Slack message includes:
- Pod name and namespace
- Drift score and detection reason
- Action taken (pod terminated in enforce mode vs. would have terminated in alert mode)
- Timestamp
- Per-dimension drift breakdown (memory, process)
See Configuration for setup instructions.
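If you build your own notifications from the raw evidence, the severity tiers above can be reproduced directly from the drift score and the action taken. A small sketch — the `slack_severity` function name is illustrative; the thresholds and colors come from the table above:

```python
def slack_severity(drift_score, pod_terminated):
    """Map a detection to the severity tier and color from the table above."""
    if pod_terminated:
        return "Critical", "#dc2626"  # enforce mode terminated the pod
    if drift_score >= 0.6:
        return "High", "#ea580c"
    if drift_score >= 0.3:
        return "Warning", "#d97706"
    return "Info", "#16a34a"
```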
## Evidence JSON Schema (v2.0)
Each detection event produces a JSON record with the following structure. PandoCore uses a three-path detection model for comprehensive anomaly detection:
```json
{
  "version": "2.0",
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "timestamp": "2026-03-16T14:30:00.123Z",
  "pod": {
    "name": "my-app-7d4f9b8c5-x2kl9",
    "namespace": "production",
    "node": "gke-cluster-pool-1-abc123"
  },
  "trip_reason": "cumulative_threshold_exceeded",
  "drift_score": 0.67,
  "dimensions": {
    "memory": 0.52,
    "process": 0.43,
    "temporal": 0.08,
    "system": 0.12
  },
  "action_taken": "pod_deleted",
  "detection": {
    "trigger": "cumulative_threshold",
    "fast_path": {
      "drift_score": 0.67,
      "incremental_drift": 0.12,
      "velocity": 0.08,
      "dimensions": { "memory": 0.52, "process": 0.43, "temporal": 0.08, "system": 0.12 },
      "primary_dimension": "memory"
    },
    "medium_path": {
      "attestation_results": { "result_count": 5, "pass_rate": 1.0 },
      "integrity_check": "pass"
    },
    "slow_path": {
      "ml_anomaly_score": 0.85,
      "ml_decision": "suspicious",
      "model_trained": true,
      "training_sample_count": 1500
    }
  },
  "learned_thresholds": {
    "incremental": 0.12,
    "cumulative": 0.38,
    "velocity": 0.08,
    "learned_at": "2026-03-16T14:20:00Z",
    "sensitivity": 2.0
  },
  "hash": "a1b2c3d4e5f6...",
  "previous_hash": "9f8e7d6c5b4a...",
  "sequence": 42,
  "metadata": {
    "sidecar_version": "0.1.0",
    "mode": "enforce",
    "customer_id": "cust_abc123",
    "thresholds": { "incremental": 0.15, "cumulative": 0.45 }
  }
}
```
## Field Reference
### Top-Level Fields

| Field | Type | Description |
|---|---|---|
| `version` | string | Schema version (currently `"2.0"`) |
| `id` | string (UUID) | Unique identifier for this evidence record |
| `timestamp` | string (ISO 8601) | When the detection occurred |
| `metadata.customer_id` | string | Your customer identifier (from license) |
### Pod Information

| Field | Type | Description |
|---|---|---|
| `pod.name` | string | Full pod name |
| `pod.namespace` | string | Kubernetes namespace |
| `pod.node` | string | Node the pod was running on |
### Detection Details

| Field | Type | Description |
|---|---|---|
| `trip_reason` | string | Specific reason the detection tripped (see Detection Reasons below) |
| `drift_score` | number | Overall drift score (0.0–1.0) |
| `detection.trigger` | string | Which threshold tripped (e.g., `cumulative_threshold`) |
| `detection.fast_path` | object | Statistical drift scores, velocity, and per-dimension breakdown |
| `detection.medium_path` | object | Attestation and integrity-check results |
| `detection.slow_path` | object | ML anomaly score, decision, and training state |
| `learned_thresholds` | object | Baseline-learned thresholds in effect when the record was emitted |
### Detection Reasons

| Reason | Description |
|---|---|
| `incremental_threshold_exceeded` | Sudden behavioral change detected |
| `cumulative_threshold_exceeded` | Gradual drift accumulated beyond threshold |
| `velocity_threshold_exceeded` | Rate of change exceeded safe bounds |
| `integrity_check_failed` | Integrity validation detected an anomaly |
| `anomaly_detected` | Behavioral pattern anomaly identified |
### Dimension Scores

The `dimensions` object contains drift scores for each monitored dimension (0.0 = no drift, 1.0 = maximum drift):

| Dimension | Description |
|---|---|
| `memory` | Memory usage and allocation patterns |
| `process` | Thread count, file descriptors, child processes |
| `temporal` | Execution timing patterns |
| `system` | CPU and I/O patterns |
### Action Information

| Field | Type | Description |
|---|---|---|
| `action_taken` | string | `logged_only` (monitor/alert modes) or `pod_deleted` (enforce mode) |
| `metadata.mode` | string | Operating mode when the detection occurred |
### Hash Chain

Evidence records form a tamper-evident chain:

| Field | Type | Description |
|---|---|---|
| `hash` | string | SHA-256 hash of this record |
| `previous_hash` | string | Hash of the previous record (enables chain verification) |
| `sequence` | number | Monotonically increasing sequence number |
Each evidence record includes a cryptographic hash linking it to the previous record. This creates a verifiable chain that can be audited for tampering.
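An auditor can check the linkage without knowing PandoCore's exact canonical serialization: walk the records in sequence order and confirm each `previous_hash` matches the prior record's `hash` and that `sequence` increases by one. A sketch of that check — recomputing each record's own SHA-256 would additionally require the canonical form, which is not specified here:

```python
def verify_chain(records):
    """Verify hash-chain linkage for evidence records sorted by sequence.

    Checks only the linkage fields; it does not recompute each record's
    own SHA-256, which requires PandoCore's canonical serialization.
    """
    for prev, cur in zip(records, records[1:]):
        if cur["previous_hash"] != prev["hash"]:
            return False  # chain broken: a record was altered or replaced
        if cur["sequence"] != prev["sequence"] + 1:
            return False  # gap: a record is missing
    return True

# Two consecutive records with intact linkage (hash values shortened)
chain = [
    {"hash": "9f8e7d6c5b4a", "previous_hash": "", "sequence": 41},
    {"hash": "a1b2c3d4e5f6", "previous_hash": "9f8e7d6c5b4a", "sequence": 42},
]
```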
## SIEM Integration
### Splunk

Configure your log forwarder (Fluentd, Filebeat, etc.) to send PandoCore logs to Splunk:

```
# Example Splunk search for PandoCore detections with significant drift
index=kubernetes sourcetype=pando-sidecar
| spath
| where drift_score >= 0.5
| table timestamp, pod.name, trip_reason, drift_score
```
### Elastic / ELK

PandoCore's JSON output is directly compatible with Elasticsearch. Example query for detections with significant drift:

```json
{
  "query": {
    "bool": {
      "must": [
        { "exists": { "field": "trip_reason" } },
        { "range": { "drift_score": { "gte": 0.5 } } }
      ]
    }
  }
}
```
### Datadog

Forward logs to Datadog and use the following facets for filtering:

- `@trip_reason`
- `@drift_score`
- `@detection.trigger`
- `@pod.namespace`
- `@action_taken`
## Prometheus Metrics
PandoCore exposes metrics on port 9090 at /metrics for real-time monitoring and alerting:
### Drift Metrics

```
# Current cumulative drift score from baseline
pando_drift_score 0.23

# Per-dimension drift scores
pando_drift_memory 0.18
pando_drift_process 0.12
pando_drift_temporal 0.05
pando_drift_system 0.08

# Velocity metrics (rate of change per second)
pando_velocity_weighted 0.02
pando_velocity_memory 0.01
pando_velocity_process 0.02
pando_velocity_temporal 0.001
pando_velocity_system 0.003

# Acceleration (velocity change per second)
pando_acceleration_weighted 0.005
```
### Trip/Detection Metrics

```
# Total number of trips (collapses)
pando_trips_total 3

# Trips by reason
pando_trips_by_reason_total{reason="cumulative_threshold"} 2
pando_trips_by_reason_total{reason="velocity_threshold"} 1

# Total evidence emitted
pando_evidence_emitted_total 3
```
### Baseline & Sampling Metrics

```
# Age of current baseline in seconds
pando_baseline_age_seconds 3600

# Total baselines established
pando_baselines_established_total 1

# Total samples collected
pando_samples_collected_total 36000

# Collection errors
pando_collection_errors_total 0
```
### ML Anomaly Detection Metrics

```
# Number of training samples used
pando_ml_training_samples 1500

# Total inference runs
pando_ml_inference_total 60

# Most recent anomaly score (0.0 = normal, 1.0 = anomalous)
pando_ml_anomaly_score 0.25

# Suspicious decisions and corroborated trips
pando_ml_suspicious_total 2
pando_ml_corroborated_trips_total 1
```
### Attestation Metrics

```
# Attestation pass/fail counts
pando_attestation_pass_total 120
pando_attestation_fail_total 0

# Last attestation result (0 = fail, 1 = pass)
pando_attestation_last_result 1
```
### Network Audit Metrics

```
# Network audit pass/fail
pando_network_audit_pass_total 60
pando_network_audit_fail_total 0

# Unexpected connections detected
pando_network_audit_unexpected_remotes 0
pando_network_audit_connections 5
```
### Operational Metrics

```
# Current operating mode (0 = monitor, 0.5 = alert, 1 = enforce, 2 = debug)
pando_operating_mode 1

# Sidecar health status (0 = unhealthy, 1 = healthy)
pando_healthy 1

# Collection and calculation durations (histograms)
pando_collection_duration_seconds_bucket{le="0.01"} 35000
pando_drift_calculation_seconds_bucket{le="0.001"} 36000
```
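For ad-hoc checks outside Prometheus, the text exposition format above is simple to parse directly. A minimal sketch that extracts the unlabeled gauges and counters — labeled series such as histogram buckets are deliberately skipped, and the `parse_metrics` helper is illustrative:

```python
def parse_metrics(text):
    """Parse Prometheus text exposition into {metric_name: value}.

    Comment lines and labeled series (e.g. histogram buckets) are skipped,
    which is sufficient for the simple gauges and counters shown above.
    """
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "{" in line:
            continue
        name, _, value = line.partition(" ")
        values[name] = float(value)
    return values

# Sample scrape of the /metrics endpoint
sample = """\
# Current cumulative drift score from baseline
pando_drift_score 0.23
pando_operating_mode 1
pando_healthy 1
pando_trips_by_reason_total{reason="cumulative_threshold"} 2
"""
metrics = parse_metrics(sample)
```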
### Useful Alerts

```yaml
# Alert on high drift scores (approaching threshold)
- alert: PandoDriftHigh
  expr: pando_drift_score > 0.35
  for: 5m
  labels:
    severity: warning
  annotations:
    summary: "High drift detected in {{ $labels.pod }}"

# Alert on enforcement events
- alert: PandoEnforcement
  expr: increase(pando_enforcements_total[5m]) > 0
  labels:
    severity: critical
  annotations:
    summary: "PandoCore terminated pod {{ $labels.pod }}"
```
## Kubernetes Events

Detection events are also emitted as Kubernetes Events:

```shell
# View PandoCore events
kubectl get events -n YOUR_NAMESPACE --field-selector reason=PandoDetection
```

Example event:

```
LAST SEEN   TYPE      REASON           OBJECT              MESSAGE
2m          Warning   PandoDetection   pod/my-app-abc123   Drift detected: score=0.67, reason=cumulative_threshold_exceeded
```
## Next Steps
- Configuration — Tune detection thresholds
- Operating Modes — Control detection vs enforcement
- FAQ — Common questions about evidence