Optimize Logs
Cut Costs by 50%+

Losslessly compact events collected by
Fluentd/Bit · OTel · Filebeat · Logstash · Splunk UF

Verbose → 8x Compact

Lossless — stores shared structure once, ships only changing values

2.8x – 8x reduction across event types
Python ConnectionError from OTel demo load-generator service
Lossless ✓
Raw 4,265 B
{
  "stream": "stderr",
  "log": "2026-02-21 10:23:47,891 ERROR locust.runners: Traceback (most recent call last):
    File \"/app/locustfile.py\", line 42, in browse_product
      response = self.client.get(\"/api/products/OLJCESPC7Z\")
    File \"/usr/local/lib/python3.12/site-packages/locust/clients.py\", line 188, in get
      return self._send_request_safe_mode(\"GET\", url, **kwargs)
    File \"/usr/local/lib/python3.12/site-packages/locust/clients.py\", line 112, in _send_request_safe_mode
      return self.request(\"GET\", url, **kwargs)
    File \"/usr/local/lib/python3.12/site-packages/requests/sessions.py\", line 589, in request
      resp = self.send(prep, **send_kwargs)
    File \"/usr/local/lib/python3.12/site-packages/requests/sessions.py\", line 703, in send
      r = adapter.send(request, **kwargs)
    File \"/usr/local/lib/python3.12/site-packages/requests/adapters.py\", line 519, in send
      raise ConnectionError(e, request=request)
    File \"/usr/local/lib/python3.12/site-packages/urllib3/connectionpool.py\", line 791, in urlopen
      retries = retries.increment(method, url, error=e, _pool=self)
    File \"/usr/local/lib/python3.12/site-packages/urllib3/util/retry.py\", line 592, in increment
      raise MaxRetryError(_pool, url, error or ResponseError(cause))
    File \"/usr/local/lib/python3.12/site-packages/urllib3/connection.py\", line 211, in connect
      sock = self._new_conn()
    File \"/usr/local/lib/python3.12/site-packages/urllib3/connection.py\", line 186, in _new_conn
      raise NewConnectionError(self, msg)
  urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='frontend', port=8080):
    Max retries exceeded with url: /api/products/OLJCESPC7Z
    (Caused by NewConnectionError: Failed to establish a new connection:
    [Errno 111] Connection refused)",
  "docker": {
    "container_id": "5d6f421bbc2861616cacaf2ff589c3cc7e0b8489510c07ef73f3574f0a196e18"
  },
  "kubernetes": {
    "container_name": "load-generator",
    "namespace_name": "default",
    "pod_name": "load-generator-6d4f78459b-fjxxp",
    "container_image": "ghcr.io/open-telemetry/demo:2.1.3-load-generator",
    "container_image_id": "ghcr.io/open-telemetry/demo@sha256:b35d080e712780e2078f3837d334a0ff13204ad2d334f3b4838d64bb543d031a",
    "pod_id": "5c3c2718-4397-8e2c-2109a5b3cc41",
    "pod_ip": "192.168.51.178",
    "host": "ip-192-168-42-205.ec2.internal",
    "labels": {
      "app.kubernetes.io/component": "load-generator",
      "app.kubernetes.io/name": "otel-demo",
      "opentelemetry.io/name": "load-generator",
      "pod-template-hash": "6d4f78459b"
    }
  },
  "tenx_tag": "kubernetes.var.log.containers.load-generator-6d4f78459b-fjxxp_default_load-generator-5d6f421bbc2861616cacaf2ff589c3cc7e0b8489510c07ef73f3574f0a196e18.log"
}
8x smaller
Encoded 520 B
70UEVe+i ],1758886627891,42,OLJCESPC7Z,188,112,589,703,519,791,592,211,186,8080,OLJCESPC7Z,0x7f2a8c3d1e50,111,5d6f421bbc2861616cacaf2ff589c3cc7e0b8489510c07ef73f3574f0a196e18,6d4f78459b,fjxxp,2,1,3,b35d080e712780e2078f3837d334a0ff13204ad2d334f3b4838d64bb543d031a,5c3c2718,-4397,8e2c,2109a5b3cc41,192,168,51,178,-192,-168,-42,-205,6d4f78459b,tenx,6d4f78459b,fjxxp,5d6f421bbc2861616cacaf2ff589c3cc7e0b8489510c07ef73f3574f0a196e18
Template shared structure for all events of this type

Constant symbol values, delimiters, and structure — stored once and referenced by hash. Learn more →

Timestamp date/time as 64-bit epoch

Automatically parsed once per template and converted to efficient 64-bit epoch values. Learn more →

Variables what changes per event

High-cardinality values unique to each event — each maps to a {N} placeholder in the template. Learn more →

Events without a matching template pass through unchanged — no data is ever dropped.
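The template/variables split described above can be sketched in a few lines of Python. This is a simplified illustration, not the actual Log10x encoder: the regex, the `{}` placeholder convention, and the sample event are all assumptions made for the sketch.

```python
import re

# Simplified variable detector (an assumption for this sketch): runs of
# digits and 12+ character hex tokens are treated as per-event variables;
# everything else is constant structure that goes into the shared template.
VAR = re.compile(r"[0-9a-f]{12,}|\d+")

TEMPLATES = {}  # template string -> template id (stored once, shipped once)

def compact(event: str):
    """Split an event into (template_id, variable values)."""
    variables = VAR.findall(event)
    template = VAR.sub("{}", event)          # constants + {} placeholders
    tid = TEMPLATES.setdefault(template, len(TEMPLATES))
    return tid, variables

def expand(tid: int, variables):
    """Reconstruct the original event byte-for-byte from template + values."""
    template = next(t for t, i in TEMPLATES.items() if i == tid)
    parts = template.split("{}")
    out = parts[0]
    for value, rest in zip(variables, parts[1:]):
        out += value + rest
    return out

event = "pod_workers.go:1251] Error syncing pod, container_id=5d6f421bbc28"
tid, vals = compact(event)
assert expand(tid, vals) == event            # lossless round trip
```

The round-trip assertion is the point: the template plus the variable list carries exactly the same bytes as the original event, just reorganized.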
Results across all four event types:

  • Application (Python traceback): 4,265 B → 520 B (8x smaller)
  • Kubernetes (pod error): 1,835 B → 662 B (2.8x smaller)
  • Security (Falco alert): 1,742 B → 505 B (3.4x smaller)
  • Observability (OTel collector error): 1,915 B → 632 B (3x smaller)

The key insight: log events aren't random data, they're structured and repetitive. Generic compression (gzip, zstd) must relearn that structure for every event; Log10x learns the structure once per event type, stores it separately, and each event ships only what changed. That's why the approach is lossless (nothing is dropped, just reorganized), efficient (70-88% of a typical event is redundant structure), format-agnostic (no manual rules per log type), and transparent on expansion (reconstructed events are byte-for-byte identical to the originals, so dashboards, alerts, and queries work unchanged).
TenXTemplate

{N} placeholders map positionally to the variable values in the compacted event. The template is generated once per event structure and cached — learn more.

{ "templateHash": "-1VNUo?i|uV",
  "template": "{\"stream\":\"stdout\",
    \"log\":\"E0925 {1}:{2}:{3}.{4} {5} pod_workers.go:{6}]
      Error syncing pod {7}{8}{9}
      (UID: {10}{11}{12}{13}), skipping:
      failed to \\\"StartContainer\\\" for \\\"web\\\"
      with CrashLoopBackOff: \\\"back-off {14}
      restarting failed container=web
      pod=web-app-7f8d9c6b5-x2vnk\\\"\",
    \"docker\":{\"container_id\":\"{15}\"},
    \"kubernetes\":{
      \"container_name\":\"fluentd-{16}\",
      \"namespace_name\":\"{17}\",
      \"pod_name\":\"foo-fluentd-{18}-{19}\",
      \"container_image\":\"ghcr.io/log-10x/fluentd-{20}:latest\",
      \"container_image_id\":\"ghcr.io/log-10x/fluentd-{21}@sha256:{22}\",
      \"labels\":{
        \"app.kubernetes.io/instance\":\"foo\",
        \"app.kubernetes.io/managed-by\":\"Helm\",
        \"controller-revision-hash\":\"{23}\",
        \"pod-template-generation\":\"{24}\"},
      \"host\":\"ip-{25}-{26}-{27}-{28}.ec2.internal\",
      \"master_url\":\"https://{29}.{30}.{31}.{32}:443/api\"},
    \"tenx_tag\":\"kubernetes.var.log.containers.foo-fluentd-{33}-{34}_default_fluentd-{35}-{36}\"}" }

Measure the Impact

View optimization savings in the managed 10x Console or in your own monitoring stack

Log10x Edge Optimizer reducing log volume

Edge Optimizer losslessly reduces event volume at the source — typically 50-65% reduction in shipped volume.

Edge Optimizer Workflow

Eliminates redundancy using shared templates. Works with your forwarders.


Frequently Asked Questions

How does Edge Optimizer reduce volume without losing data?

The engine identifies repeating structure in your logs — JSON keys, timestamp formats, constant strings — and stores each unique pattern once as a cached template. Only the variable values (IPs, pod names, trace IDs) are shipped per event. Similar to how Protocol Buffers define a schema once and send only field values over the wire.

The AOT Compiler builds a symbol vocabulary from your repos in CI/CD; the JIT engine uses those symbols to create and assign templates at runtime. The built-in library covers 150+ frameworks out of the box.

Result: 50-80% volume reduction with 100% data fidelity. Every field, value, and timestamp remains intact. Real-world benchmark: 64% reduction on Kubernetes OTel logs (1,835 → 662 bytes per event).

Compact vs. compress:

  • Compressed data must be decompressed before it can be searched or aggregated. Compact events remain searchable in place — they can be streamed to log analyzers and aggregated to metrics without a decompression step.
  • Many SIEMs bill on uncompressed ingest volume — gzip and zstd reduce storage but not the ingestion bill. Edge Optimizer reduces volume before ingestion, cutting both costs.
  • The two approaches are complementary: optimize first, then let your SIEM compress on top.
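To make the "searchable in place" point concrete, here is a minimal sketch. The record layout (template hash, epoch timestamp, then variables) is a simplification of the encoded example above, and the filter is hypothetical; real query pushdown in the Splunk app and L1ES plugin is far more involved.

```python
# Compacted records: template hash, epoch-ms timestamp, then variables.
# (Simplified layout assumed for this sketch.)
records = [
    "70UEVe+i],1758886627891,42,OLJCESPC7Z,188,8080",
    "70UEVe+i],1758886630102,57,B2LQXK9P1A,188,8080",
]

def matches(record: str, value: str) -> bool:
    """Filter on a variable value with no decompression step: the
    variables are still plain delimited text inside the record."""
    _hash, _ts, *variables = record.split(",")
    return value in variables

hits = [r for r in records if matches(r, "OLJCESPC7Z")]
assert len(hits) == 1
```

A gzip stream, by contrast, would have to be decompressed in full before any such filter could run.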

Why not sampling or filtering? Those are lossy — they permanently discard data, eliminating evidence for troubleshooting and security investigations. Edge Optimizer uses only lossless techniques.

How do I search optimized events in my log analytics platform?

How expansion works depends on your analyzer:

  • Splunk: The open-source 10x Splunk app (GitHub) transparently expands at search time. A JavaScript hook intercepts search requests and expands compacted events via KV Store template lookup — your searches, dashboards, and alerts work unchanged
  • Datadog & CloudWatch: Events are expanded via Storage Streamer from S3 before ingestion — no expansion step needed on the analyzer side. Your dashboards, monitors, and alerts work as-is
  • Elasticsearch / OpenSearch: The open-source L1ES plugin (GitHub) transparently rewrites standard queries and decodes results at search time — Kibana dashboards, KQL queries, and alerts work unchanged. For managed services (Elastic Cloud, AWS OpenSearch Service), Storage Streamer expands events from S3 before ingestion

What's the query-time overhead?

~1.25x query time for both Splunk and Elasticsearch. Datadog and CloudWatch have zero overhead — events are expanded before ingestion.

  • Splunk: The 10x for Splunk app expands using native SPL and KV Store template lookups. Per-event expansion is O(1). A 10-second search takes ~12.5 seconds. Compatible with interactive search, scheduled search, alerts, dashboards, REST API, and summary indexing
  • Elasticsearch / OpenSearch: The L1ES plugin expands at the Lucene segment level — each shard handles expansion locally. A 100ms query takes ~125ms. Scales horizontally with cluster size
  • Datadog, CloudWatch & managed Elasticsearch: Storage Streamer expands events before ingestion — zero query overhead

What ROI can I expect from Edge Optimizer?

ROI formula: (daily volume in GB) × (reduction ratio) × (your per-GB cost) × 365 = annual savings
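Plugging illustrative numbers into the formula (100 GB/day, a 60% reduction, and a $2.50/GB ingest cost, all assumptions for the example):

```python
daily_gb = 100        # assumed daily ingest volume, in GB
reduction = 0.60      # assumed reduction ratio (within the 50-80% range above)
per_gb_cost = 2.50    # assumed analyzer cost per GB, in dollars

# (daily volume) x (reduction ratio) x (per-GB cost) x 365 = annual savings
annual_savings = daily_gb * reduction * per_gb_cost * 365
print(f"${annual_savings:,.0f}/year")   # -> $54,750/year
```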

Use the free Dev tool to measure your exact reduction ratio on your own logs, or try the ROI Calculator with your analyzer's per-GB cost. See Datadog, Splunk, Elasticsearch, or CloudWatch pages for vendor-specific savings estimates.

When should I use Edge Optimizer vs Edge Regulator?

Edge Regulator filters and samples events based on cost and priority policies — events are dropped before they ship. This is lossy but sends directly to your analyzer with no additional infrastructure.

Edge Optimizer compacts events losslessly — no data is discarded. For Splunk, the 10x app expands at search time. For self-hosted Elasticsearch and OpenSearch, the L1ES plugin does the same. For Datadog, CloudWatch, and managed Elasticsearch, events route through S3 and Storage Streamer expands them before ingestion.

Many teams deploy both: Regulator for known low-value events, Optimizer for everything else.

What are the resource requirements?

512 MB heap + 2 threads handles 100+ GB/day per node. Both values map directly to Kubernetes resource specs in your DaemonSet manifest.

Full resource details →

What does deployment look like?

Edge Optimizer deploys in your infrastructure — no data leaves your network. Add a tenx block to your existing forwarder Helm values and run helm upgrade:

Edge Optimizer — Helm values:

tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY"
  kind: "optimize"
  runtimeName: my-edge-optimizer

Works with Fluent Bit · Fluentd · Filebeat · OTel Collector · Logstash · Splunk UF · Datadog Agent

Full deploy guide →

For VMs, the CLI and Docker options take under 5 minutes.