Scale Logs 10x
Chrome V8 made the web fast.
The 10x Engine makes observability fast.
The Team
Tal Weiss and Dor Levi founded VisualTao (Sequoia-backed, acquired by Autodesk) and OverOps (Lightspeed-backed)—where they scaled Java production debugging across 250+ enterprises, processing billions of events.
Log10x applies that experience to the problem underneath every observability cost conversation: the stack still processes log data like it’s 2010.
Self-funded by the founding team. Engineering-led.
The Shift
Every tool in the observability stack—from Fluent Bit to Splunk to Datadog—treats log events as raw text. Parse JSON. Evaluate regex. Event by event. Line by line.
And now AI generates the code too. Twenty million developers use GitHub Copilot—every generated microservice adds volume. The growth is structural—it only accelerates. The raw-text model doesn't bend. It breaks.
This is exactly where JavaScript was before Chrome V8. Untyped, interpreted, slow. V8's insight was structural: objects fall into a small number of recurring shapes, so at runtime each one is assigned a hidden class for direct, compiled access.
Machine data is no different—a typical microservice in production emits 40–200 distinct log structures, repeated millions of times per hour. Human-written or AI-generated, the structure is the same—and the processing costs grow with every line.
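The recurrence is easy to observe. A minimal sketch (field names hypothetical, not from any real service) that keys each JSON event by its set of field names shows thousands of events collapsing into a handful of shapes:

```python
import json
from collections import Counter

def shape_key(event: dict) -> tuple:
    """Key an event by its sorted field names, recursing into nested objects."""
    return tuple(
        (k, shape_key(v)) if isinstance(v, dict) else k
        for k, v in sorted(event.items())
    )

# Hypothetical stream: 8,000 events, but only two underlying shapes.
lines = (
    ['{"level": "info", "msg": "synced", "pod": "web-1"}'] * 5000
    + ['{"level": "error", "msg": "crash loop", "pod": "web-2", "restarts": 7}'] * 3000
)

shapes = Counter(shape_key(json.loads(line)) for line in lines)
print(len(shapes))  # → 2 distinct shapes across 8,000 events
```

The shape, not the event, is the unit that repeats, which is what makes ahead-of-time treatment possible.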
You can't solve a runtime problem with better pipelines. Filtering, sampling, routing—these are band-aids. V8 didn't “filter slow JavaScript”—it made JavaScript fast.
The Engine
The 10x Engine builds a symbol vocabulary ahead of time, then recognizes log structure at runtime, so events are processed as typed objects, not raw text.
AOT Compiler. Builds a symbol vocabulary from repos, containers, and Helm charts, with 150+ frameworks included. Runs in your CI/CD pipeline.
JIT Stream Processor. Uses AOT metadata to dynamically assign cached hidden classes: no repeated JSON parsing or complex regex matching, even across billions of events.
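As a rough analogy only (class and field names hypothetical, not the engine's API), the JIT step behaves like a shape cache: the first event of a given shape pays for field resolution, and every later instance of that shape reuses the compiled accessors:

```python
import json
from operator import itemgetter

class ShapeCache:
    """Toy hidden-class cache: resolve a shape once, reuse compiled field accessors."""
    def __init__(self, fields):
        self.fields = fields     # fields the pipeline actually needs
        self.classes = {}        # shape key -> precompiled getter

    def process(self, raw: str):
        event = json.loads(raw)          # a real engine avoids re-parsing after the first hit
        key = tuple(sorted(event))       # shape = the set of field names
        getter = self.classes.get(key)
        if getter is None:               # first sighting: "compile" the shape
            getter = itemgetter(*self.fields)
            self.classes[key] = getter
        return getter(event)             # direct field access, no per-field lookup logic

cache = ShapeCache(fields=("pod", "msg"))
raw = '{"level": "error", "msg": "crash loop", "pod": "web-2"}'
print(cache.process(raw))  # → ('web-2', 'crash loop')
```

A million repeats of this shape hit the cached accessor a million times; the shape is resolved exactly once.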
Take a Kubernetes pod that logs a crash loop—one of roughly 80 event shapes the kubelet produces. A traditional pipeline parses that JSON fresh at every stage—collector, aggregator, analyzer—repeating identical work three to five times per event, millions of times per hour.
The 10x Engine maps that shape once and processes every future instance with direct access to instance and class-level fields. No parsing. No regex. No per-event overhead.
{
  "stream": "stdout",
  "log": "E0925 14:32:45.678901 12345 pod_workers.go:836] Error syncing pod abc123-4567-890 (UID: def456-7890-1234-5678), skipping: failed to \"StartContainer\" for \"web\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=web pod=web-app_production (abc123-4567-890) in namespace=default, reason: high memory pressure on node worker-3.us-west-2 with current usage 89.45% (threshold: 80%), affected resources include disk I/O at 1200 ops/sec and network traffic of 4.56GB from source IP 192.168.5.42\"",
  "docker": {
    "container_id": "a7ce4c736be5beb8ef0859791b3c77de7bcce8bfc307e017c2fb7bcfa29ccde7"
  },
  "kubernetes": {
    "container_name": "fluentd-10x",
    "namespace_name": "default",
    "pod_name": "foo-fluentd-10x-68s2p",
    "container_image": "ghcr.io/log-10x/fluentd-10x:0.22.0-jit",
    "container_image_id": "ghcr.io/log-10x/fluentd-10x@sha256:b5263a6bef925f47c1f43ee06bb46674461da74059bd99a773e5cef1a4e4f8f8",
    "pod_id": "5a9cc9c8-3a71-41af-bffe-0a0914253361",
    "pod_ip": "192.168.33.78",
    "host": "ip-192-168-57-207.ec2.internal",
    "labels": {
      "app.kubernetes.io/instance": "foo",
      "app.kubernetes.io/name": "fluentd-10x",
      "controller-revision-hash": "f4789b8fd",
      "pod-template-generation": "1"
    }
  }
}
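The per-stage waste is mechanical. A minimal sketch (stage names and the abbreviated payload are illustrative, not a real pipeline) counting how often a three-stage pipeline re-parses one event, versus parsing once and forwarding the typed object:

```python
import json

RAW = '{"stream": "stdout", "log": "E0925 pod_workers.go:836] CrashLoopBackOff for web", "kubernetes": {"pod_name": "web-app"}}'

parse_count = 0

def loads_counted(raw: str) -> dict:
    global parse_count
    parse_count += 1
    return json.loads(raw)

# Traditional pipeline: collector, aggregator, and analyzer each parse the raw text.
def collector(raw):  return loads_counted(raw)["log"]
def aggregator(raw): return loads_counted(raw)["kubernetes"]["pod_name"]
def analyzer(raw):   return "CrashLoopBackOff" in loads_counted(raw)["log"]

for stage in (collector, aggregator, analyzer):
    stage(RAW)
traditional_parses = parse_count
print(traditional_parses)  # → 3 (identical work at every hop)

# Parse-once pipeline: one parse, then typed field access at every stage.
parse_count = 0
event = loads_counted(RAW)
fields = (event["log"], event["kubernetes"]["pod_name"], "CrashLoopBackOff" in event["log"])
once_parses = parse_count
print(once_parses)  # → 1
```

At three to five hops and millions of events per hour, that ratio is the overhead the engine removes.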
The result. Not a router that moves data. Not a regex filter that drops data. Structured, typed access to every log event without repeated parsing—reducing processing costs 50–80% compared to equivalent Splunk, Datadog, or Elastic ingestion.
Powering agents. Every event exits the 10x Engine as a typed object with direct field access, not raw text to parse. Aggregation condenses millions of events into compact summaries, so AI agents reason over structured data instead of burning tokens on raw log lines, without exposing customer data to external models.
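As an illustration (shape names and counts are invented for the example), a per-shape aggregation turns millions of typed events into a summary small enough to hand to an agent:

```python
from collections import Counter

# Hypothetical typed events as they exit the engine: (shape, pod) pairs.
events = (
    [("kubelet.crash_loop", "web-app")] * 120_000
    + [("kubelet.pod_synced", "web-app")] * 2_000_000
    + [("kubelet.crash_loop", "worker")] * 45_000
)

# Condense millions of events into a few summary rows.
summary = Counter(events)
for (shape, pod), count in summary.most_common():
    print(f"{shape:22s} pod={pod:8s} count={count}")

# An agent now reasons over 3 rows instead of 2,165,000 raw log lines.
```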
Design Principles
Per-GB pricing punishes growth. Log10x is priced per infrastructure node, not per byte ingested—your costs scale with your architecture, not your log volume.
The 10x Engine runs inside your infrastructure: your laptop, log forwarder, log analytics platform, Kubernetes, S3, or Azure Blob Storage. No log events sent to external APIs, no third-party model training on customer data.
Works with your existing stack—Splunk, Datadog, Elastic, OTel, Fluent Bit, Fluentd. Drop-in runtime, not a platform migration. Fully extensible.
Contact Us
Engineering-led · Repeat founders · Build with us

Americas
New York Metro Area

EMEA
Barcelona, Spain