Scale Logs 10x

Chrome V8 made the web fast.
The 10x Engine accelerates observability.

The Team

Tal Weiss and Dor Levi founded VisualTao (Sequoia-backed, acquired by Autodesk) and OverOps (Lightspeed-backed). At OverOps they scaled Java production debugging across 250+ enterprises, processing billions of events.

Log10x applies that experience to the problem underneath every observability cost conversation: the stack still processes log data like it’s 2010.

Self-funded by repeat founders. Direct engineering support from the team that built it.

The Shift

Every tool in the observability stack (Fluent Bit, Splunk, Datadog) treats log events as raw text: parse the JSON, evaluate the regex, event by event, line by line.

AI-generated code ships faster and logs more. Every deployment adds new logging statements to the codebase and unstructured volume to the pipeline, and the growth is structural.

This is exactly where JavaScript was before Chrome V8. Untyped, interpreted, slow. V8’s insight was that objects fall into a small number of recurring shapes, so at runtime each one gets assigned a hidden class for direct, compiled access.

Machine data works the same way. A typical microservice in production emits 40–200 distinct log structures, repeated millions of times per hour. The structure is the same whether the code is human-written or AI-generated, and the processing costs grow with every line.
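The recurring-shape claim is easy to check against your own logs. A minimal sketch (illustrative only, assuming newline-delimited JSON events): treat each event’s sorted key set as its shape and count how often each distinct shape recurs.

```python
import json
from collections import Counter

def shape_of(event: dict) -> tuple:
    """A log event's 'shape' is its sorted key set (one level deep)."""
    return tuple(sorted(event.keys()))

def count_shapes(lines):
    """Count how often each distinct structure recurs in a log stream."""
    shapes = Counter()
    for line in lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines
        shapes[shape_of(event)] += 1
    return shapes

# Three events, two distinct shapes: the structure repeats far more
# often than the values do.
logs = [
    '{"level": "error", "pod": "web-1", "msg": "CrashLoopBackOff"}',
    '{"level": "error", "pod": "web-2", "msg": "CrashLoopBackOff"}',
    '{"latency_ms": 42, "route": "/api/v1/items"}',
]
shapes = count_shapes(logs)
print(len(shapes))  # 2 distinct shapes across 3 events
```

Run against a real service’s output, the shape count typically plateaus quickly while the event count keeps climbing.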

Filtering, sampling, and routing don’t fix this. V8 didn’t “filter slow JavaScript.” It made JavaScript fast.

The Engine

The 10x Engine builds a symbol vocabulary from code and binaries ahead of time. The Runtime Engine matches log structure against that vocabulary to process events as typed, class-based objects instead of raw text.
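A toy illustration of the ahead-of-time step. The real engine works on code and binaries; this sketch only scans Python-style source for logger format strings and registers each one as a vocabulary entry (the function name and regex here are hypothetical, not the product’s API):

```python
import re

# Hypothetical vocabulary builder: find logger calls in source and
# register each distinct format string as a known log shape.
LOG_CALL = re.compile(r'log(?:ger)?\.(?:info|warning|error)\(\s*"([^"]+)"')

def build_vocabulary(source: str) -> dict:
    """Map each format string found in the source to a symbol id."""
    vocab = {}
    for match in LOG_CALL.finditer(source):
        template = match.group(1)
        vocab.setdefault(template, len(vocab))
    return vocab

source = '''
logger.error("failed to start container %s in namespace %s")
logger.info("pod %s scheduled on node %s")
logger.error("failed to start container %s in namespace %s")
'''
vocab = build_vocabulary(source)
print(vocab)  # two distinct templates, even though one appears twice
```

The key property: the vocabulary is built once, before any log event exists, so the runtime never has to discover structure from the text itself.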

Take a Kubernetes pod that logs a crash loop, one of roughly 80 event shapes the kubelet produces. A traditional pipeline parses that JSON fresh at every stage (collector, aggregator, analyzer), repeating identical work three to five times per event, millions of times per hour.

{
  "stream": "stdout",
  "log": "E0925 14:32:45.678901 12345 pod_workers.go:836]
    Error syncing pod abc123-4567-890 (UID: def456-7890-
    1234-5678), skipping: failed to \"StartContainer\"
    for \"web\" with CrashLoopBackOff: \"back-off 5m0s
    restarting failed container=web pod=web-app_production
    (abc123-4567-890) in namespace=default, reason: high
    memory pressure on node worker-3.us-west-2 with
    current usage 89.45% (threshold: 80%), affected
    resources include disk I/O at 1200 ops/sec and
    network traffic of 4.56GB from source IP
    192.168.5.42\"",
  "docker": {
    "container_id": "a7ce4c736be5beb8ef0859791b3c77de7bcce8bfc307e017c2fb7bcfa29ccde7"
  },
  "kubernetes": {
    "container_name": "fluentd-10x",
    "namespace_name": "default",
    "pod_name": "foo-fluentd-10x-68s2p",
    "container_image": "ghcr.io/log-10x/fluentd-10x:0.22.0-jit",
    "container_image_id": "ghcr.io/log-10x/fluentd-10x@sha256:b5263a6bef925f47c1f43ee06bb46674461da74059bd99a773e5cef1a4e4f8f8",
    "pod_id": "5a9cc9c8-3a71-41af-bffe-0a0914253361",
    "pod_ip": "192.168.33.78",
    "host": "ip-192-168-57-207.ec2.internal",
    "labels": {
      "app.kubernetes.io/instance": "foo",
      "app.kubernetes.io/name": "fluentd-10x",
      "controller-revision-hash": "f4789b8fd",
      "pod-template-generation": "1"
    }
  }
}

The 10x Engine maps that shape once. Every subsequent instance gets direct, typed access to instance- and class-level fields: no parsing, no regex, no per-event overhead. The result is structured access to every log event, reducing processing costs 50–80% compared to traditional log ingestion.
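Conceptually, the runtime step amounts to: resolve a shape the first time it appears, then reuse the compiled accessor for every later event with the same structure. A simplified sketch (the class and method names are illustrative, not the engine’s actual API):

```python
import json
from operator import itemgetter

class ShapeRegistry:
    """Resolve each distinct key set once; reuse compiled accessors after."""

    def __init__(self, fields):
        self.fields = fields
        self._accessors = {}   # shape -> accessor, built on first sight

    def read(self, event: dict):
        shape = tuple(sorted(event))
        getter = self._accessors.get(shape)
        if getter is None:                 # first instance: map the shape
            getter = itemgetter(*self.fields)
            self._accessors[shape] = getter
        return getter(event)               # later instances: direct access

registry = ShapeRegistry(["stream", "log"])
event = json.loads('{"stream": "stdout", "log": "E0925 ... CrashLoopBackOff"}')
stream, log = registry.read(event)
print(stream)  # stdout
```

In this sketch the per-shape work is trivially cheap; in a real pipeline it stands in for the JSON parsing and regex evaluation that would otherwise repeat at the collector, aggregator, and analyzer for every single event.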

The Signal

Every AI model runs against the same constraint: a finite context window. Raw logs don’t fit: 2 million events per hour can’t go into a prompt. Even sampled, most lines repeat the same structures, burning tokens on redundant text while critical events are missed.

Every event exits the engine as a typed object, and aggregation condenses millions of those objects into pattern digests: which log shapes fired, how often, which fields changed. The same query that chokes on raw logs fits in a single prompt.
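A minimal sketch of the digest idea (illustrative, not the engine’s output format): group typed events by shape, count occurrences, and record which fields varied, so millions of events compress into a few lines of structured summary.

```python
from collections import defaultdict

def digest(events):
    """Condense typed events into per-shape pattern digests."""
    buckets = defaultdict(lambda: {"count": 0, "values": defaultdict(set)})
    for event in events:
        shape = tuple(sorted(event))
        bucket = buckets[shape]
        bucket["count"] += 1
        for key, value in event.items():
            bucket["values"][key].add(value)
    # report which fields changed across instances of each shape
    return {
        shape: {
            "count": b["count"],
            "varying_fields": sorted(k for k, v in b["values"].items() if len(v) > 1),
        }
        for shape, b in buckets.items()
    }

events = [
    {"level": "error", "pod": "web-1", "msg": "CrashLoopBackOff"},
    {"level": "error", "pod": "web-2", "msg": "CrashLoopBackOff"},
    {"level": "error", "pod": "web-3", "msg": "CrashLoopBackOff"},
]
print(digest(events))  # one shape, count 3, only "pod" varied
```

Three raw events become one digest line; at millions of events per hour, the same compression is what makes the result fit a model’s context window.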

The engine runs inside your infrastructure, so frontier models can operate on structured output without exposing raw logs to external services.

Design Principles

Predictable Pricing

Per-GB pricing punishes growth. Log10x is priced per infrastructure node, not per byte ingested, so costs scale with your architecture instead of your log volume.

Zero Data Egress

The 10x Engine runs inside your infrastructure: your laptop, log forwarder, log analytics platform, Kubernetes, S3, or Azure Blob Storage. No log events leave your environment.

BYO Stack

Works with your existing stack: Splunk, Datadog, Elastic, OTel, Fluent Bit, Fluentd. Drop-in runtime, not a platform migration.

Contact Us

Engineering-led · Repeat founders · Build with us

Americas · New York Metro Area

EMEA · Barcelona Business District, Barcelona, Spain