Ingest On-Demand
Cut Analytics Costs by 80%
Stream from S3 only the events you need into Splunk · Datadog · Elasticsearch · CloudWatch Logs
ROI Calculator
Full ingestion vs. on-demand streaming from S3
View cost breakdown
Storage Streamer Workflow
Index on upload, query and stream on-demand. Works with your log analytics — View implementation on GitHub
Frequently Asked Questions
How It Works
Storage Streamer stores 100% of your logs in S3 at just $0.023/GB/month and indexes them at ingest time.
When you query, the system scans the index to find which files contain matching data. Only those files are streamed to your analyzer.
You pay analyzer license costs only on the data you actually query -- typically 5-30% of total volume -- not all your logs.
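The index-then-stream idea above can be sketched with a toy Bloom filter. This is an illustration only, not the product's actual index format: each S3 object gets a small bit array built at upload time, and at query time only files whose filter may contain the search term are streamed.

```python
import hashlib

class BloomFilter:
    """Tiny per-file Bloom filter (illustrative, not the real index)."""

    def __init__(self, size_bits=1024, hashes=3):
        self.size = size_bits
        self.hashes = hashes
        self.bits = 0

    def _positions(self, term):
        # Derive deterministic bit positions from the term.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{term}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, term):
        for p in self._positions(term):
            self.bits |= 1 << p

    def may_contain(self, term):
        # No false negatives; rare false positives are possible.
        return all(self.bits & (1 << p) for p in self._positions(term))

# "Index on upload": build one filter per hypothetical S3 object.
index = {}
for key, terms in {
    "logs/2024/05/01.json.gz": ["ERROR", "payment"],
    "logs/2024/05/02.json.gz": ["INFO", "checkout"],
}.items():
    bf = BloomFilter()
    for t in terms:
        bf.add(t)
    index[key] = bf

# "Query on-demand": find candidate files without scanning any data.
candidates = [k for k, bf in index.items() if bf.may_contain("payment")]
print(candidates)
```

Because Bloom filters never produce false negatives, every file that truly contains a match is in `candidates`; only those files would be streamed to the analyzer.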
Incident investigation — search months of historical logs during an outage without pre-paying for ingestion. Matching events stream to your analyzer within seconds.
Scheduled dashboard population — a Kubernetes CronJob streams the last hour's data from S3 on a recurring schedule, keeping Splunk/Datadog/Elastic dashboards current without full-volume ingestion.
Compliance and audit — retain years of logs in S3 at $0.023/GB/month. Stream to your analyzer only when auditors request specific time ranges.
Metric aggregation — convert S3 events into metric data points (counts, rates, percentiles) and publish to Datadog Metrics, Prometheus, CloudWatch, or Elastic, bypassing log ingestion entirely.
Store 100% of logs in S3 at a fraction of analyzer costs. Pay your analyzer license only on the data you actually query -- typically 5-30% of total volume.
Most customers see 70-80% cost reduction. Use the ROI calculator above to estimate your savings, or see our pricing page for details.
Bloom filter lookups identify matching files in under 1 second. Full event retrieval depends on result set size:
- ~100 events — 2-5 seconds
- ~10,000 events — 10-30 seconds
For sub-second alerting, keep critical log types streaming to your primary analyzer. Use Storage Streamer for incident investigation, scheduled dashboard population, compliance audits, and metric aggregation — see use cases for query workflows and savings per vendor.
Queries are initiated via REST API — from a script, runbook, or CronJob. Results stream back into your existing analyzer.
- Send a query with time range and search expression
- Bloom filter index identifies matching S3 files (<1 second)
- Matching events stream through Fluent Bit to your analyzer (Splunk HEC, Elasticsearch Bulk API, Datadog, CloudWatch)
- Events appear in Kibana / Splunk Search / Datadog Logs with original timestamps — alongside your live data
Example — find all payment errors in the last 6 hours:
curl -X POST http://streamer:8080/streamer/query \
-d '{"from":"now(\"-6h\")","to":"now()",
"search":"level == \"ERROR\" && message.includes(\"payment\")"}'
No separate UI to learn. Results are standard indexed events in your existing tool — search, filter, and dashboard them the same way you always do. Events are permanently ingested; your analyzer's standard retention policy applies.
For recurring workflows (dashboard population, compliance scans), schedule queries via Kubernetes CronJob.
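A sketch of what a CronJob's hourly query step might build, assuming explicit ISO-8601 timestamps are accepted alongside the relative `now()` expressions shown in the curl example; the window size and search expression are assumptions.

```python
import json
from datetime import datetime, timedelta, timezone

def last_hour_query(now=None):
    """Build a query payload covering the previous hour (illustrative)."""
    now = now or datetime.now(timezone.utc)
    start = now - timedelta(hours=1)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return {
        "from": start.strftime(fmt),
        "to": now.strftime(fmt),
        "search": 'level == "ERROR"',  # assumed search expression
    }

# Fixed timestamp for a reproducible example.
payload = last_hour_query(datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc))
print(json.dumps(payload))
# Inside the CronJob container, this payload would be POSTed to the
# streamer's query endpoint (see the curl example above).
```

Running this on an hourly schedule keeps dashboards current while ingesting only the matching events from each window.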
Comparisons
Federated Search scans every file in S3 via AWS Glue — no indexes. It's capped at 10 TB per search, ~100 sec/TB, and Splunk Cloud on AWS only.
Storage Streamer indexes files at upload so queries skip 99%+ of files in seconds. No scan caps, works with Splunk Cloud and Enterprise, and results stream with original timestamps for full analytics.
Archive Search scans every byte in your S3 archive at $0.10/GB per query — no indexes. Results are capped at 100K events and expire after 24 hours.
Storage Streamer indexes files at upload so queries skip 99%+ of files in seconds, with no per-query fees and no result caps.
Flex Logs still charges $0.10/GB to ingest everything into Datadog first — it just stores it cheaper after. Querying Flex data adds compute tier fees on top.
Storage Streamer skips ingestion entirely: logs go straight to your S3 at $0.023/GB and you stream only what you need to any analyzer.
Rehydration re-indexes archived data back into Datadog's hot tier — you pay ingestion costs a second time, and only data that was originally sent is available. Anything you filtered at ingest is gone.
Storage Streamer keeps 100% of your logs in S3 and queries in-place — no re-ingestion, no data loss, results in seconds.
Storage Streamer deploys in your AWS account — no data leaves your infrastructure. Deploy via a Terraform module that provisions S3 buckets, SQS queues, and IAM roles:

module "tenx_streamer" {
  source                                 = "log-10x/tenx-streamer/aws"
  tenx_api_key                           = var.tenx_api_key
  tenx_streamer_index_source_bucket_name = "my-app-logs"
}

Run terraform apply and logs start flowing to S3 within minutes. Full deploy guide →
Reduce Analytics Costs
Docker · Kubernetes · Works with your log analytics


