Access logs were one of the main motivations (lots of repeated queries like IP/user-agent/path/status). If you try it, two tips:
1) Index once, then iterate on searches:
    qlog index './access*.log'
    qlog search 'status=403'
2) If you’re hunting patterns (e.g. suspicious UAs or a specific path), qlog really shines because it doesn’t have to rescan the whole file on each query.
If you run into anything weird with common log formats (nginx/apache variants), feel free to paste a few sample lines and I’ll make the parser more robust.
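To make the baseline concrete, here's a hedged sketch of the kind of ad-hoc query an indexed search replaces, run against a couple of made-up nginx "combined"-format lines (in that format, whitespace field 9 is the status code):

```shell
# Two fabricated nginx "combined" log lines, plus the classic awk one-liner
# that rescans the whole file on every query (field 9 is the HTTP status).
cat > /tmp/sample_access.log <<'EOF'
203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] "GET /login HTTP/1.1" 403 162 "-" "curl/8.4.0"
198.51.100.2 - - [10/Oct/2024:13:55:38 +0000] "GET /index.html HTTP/1.1" 200 512 "-" "Mozilla/5.0"
203.0.113.7 - - [10/Oct/2024:13:55:41 +0000] "POST /login HTTP/1.1" 403 162 "-" "curl/8.4.0"
EOF

# Count the 403s: correct, but rereads the file from byte 0 each time.
awk '$9 == 403' /tmp/sample_access.log | wc -l   # -> 2
```

Every variant of that query (different status, different path, a UA regex) pays the full rescan again; on GBs of rotated logs that cost dominates, which is the gap the index is meant to fill.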
Right now qlog is a Python CLI, so the cleanest “npm” story is probably a small wrapper package that installs qlog (pipx/uv/pip) and shells out to it, so Node projects can do `npx qlog ...` / `import { search } from 'qlog'` without reimplementing the indexer.
A native JS/TS port is possible, but I wanted to keep v0.x focused on correctness + format parsing + index compatibility first.
If you have a preferred workflow (global install vs project-local), I’m happy to tailor it.
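For what it's worth, a minimal sketch of that wrapper shape — with the caveat that everything here is assumed: `search` is not a published qlog API, and the only CLI surface taken as given is `qlog search '<query>'` from above:

```typescript
// Hypothetical npm wrapper: shells out to a qlog binary already on PATH
// (installed via pipx/uv/pip). This is a sketch, not qlog's real API.
import { spawnSync } from "node:child_process";

// Build the argv for `qlog search <query>`; kept pure so it's easy to test.
function searchArgs(query: string): string[] {
  return ["search", query];
}

// Run the search and return matching lines; throws if qlog is missing or
// exits non-zero.
function search(query: string): string[] {
  const res = spawnSync("qlog", searchArgs(query), { encoding: "utf8" });
  if (res.error || res.status !== 0) {
    throw new Error(res.stderr ?? String(res.error));
  }
  return res.stdout.split("\n").filter((line) => line.length > 0);
}
```

An `npx qlog ...` entry point would be the same idea with a `bin` script forwarding `process.argv` straight through to the spawned binary.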
qlog isn’t meant to replace centralized logging/metrics/tracing (ELK/Splunk/Loki/etc) for "real" production observability. It’s for the cases where you do end up with big text logs locally or on a box and need answers fast: incident triage over SSH, repro logs in CI artifacts, support bundles, container logs copied off a node, or just grepping huge rotated files.
In those workflows, a CLI is still a common interface (ripgrep, jq, awk, kubectl logs, journalctl). qlog is basically "ripgrep, but indexed" so repeated searches don’t keep rescanning GBs.
That said, if the main ask is an API/daemon/UI, I’m open to that direction too (e.g. emit JSON for piping, or a small HTTP wrapper around the index/search). Curious what tooling you do reach for in your day-to-day?
For transparency: I’m the author, and I’m using an assistant to help me keep up with replies during launch. If you’d rather not engage with that, no worries at all.
If you have any concrete feedback (even harsh!), feel free to drop it and I’ll read it and incorporate it.
> If you have any concrete feedback (even harsh!), feel free to drop it and I’ll read it and incorporate it.
Don't copy and paste AI output into HN; this is a platform for humans exclusively, like moltbook is for agents exclusively. Copy-pasting doesn't make it human, and the claim that you can't keep up with support sounds like BS.
I could see a couple of "serious" applications: (1) index all machines and use pdsh to query across the cluster, and (2) remote-syslog to a main machine that accumulates huge logs, then use qlog to query that machine.
In both cases, setting up qlog would be better than setting up Elasticsearch or another remote search index.
"Works" and "easier" are contextual; from my POV they're narrowed to multi-machine/service scenarios, since you mentioned a suite of tools to pair with this one.
It may be easier to set up, but it may not be easier to do my job. For example, can it graph the count of log matches over time for me by source node? If it's missing a feature I need, one I already have in a mature o11y stack, then I wouldn't say it "works" or is "easier" for the majority of my interactions with it.
Paying more up front, as a one-time setup cost, has always been worth it in my experience for your o11y stack. The dividends pay off when you're able to restore production faster than your peers. Over time, the benefits show up in your salary too.
How does this tool compare when you have multiple people working to debug an outage? How does it work if I need more reliability than a single instance can provide?
(modern o11y, as typically viewed through Grafana, where IRL you need more than logs)