This issue exists to the right of your solution and is (for now) out of scope, but the biggest issue I have with security data lakes is the need to (easily) get both row-based data and visualizations. Back when I had access to a well-built and cared for Splunk environment, I would constantly run queries, build visualizations, go back to the results index, tweak the query, go back to viz, etc. This feedback loop is important and allows for fast iteration, especially if you are conducting a high-stakes investigation and need answers rapidly. I should be able to look at my available fields and tweak the viz accordingly in under a few seconds; preferably in one mouse click.
Now I live on an ELK stack and I experience nothing but full-time agony as I switch between Kibana and Kibana Lens constantly. It's clear they are two completely separate "products" built for different use-cases. The experience constantly reminds me that they were not purpose-built for how I use them, unlike Splunk.
Increasingly we are moving towards the reality of a security data lake, and all I can think is that I'm about to lose what little power I had left as I have to move to something like Mode, Sisense, or Tableau which again, were not purpose-built for these use-cases and even further separate the query/data discovery and visualization layers.
I hate how crufty and slow Splunk has gotten as an organization, and they use their accomplishments from 15 years ago to justify the exorbitant price they charge. I really hope the OSS/next-gen SaaS options can fill this need and security data lake becomes a reality. But for that to happen, more focus is needed on the user experience as well.
Regardless, very cool stuff and could definitely fill a need for organizations that are just starting to dip toes into security data lakes. I wish you success!
I completely agree with you on the need for a fully integrated solution with great visualizations, without hosting additional tools that aren't purpose-built! Unfortunately there are very few SIEMs that get this right today.
Here's how we are thinking of it. We think it's important for a successful security program to first have high quality data, and this is why we want to help every organization build structured security data lakes to power their analysis using our open source project. The Matano security lake can sit alongside their SIEM and be incrementally adopted for data sources that wouldn't be feasible to analyze otherwise.
Our larger goal as a company, though, is to build a complete platform that allows a security data lake to fully replace a traditional SIEM -- including a UI and collaborative features that give you that great feedback loop for fast iteration in detection engineering and threat hunting, as you mentioned. Stay tuned; I think you will be excited by what we are building!
I’m assuming the difference is: “big $$ license fees” for on-prem is $X a year, while “sweet saas revenue” is $A a year, $B per user, $C for compute, $D for storage, and $E for requests.
As a large company, what are the things you are more than happy to pay for with on-prem?
The reason I’m asking: this feels like the largest issue with cloud saas, which is one of the more popular implementations of open-core for B2B. Not saying Splunk is open-core, but it’s related to above/dbt cloud discussion.
Enterprise customers have the highest propensity to pay, but don’t need or want their cloud offering.
Mid-tier customers actually prefer a managed service by their cloud provider (aws/gcp/azure), because it strikes a balance between easy AND it works within their vpc/iam/devops. But this cuts off open-core companies' main revenue, so they start making ELv2 licenses (elastic, airbyte, etc) which makes things harder on mid-tier.
Small customers are the ones who love saas the most, but have the least ability to pay, have the least need for powerful tools, and will probably grow out of being a small customer…
I’m curious if there are any companies which are: source code available, commercial license, allow you to fork/modify the source code, only offer on-prem (no cloud saas offering), want the mega-clouds to offer a managed service. BUT the commercial license requires any companies over 250 employees or $X revenue (docker desktop style) to pay a yearly license fee.
The biggest issue I have with data lakes is they always without fail turn into a data cesspool. The more you add the less ROI you get out. And yes using Splunk as an example it becomes an organisational cost problem. I have spent way too many hours arguing with them over billing.
The only viable solution is design metrics into your platform properly from the ground up rather than trying to suck them out of a noisy datasource for megabucks.
I loaded the GitHub link, bracing myself for yet another AGPL license, but no, it's Apache 2! So I wanted to say thank you for that and I hope to take a deeper look when I'm back at my desk because trying to keep Splunk alive and happy is a monster pain point. There are so many data sources we'd love to throw at it but we don't have the emotional energy to put up with Splunk crying about it
- Doing real time data processing on tera/peta bytes involves a lot of IO, which is a significant part of the cost in AWS. Things like Athena are simply not cheap to run at that scale.
- With time series data, the emphasis is usually on querying recent data, not all of the data. You retain older data for auditing for some time. But this can essentially be cold storage.
- Alerting-related querying, especially, is effectively against recent data only. There's no good reason for this to be slow.
- People tend to scale Elasticsearch for the whole data set instead of just recent data. However, with suitable data stream and index life cycle management policies, you can contain the cost quite effectively.
- Elastic Common Schema is nice but also adds a lot of verbosity to your data and queries, bloating individual log entries to a KB or more. Parquet is a nice option for sparsely populated column-oriented data of course. Probably the online disk storage is not massively different from a well tuned elastic index.
- Elastic and Opensearch have both announced stateless as their next goal. So, architecturally similar to this and easier to scale horizontally.
- SIEM is just one use case. What about APM, log analytics, and other time series data? Security events usually involve looking at all of that.
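The row- vs column-oriented storage point above can be sketched with stdlib Python alone. This is a crude stand-in for what Parquet does (before its extra encodings): grouping each field's values together and storing the verbose ECS-style keys once typically compresses far better than row-oriented NDJSON with keys repeated on every record. The field names and data are made up for illustration.

```python
import json
import random
import zlib

random.seed(7)

# Build some ECS-style events: verbose repeated keys, one high-entropy field.
events = [
    {
        "@timestamp": f"2023-01-01T00:{i // 60:02d}:{i % 60:02d}",
        "event.category": ("network", "authentication", "process")[i % 3],
        "event.action": ("allowed", "denied")[i % 2],
        "source.ip": f"{random.randint(1, 254)}.{random.randint(0, 255)}."
                     f"{random.randint(0, 255)}.{random.randint(1, 254)}",
    }
    for i in range(2000)
]

# Row-oriented: newline-delimited JSON, keys repeated on every record.
row_blob = "\n".join(json.dumps(e, sort_keys=True) for e in events).encode()
row_size = len(zlib.compress(row_blob, 9))

# Column-oriented: each field's values stored together, each key stored once.
columns = {k: "\n".join(e[k] for e in events) for k in events[0]}
col_blob = "\n".join(f"{k}\n{v}" for k, v in sorted(columns.items())).encode()
col_size = len(zlib.compress(col_blob, 9))

print(f"row-oriented compressed:    {row_size} bytes")
print(f"column-oriented compressed: {col_size} bytes")
```

Real Parquet with ZSTD goes further (dictionary and run-length encoding per column), but the grouping effect alone is usually enough to show the gap.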
Matano is completely serverless and stores all data as ZSTD-compressed Parquet files in dirt-cheap object storage, allowing you to bring your own analytics stack for queries on large amounts of data for things like investigations and threat hunts. Since we store data in a columnar format and plug in query engines like Snowflake that are optimized for analytical processing, queries on specific columns will run much faster than they would on a search engine database like Elasticsearch, which would require maintenance to scale.
I think it's important to understand that search engines and OLAP/data warehouse query engines have fundamental architectural differences that offer pros/cons for different use cases.
For enterprise security analytics on things like network or endpoint logs which can hit 10-100TB+/day, using anything other than a data lake is simply not a cost-effective option. Apache Iceberg was created as a big data table format for exactly this type of use case at companies like Netflix and Apple.
You are not wrong, but I do think realtime and olap have been converging a bit for a while.
Stateless Elasticsearch and Opensearch are actually moving to a similar model as what Matano proposes. Both have made announcements for stateless versions of their respective forks. Data at rest will live in S3 and there are no more clusters, just auto-scaling ingest and query nodes that coordinate via S3 and allow you to scale your writes and reads independently. Internal Elasticsearch and Opensearch data formats are of course heavily optimized and compact as well. Recent versions have e.g. added some more compression options and sparse column data support.
But they are also optimized for good read performance. There's a tradeoff. If you write once and read rarely, you'd use more heavy compression. If you expect to query vast amounts of data regularly, you need something more optimal because it takes CPU overhead to de-compress.
For search and aggregations, you either have an index or you basically need to scan through the entirety of your data. Athena does that. It's not cheap. Lambda functions still have to run somewhere and receive data. They don't run locally to buckets. Ultimately you pay for compute, bandwidth, and memory. Storing data is cheap but using it is not. That's the whole premise of how AWS makes money.
Splunk and Elasticsearch are explicitly aimed at real-time use cases (dashboards, alerts, etc.), which is also what Matano seems to be targeting. But it can also deal with cold storage. Index life cycle management allows you to move data from hot, warm, and cold storage. Cold here means snapshot in S3 that can be restored on demand for querying. It also has rollovers and a few other mechanisms to save a bit on storage. So, it's not that black and white.
Computing close to where the data lives is a good way to scale. Having indexing and caching can cut down on query overhead. That's hard with lambdas, athena, and related technology. But those are more suited for one off queries where you don't care that it might take a few seconds/minutes/hours to run. Different use case.
Hi Shaeq and Samrose - congrats on the launch! Matano looks great.
Out of curiosity: at some point I believe you were working on a predecessor called AppTrail, which tackled (customer-facing) audit logs. It was something I was interested in at the time (and still am! I would've loved to use that).
Would you perhaps be willing to share your learnings from that product, and (I assume) why it evolved into Matano?
Thank you! Yes, with AppTrail we wanted to solve the pain points around SaaS audit logs, but since it was a product that needed to be sold to and integrated into B2B startups, rather than the enterprises that felt the pain points and needed audit logs in their SIEM, we couldn't find a big enough market to sell it.
We realized that the big problem was that most SIEMs out there today do a poor job of pulling and handling data from the multitude of SaaS and Cloud log sources that orgs have today, and decided to build Matano as a cloud-native SIEM alternative :)
Amazon Security Lake's main value prop is that it is a single place where AWS / partner security logs can be stored and sent to downstream vendors. As such, Amazon only writes OCSF-normalized logs to the parquet-based data lake for its own data in a fully managed way (VPC flow logs, CloudTrail, etc.) and leaves it to customers to handle the rest.
The Amazon Security Lake offering is built on top of Lake Formation, which itself is an abstraction around services such as Glue, Athena, and S3. Security Lake is built using the legacy Hive style approach and does not use Athena Iceberg. There is a per-data cost associated with the service, in addition to the costs incurred by other services for your data lake. Looks like the primary use case of the service is being able to store first-party AWS logs across all your accounts in a data lake and being able to route them to analytical partners (SIEM) without much effort. It does not seem very useful for an organization that is looking to build its own security data lake with more advanced features, as you will still have to do all the work yourself.
Matano has a broader goal: to help orgs in every step of transforming, normalizing, enriching, and storing all of their security logs in a structured data lake, as well as giving users a platform to build detections-as-code using Python & SQL for correlation on top of it (SIEM augmentation/alternative). All processing and data lake management (conversion to Parquet, data compaction, table management) is fully automated by Matano, and users do not need to write any custom code to onboard data sources.
Matano can ingest data from Cloud, Endpoint, SaaS, and practically any custom source using the built-in log transformation pipeline (think serverless Logstash). We are built around the Elastic Common Schema, and use Apache Iceberg (ACID support, recommended for Athena V2+). Matano's data lake is also vendor neutral and can be queried by any Iceberg-compatible engine without having to copy any data around (Snowflake, Spark, etc.).
What distinguishes a SIEM from traditional log analysis?
I know the feature set is oriented towards SIEM, but it seems like a superset of regular log analysis. I don't have a need for a SIEM now, but this looks good even for non-security logs.
I'm a vendor in the cybersecurity space, so not a potential customer (feel free not to waste time answering), but I'm just intellectually curious who you're targeting this at. High-skill tech companies who are just building up a security program? I don't see most security teams building their own SIEM-ish solution just because they really don't have the chops or resources to do it. OTOH, it would be a big rip-out operation for F100 companies to change to this from Splunk et al.
Many enterprises using Splunk are already being forced to purchase products like Cribl to route some of their data to a data lake, because writing it all to Splunk is just way too expensive at that scale (1-100TB+/day, i.e. 7-figure contracts).
But a data lake shouldn't just be a dump of data, right? Matano OSS helps organizations build high-value data lakes in S3 and reduce their dependency on a SIEM by centralizing high-throughput data in object storage, using Matano to power investigations. To give you an example, one company is using Matano to collect, normalize, and store VPC Flow logs from hundreds of AWS accounts, which was too expensive with a traditional SIEM.
Matano is also completely serverless and automates the maintenance of all resources/tables using IaC so it's perfect for smaller security teams on the cloud dealing with a large amount of data and wanting to use a modern data stack to analyze it.
(not them, but in this space with major enterprises and gov agencies deploying Graphistry)
We are pretty active here with security cloud/on-prem data lakes teams as a way to augment their Splunk with something more affordable & responsive for bigger datasets. Imagine stuffing netflow or winlogs somewhere at TB/PB scale and not selling your first born child. A replacement/fresh story may happen at a young/midage tech co, and a bunch of startups pitching that. But for most co's, we see augmentation and still needing to support on-prem detection & response flows.
It's pretty commodity now to dump into say databricks, and we work with teams on our intelligence tier with GPU visual analytics, GPU AI, GPU graph correlation, etc. to make that usable. Most use us to make sense of regular alert data in Splunk/neo4j/etc. However, it's pretty exciting when we do something like looking at vpc flow logs from a cloud-native system like databricks and can thus look at more session context and run fusion AI jobs like generating correlation IDs for pivoting + visualizing.
Serverless is def interesting, but I've only seen it for light orchestration. Everyone big has an on-prem footprint, which is an extra bit of fun for the orchestration vs investigation side.
I have been exploring this realm of SIEM, XDR, NDR, etc. Sure, all proprietary SIEMs are expensive. But what is not clear is how you are going to price it. Security teams have dedicated budget. If you come in cheaper than them, then you are destroying your TAM, because I know customers would not mind paying those license fees. An OSS GTM might work, but it might also work against your TAM.
SIEM costs were rapidly ballooning, and we were being charged by RAM. RAM?? Of all things!!
After our SIEM costs for ELK ramped up to where Splunk was - we just bought Splunk instead. I imagine there are many security teams out there that would entertain a cheaper alternative that isn't priced by RAM.
The reason for that is that near real-time detection of threats requires aggregation of terabytes of data according to rules (a continuous GROUP BY on thousands of columns over a sliding window) -- and these aggregates by design have to be stored in RAM.
Otherwise these detections stop being near-real-time and become offline detection instead, just like on any other SQL server.
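The in-RAM sliding-window aggregation described above can be sketched in a few lines: per-key state has to stay resident in memory so that "how many events for this key in the last N seconds" can be answered without rescanning storage. The class and field names here are illustrative, not any vendor's API.

```python
from collections import defaultdict, deque

class SlidingWindowCounter:
    """Per-key event counts over a sliding time window, kept entirely in memory."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self._events = defaultdict(deque)  # key -> timestamps still in the window

    def add(self, key, ts):
        dq = self._events[key]
        dq.append(ts)
        self._evict(dq, ts)

    def count(self, key, now):
        dq = self._events[key]
        self._evict(dq, now)
        return len(dq)

    def _evict(self, dq, now):
        # Drop timestamps that have fallen out of the window.
        while dq and dq[0] <= now - self.window:
            dq.popleft()

# Toy rule: count failed logins per source IP over a 60-second window.
failed = SlidingWindowCounter(window_seconds=60)
for ts in (0, 5, 12, 20, 31):
    failed.add("10.0.0.1", ts)
print(failed.count("10.0.0.1", now=31))   # all 5 events inside the window
print(failed.count("10.0.0.1", now=100))  # window has slid past all of them
```

Multiply this by thousands of rules and millions of keys and the RAM bill the parent comment describes follows directly.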
We think building a more efficient solution using data lakes is a win-win because it unlocks additional use cases for customers and allows them to analyze larger datasets within the same budget.
Solutions that offer an order of magnitude better performance than what is available today are critical for the industry, because the amount of data teams are dealing with is growing much faster than their budgets!
What distinguishes Matano's existing or planned products from Google Chronicle? Would you have any limits on data ingestion or retention?
Also, Python detections sound horrible! I love Python, but it sounds like you haven't considered the challenges of detection engineering. This is one of my main areas of expertise, if you will. You should think more along the lines of flexible SQL than Python. People who write detection rules for the most part don't know Python, and even if they do, it would be a nightmare to use for many reasons.
I hope someone from your team reads this comment: DO NOT try to invent your own query language, but if you do, DON'T start from scratch. Your product could be the best, but people who like the fabulous Splunk need to also like it. And for a security data lake, you must support Sigma rule conversion into your query/rule format. Python is a general purpose language; there are very good reasons why no one at Splunk, Elastic, Graylog, Google, or Microsoft uses Python for this. Don't learn this hard lesson with your own money. Querying needs to be very simple, and most importantly you need to support regex with capture groups and the equivalent of the "|stats" command from Splunk if you want to quickly capture market share. I have used and evaluated many of these tools and have written a lot of detection content.
Your users are not coders, DB admins, or exploit developers. They are really smart people whose focus is understanding threat actors and responding to incidents -- not coding or anything sophisticated. Founders/devs with FAANG backgrounds have a hard time grasping this reality.
- Matano has real-time Python + SQL detections-as-code with advanced correlation support. Chronicle uses inflexible YARA-like detection rules, iirc
- Matano supports Sigma detections by automatically transpiling them to the Python detection format
- Matano has an OSS vendor-agnostic security data lake, can work with multiple clouds, and lets you bring your own query engine (Snowflake, Spark, Athena, BigQuery Omni). Chronicle is a proprietary SIEM that uses BigQuery under the hood and cannot be used with other tooling.
There are no limits on data retention or ingestion with Matano, it's your S3 bucket and the compute scales horizontally.
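Matano's actual detection interface isn't shown anywhere in this thread, so as a purely hypothetical sketch of the "Python detections-as-code" idea in the list above: a detection is just a function over one normalized event record that returns True to alert. The `detect` name, the `deep_get` helper, and the ECS-ish field paths are all assumptions for illustration.

```python
def deep_get(record, path, default=None):
    """Walk a dotted path through nested dicts, e.g. 'aws.cloudtrail.event_name'."""
    cur = record
    for part in path.split("."):
        if not isinstance(cur, dict):
            return default
        cur = cur.get(part, default)
    return cur

def detect(record):
    """Alert on AWS console logins that were completed without MFA."""
    return (
        deep_get(record, "event.action") == "ConsoleLogin"
        and deep_get(record, "aws.cloudtrail.additional_eventdata.MFAUsed") == "No"
    )

no_mfa_login = {
    "event": {"action": "ConsoleLogin"},
    "aws": {"cloudtrail": {"additional_eventdata": {"MFAUsed": "No"}}},
}
print(detect(no_mfa_login))                          # True
print(detect({"event": {"action": "AssumeRole"}}))   # False
```

The appeal of this style is that a detection is plain, testable code: you can unit test it, version it, and review it like any other software.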
Thanks for the response. Chronicle uses YARA-L, and BigQuery uses SQL on steroids; both are difficult to start working with. I would want someone who has never even looked at Python code to be able to query the data. Having a different query language than detection language is also a big problem (e.g. Graylog). I will keep an open mind; I prefer Python, but it is not ideal for getting a wider audience (general IT staff) to use it. Junior staff prefer Chronicle over Splunk because they can put in an IP or domain and just get results. Now ask them to learn Python and you have a revolt.
I looked at your sample detection on the home page. It's fine for me, but I can't get others to use it. I promise you, doing a little market research on this outside of the tech bubble will save you a lot of money and resources.
Long term, I believe Python (along with good ol' SQL for correlation) is the best language to model the kind of attacker behaviours companies are dealing with in the cloud, and a lot of the difficulties with it are not inherent but around tooling. For example, in our cloud offering we plan on building abstractions that let you search for an IP or domain and get results with the click of a button, as well as the ability to automatically import Sigma rules and test Python logic directly with the instant feedback loop of a "low-code" workflow.
Currently we focus on more modern companies with smaller teams that have engineers that can write Python detections and actually prefer it over a custom DSL that needs to be learned and has restrictions.
Keep in mind there are more people in general who know Python than are trained in a vendor-specific DSL, so perhaps long term the role of a security analyst will evolve to overlap with that of an engineer. We are already seeing more and more roles require basic proficiency in Python as attacks on the cloud become increasingly complex :)
> get results with a click of a button as well the ability to automatically import Sigma rules and test Python logic directly with an instant feedback loop of a "low-code" workflow.
Ok, so importing Sigma rules is the easy part; it takes on average 2-3 hours of tuning an imported Sigma rule to get it to where it is usable in a large environment where you have all sorts of false positives. The language in question should not be making a fuss about indentation or importing the right module. You never (to my knowledge) need loops, classes, etc. Python is great, just not purpose-built for this use case. Most companies, even Fortune 50 companies, can't get many people on their security team who know or are willing to learn Python well. You need someone to write/maintain it, someone to review it, and the people responding to detections would want to read it and understand it. I am not saying Python is difficult, just that you have to take the time to learn it. Detection engineering is all about matching strings and numbers and analyzing or making decisions on them. You have to encode/decode things in Python and deal with all kinds of exceptions; it is very involved compared to alternatives like EQL, SPL, YARA-L, etc. But then again, maybe your customers who want to run their own SIEM data lake in the cloud might also have armies of Python coders. Generally speaking, though, it is rare (but it happens) to find people interested in learning Python but also doing boring blue team work. I would love Python so long as I don't have to deal with newer Python versions requiring refactoring rules.
> Currently we focus on more modern companies with smaller teams that have engineers that can write Python detections and actually prefer it over a custom DSL that needs to be learned and has restrictions.
Fair enough. Honestly, if your focus is Silicon Valley, Python is great. You will just get a reputation for what your product demands of users if you ever want to branch out. The only time I have ever done a coding interview was with a startup like a typical YC-funded company. I am just warning you the world is different outside the bubble. I would want to recommend your product, and I will probably mention it to others, but it looks like you know what you want.
> Keep in mind there are more people in general who know Python than are trained in a vendor-specific DSL, so perhaps long term the role of a security analyst will evolve to overlap with that of an engineer. We are already seeing more and more roles require basic proficiency in Python as attacks on the cloud become increasingly complex :)
Attacks on the cloud are not that complex, but Python does not make the job easier, just more complicated. And I spend at least 10-15% of my time writing Python, so I am not hating on it.
The gold standard is Splunk. Nothing, I mean absolutely no technology, exists that even comes close to Splunk. Not by miles. Not any DSL or programming language. Do you know why CS Falcon is the #1 EDR, similarly all alone at the top? Splunk!
Even people who leave Splunk to start a competitor, like Graylog and Cribl, can't get close.
A detection engineer is a data analyst (not a scientist or researcher) who understands threat actors' TTPs and the enterprise they are defending well. I wish I weren't typing on mobile so I could give you an example of what I mean. None of the Sigma rules out there come close to the complexity of some of the rules I have seen or written. Primarily, I need to piece together conditions and analysis functions rapidly to generate some content, and ideally be able to visualise it. It doesn't matter how good you are with a language; can you work with it easily and rapidly enough to analyze the data and make sense of it? Maybe you can get Python to do this; I haven't tried. But you are not going to compete with Splunk or Kusto like that. The workflow is more akin to shell scripting than coding, where you can easily pipe and redirect IO.
E.g.: "Find GCP service account authentication where subsequently the account was used to perform operations it rarely performs and from IP addresses located in countries from which there has not been a recent login for that project in the last 60 days"
I am just giving you an example of what a detection engineer might want to do, especially if they've been spoiled by something like Kusto or Splunk SPL. That's the future not simple matches.
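That rare-country example can be sketched in plain Python over normalized events; in a real data lake this would be a SQL join against a 60-day baseline table, but the logic is the same. All field names (`project`, `country`, `day`) and the sample data are made up for illustration.

```python
def find_rare_country_auth(events, window_days=60):
    """Flag authentications from a (project, country) pair not seen in `window_days`.

    `events` are dicts with illustrative fields: project, country, and day
    (an integer day number). A real implementation would express this as a
    self-join or window function in SQL rather than an in-memory scan.
    """
    last_seen = {}
    alerts = []
    for e in sorted(events, key=lambda e: e["day"]):
        key = (e["project"], e["country"])
        prev = last_seen.get(key)
        if prev is None or e["day"] - prev > window_days:
            alerts.append(e)
        last_seen[key] = e["day"]
    return alerts

events = [
    {"project": "prod", "country": "US", "day": 1},    # first sighting -> alert
    {"project": "prod", "country": "US", "day": 10},   # seen 9 days ago -> quiet
    {"project": "prod", "country": "CN", "day": 30},   # new country -> alert
    {"project": "prod", "country": "US", "day": 100},  # 90-day gap -> alert
]
print([e["day"] for e in find_rare_country_auth(events)])  # [1, 30, 100]
```

The hard part the commenter is pointing at isn't this logic; it's being able to compose and tweak it as quickly as a Splunk or Kusto pipe chain.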
The roles of a security analyst and engineer already overlap; modern security teams have a lot of cross-functionality, where everyone is in part involved and embedded with other teams that share the same security objective (detecting threat actors, in this case).
Just to show you the state of things: I can't get a team solely dedicated to security automation to write one simple Python script to solve a humongous problem that we have. We spent over a year arguing and battling over the solution. The hangup was that they would be responsible for maintaining it, so instead they wanted an outside vendor to do it.
At a completely different company, I wrote a small Python script to make my life analyzing certain incidents easier, and that started a territorial battle.
What you consider a simple Python script requires consultants and many meetings and reviews and approvals at big companies, and they're constantly worried the guy who wrote the script can't be replaced easily. I am telling you all this so you understand how the people making purchasing decisions think: they are very much more on the buy side of things than build once a company gets to a certain size, when its main business is not technology services.
So I hope you also think about offering managed detection/maintenance services (kind of like where SOC Prime is going) in the long term.
Finally, I think your strategy is to have an exit/IPO in a few years, and if you don't need to look much past that, I have no doubt you will succeed. And I am very happy to hear about another player in this field. I have even pitched building an in-house data lake solution similar to this, except with tiered storage where you drop/enrich/summarize data at each tier (you need lots of data immediately, but fewer details and more analysis of that data at each tier).
Thank you for typing up a long detailed response. I think a lot of the points and concerns you bring up are valid, and we are mostly agreed upon.
In Matano, however, we see Python as a viable component of security operations for narrowly tracking atomic signals, while the language for writing detections and hunting threats will be SQL, which works perfectly well for use cases like the detection example you provided, albeit verbosely. We have also thought of building a transpiler that would let analysts use the succinct syntax of SPL and compile it to SQL under the hood. This could be a great way to get adoption in companies where using Python would be difficult.
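To make the transpiler idea concrete, here is a toy translator for just the common `search field=value | stats count by field` SPL shape. It's a from-scratch, happy-path sketch (no quoting, one filter, one aggregation), not Matano's planned implementation, but it shows how succinct pipe syntax can compile down to more verbose SQL.

```python
def spl_to_sql(spl, table="logs"):
    """Translate `search field=value | stats count by field` into SQL.

    A deliberately naive parser: assumes exactly one `search` clause with a
    single equality filter, piped into a single `stats count by` aggregation.
    """
    search_part, stats_part = (p.strip() for p in spl.split("|"))
    field, value = search_part.removeprefix("search ").split("=")
    by_field = stats_part.removeprefix("stats count by ").strip()
    return (
        f"SELECT {by_field}, COUNT(*) AS count FROM {table} "
        f"WHERE {field.strip()} = '{value.strip()}' GROUP BY {by_field}"
    )

print(spl_to_sql("search event.action=failure | stats count by source.ip"))
```

A production transpiler would of course need a real grammar for SPL's command chain, but the compile-to-SQL target keeps the data lake itself engine-agnostic.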
If you are interested, I would love to find some time to chat and share thoughts. Can you email me at shaeq at matano dot dev?
Thanks for the well thought out response. I hope Matano succeeds. I can't email you since my hn presence isn't public/social but I might be involved in evaluating your product some day soon and would chat and share thoughts with your folks then.
This is awesome. Nice work open-sourcing it! I used Splunk at Expedia and it was super expensive and slow. While I wasn't using it for security purposes, it could take 15-30 min for us to detect error logs, and I can imagine that's not okay for security purposes. Good luck guys!
The code is written in high-performance multi-threaded Rust and uses the Arrow compute framework. We also batch events and target about 32MB of event data per lambda invocation. As a result, it can process tens of thousands of events per second per thread, depending on the number of transformations.
That said, we are working on performance estimates and a benchmark on some real world data for Matano to help users like you better understand the cost factors. Stay tuned.
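The batching strategy described above (accumulate serialized events until a target byte size, then flush) can be sketched in a few lines. This is illustrative Python, not the actual Rust + Arrow pipeline; only the 32MB-per-invocation target comes from the comment above.

```python
def batch_events(events, target_bytes=32 * 1024 * 1024):
    """Group serialized events into batches of roughly `target_bytes` each.

    Each batch is filled until adding the next event would exceed the target,
    so every batch stays at or under `target_bytes` (unless a single event is
    larger than the target by itself).
    """
    batches, current, current_size = [], [], 0
    for event in events:
        size = len(event)
        if current and current_size + size > target_bytes:
            batches.append(current)
            current, current_size = [], 0
        current.append(event)
        current_size += size
    if current:
        batches.append(current)
    return batches

# Tiny demo with a 25-byte target instead of 32 MB:
demo = [b"0123456789"] * 5
print([len(b) for b in batch_events(demo, target_bytes=25)])  # [2, 2, 1]
```

Batching like this amortizes per-invocation overhead and yields Parquet files large enough to compress and scan efficiently.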
Shaeq and Samrose: for us investors here, where are you in terms of fundraising? I'm an ex AWS (google me, you'll have a few laughs!), turned VC in the past few years. $HN_username at gmail if you want to reach out and chat!
Edit: here's me with Andy, from a millennium ago.