> Real cardholders almost never buy something for exactly $1.00. Coffee is $4.73, gas is $52.81. The roundness is the signal.
Surely this depends on how the vendor sets their prices? If you're going to buy something from a website to test a stolen credit card, you don't just get to make up your own prices.
And I think you may be over-indexing on the US "prices don't include tax" thing. Elsewhere, round-number prices are extremely common.
In fact a lot of the rest of the stuff in the post seems like it wouldn't work very well either. (E.g. you're flagging anyone who has done a transaction in the last 90 days outside the range of hours at which they have 2+ transactions? Wouldn't that be like 50% of people?).
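For concreteness, here is a toy, runnable sketch of the rule as the parent describes it. Every table and column name is invented, but it makes the false-positive question easy to experiment with:

```python
import sqlite3

# Hypothetical schema: a card's "usual hours" are hours with 2+ historical
# transactions; anything outside that set gets flagged.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE txns (card_id TEXT, ts TEXT, amount REAL);
INSERT INTO txns VALUES
  ('c1', '2024-01-05 09:15:00', 4.73),
  ('c1', '2024-01-06 09:40:00', 5.10),
  ('c1', '2024-01-07 18:20:00', 52.81),
  ('c1', '2024-01-08 18:05:00', 61.02),
  ('c1', '2024-02-01 03:12:00', 1.00);
""")

flagged = conn.execute("""
WITH usual_hours AS (
  SELECT card_id, CAST(strftime('%H', ts) AS INTEGER) AS hr
  FROM txns
  GROUP BY card_id, hr
  HAVING COUNT(*) >= 2
)
SELECT t.card_id, t.ts
FROM txns t
LEFT JOIN usual_hours u
  ON u.card_id = t.card_id
 AND u.hr = CAST(strftime('%H', t.ts) AS INTEGER)
WHERE u.card_id IS NULL          -- no matching "usual hour" for this card
""").fetchall()
print(flagged)  # only the 03:12 transaction is flagged
```

Any hour visited fewer than twice counts as unusual, so a cardholder with a sparse or varied history trips this constantly, which is exactly the 50%-of-people concern.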
It's unclear to me whether this article is an attempt at breaking down complex expertise into over-simplified SQL queries, or whether it is all speculative and made up.
There is a conflict between "Six SQL patterns I use to catch transaction fraud" and "Nothing here comes from anything I’ve actually worked on or seen".
The "transaction outside usual hour range" seems pretty basic.
I don't usually buy gas, coffee or snacks at 2am. But on the very rare occasion that I do, I'm dealing with some kind of personal emergency and don't also want to have to call my bank.
I get that that's also a time opportunistic thieves, etc, might be operating. But the cost of false positives is also a thing.
Coffee usually _is_ a round number in my experience, and I know of people who aim for round numbers when filling their car, and of fuel stations which require a pre-set value, often €10, €20, or €50.
Yes, as your parent comment points out, the article centers itself on US transactions, where listed prices seldom include tax and are frequently a cent below a round number. For example, the menu says a dish is $15.00 but the restaurant charges $18.83 after tax and tip. Globally, there's no doubt the US is the exception rather than the norm.
That sounds reasonable for some states, but five states have no sales tax,
and many states have exclusions to sales tax. Many of those are also likely to have rural areas where small businesses like to use even amounts.
All of that is easy to account for, all of the metadata you need is available. This also applies to the sibling comment about rounding up to charity at the grocery store, the data is all there, even if it's e.g. the fraud analyst at the bank or credit card company instead of the fraud analyst at the grocery store.
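As a sketch of the parent's point that the metadata is all there (schema and donation flag are invented): join the round-up information in before applying the round-amount rule, so explained roundness stops tripping it:

```python
import sqlite3

# Hypothetical illustration: with line-item metadata available, round totals
# caused by "round up to donate" can be excluded up front. Names are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE txns (txn_id INTEGER, amount REAL, has_roundup_donation INTEGER);
INSERT INTO txns VALUES
  (1, 50.00, 1),   -- round only because of a donation round-up
  (2, 20.00, 0),   -- genuinely round: worth a look
  (3, 47.31, 0);   -- not round at all
""")

suspicious = conn.execute("""
SELECT txn_id FROM txns
WHERE CAST(amount * 100 AS INTEGER) % 100 = 0   -- whole-dollar amount
  AND has_roundup_donation = 0                  -- roundness not explained
""").fetchall()
print(suspicious)  # only txn 2 remains
```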
I'm seeing a few stores here and there which have a "round up to donate" option. I guess I'm a bit of a sucker and I always use that option. My groceries are always a round number as a result.
> Border crossings inside 10 minutes. International rings.
Or normal people living in Europe in border-adjacent areas.
Also, I guess you don't include card-not-present transactions in this, but you incorrectly assume that every merchant has their location set correctly. And that every sale happens in a brick-and-mortar establishment, not from travelling salespeople or whatever. And that no transactions happen online.
We develop tirreno (1), an open-source security framework.
I question the described approaches. For example, while impossible travel is a legitimate and widely used technique, it relates to online user behaviour based on IP address. Moreover, tirreno has separate rules for cases where the IP clearly comes from Apple Relay or VPN/Tor; those are separate flags. I assume some or all examples are LLM-generated, as the context is mixed up and no one actually collects GPS location in bulk for card swipes.
> Fraud detection in transaction data is mostly SQL. Not machine learning, not graph databases, not whatever Gartner is hyping this year. SQL, run against the right tables, with the right joins, looking for the right shapes.
It's also not all program-integrity, which is the only work that could justify such blanket statements. Worse is better as long as it addresses the problem domain.
Fintech clients are generally interested in knowing whether a transaction happening _right now_ is fraud. They want to know that in a few milliseconds, for high-dimensional data. It's work done at a scale where relational databases cannot meet these real-time constraints, and instead find other uses like historical data loading. That's how you end up with in-memory databases, stream-processing engines, and yes, even machine learning.
Having said that, some of the author's points are valid, and I'm looking forward to their next writing; in particular, dealing with noisy alerts is a general problem beyond performance engineering.
In my experience, what you're describing would more specifically be called Fraud Prevention rather than Fraud Detection. Both tend to coexist and are complementary in a mature setup.
For Prevention, you're always going to be constrained by latency requirements, available data and an incomplete picture of user behaviour. You make a quick decision using ML and rules that deal with the majority of cases. But those constraints make it impossible to precisely prevent all fraud.
Detection deals with the downstream consequences of this. A team of analysts will typically analyse the accepted transactions for signs of fraud. This is particularly important for fraud types where you don't get an external signal like a chargeback or customer complaint. Platform integrity is one such example. But fintechs will also see this when building anti-money-laundering systems: you need to go looking for the fraud. This is the process the article is describing.
I say they're complementary because the detected transactions become the labels for training and evaluating the next iteration of prevention models.
"Fixel Smith" is an AI-generated person, with an article that has very little to do with fraud analysis. The same persona is also a music artist (1), novelist (2), fraud analyst (3), influencer (4), and whatever else you can imagine.
220+ points and 70 comments, and very few noticed that it's a fake post, and no one that it's an AI-generated person?
I was checking the submission on the phone and only peeked at the comments section. While it's not always easy to judge if something is AI-generated or edited, here it was obvious at first glance from the quotes. Assuming that all of the comments were done in good faith, I think that the low AI literacy even here is really concerning.
A cursory glance does make it appear like either a prolific individual or a bot. The fact that the novel bears little relation to the analytics posts, which seem to bear the style of LLM prose, makes the whole thing fishy. Ironic, given the subject matter of TFA.
I could imagine a person having or doing all of these over time, people do have many interests, but a cursory glance does give an impression of AI. The Instagram account uses a lot of it at least, and the top domain was likely made in conjunction with AI, given the style.
Kind of fascinating, though it could still be a person doing this using AI as opposed to an entirely generated persona. Thanks for bringing it up.
I'd be more surprised to hear that most folks made a habit of investigating the people whose articles we read. To be honest, I usually don't even look at the byline, let alone the rest of the website.
I'm one of the creators of an open-source security framework (1). I've been eating online fraud for breakfast for 8 years. The article is delusional enough that I had to visit the top page of the domain (2).
Swiping a card (or inserting, or tapping) is a "card present" transaction. Online shopping, where you type in the card number, is a "card not present" transaction. Retailers and banks can tell the difference.
This is very cool to read. Although I've never truly worked in fraud prevention, I stumbled into automating a lot of similar pattern checks to catch collusion and fraud when I wrote and ran a poker site / casino. Window functions were not available then, so the queries were LONG. One way I'd deal with it was to assign UUIDs to every pair of players who'd ever shared a poker table, and then run nightly analysis of how much their betting deviated from expected norms and their own baseline at each stage of the game if they were in the same hand as each other. This could actually be done in one or two magnificent 100+ line SQL queries on the history table, on a read replica.
Lagging window functions and/or lateral joins probably would have reduced it to 1/4 the size but definitely increased the cost versus just narrowing the sets into smaller tables first.
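For the curious, a toy of what the window-function version looks like today; Python's bundled sqlite3 supports window functions (SQLite 3.25+), and all table and column names here are invented:

```python
import sqlite3

# Deviation from a player's own running baseline via a lagging window frame,
# instead of self-joins against a history table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bets (player TEXT, hand_no INTEGER, bet REAL);
INSERT INTO bets VALUES
  ('p1', 1, 10), ('p1', 2, 12), ('p1', 3, 11), ('p1', 4, 95),
  ('p2', 1, 50), ('p2', 2, 52);
""")

rows = conn.execute("""
SELECT player, hand_no, bet,
       bet - AVG(bet) OVER (
         PARTITION BY player
         ORDER BY hand_no
         ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING
       ) AS deviation
FROM bets
""").fetchall()
for row in rows:
    print(row)   # p1's fourth bet sits 84 above his own running average
```

The first hand per player has an empty frame, so its deviation is NULL rather than a spurious zero.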
Fraudulent transactions will eventually cost the bank (which has to reverse/reimburse them and eat the loss). A denied transaction only results in an angry customer who will quickly forget after they complain, so the customer bears the brunt of the externalized cost.
Therefore, the bank's incentive is to err on the side of caution and deny transactions, even at the cost of false positives.
Isn't the point of ML that you learn these rules from the data? The right approach to me would be to use ML models to detect patterns that correspond with fraud and then evaluate them to see if any make sense. This way you might discover new hypotheses.
Anything that can't be explained and iterated deterministically is too risky for the business of declining financial transactions.
Human analysts need to be able to explain to compliance in a single five-minute email why a specific transaction was declined, and most importantly, what could have been done differently to avoid the adverse decision.
Fixing one problem with ML often creates two new problems that aren't quite obvious yet. SQL tends to have fewer surprises with regard to regressions and unexpected side effects as things change over time.
In my experience, Visa support can’t tell me why my legitimate transaction was tagged as fraudulent, other than to say it triggered an AI thing. They also can’t tweak the settings like they used to do, but they can manually allow specific transactions one by one on an ad hoc basis.
Recently, they've stopped even being able to allow specific transactions through for me. They can tag the flagged transaction as legitimate and hope the AI picks up on that, but that hasn't worked once in the last ~15 calls for me. I've just stopped trying to use Visa as my primary card online, a habit that bled into in-person purchases as well.
Umm, no. They submit thousands of random pages of business communication and system specs in discovery. This does not include the source code of their algorithm, which in any case is not stored in any form that can be recreated and shared. If you paid a lawyer a million bucks to read them all, they would tell you they still don't know how the algo works. At the same time, they offer you low four digits to make the case go away, if you have a case. If you don't have a valid case at all, they rapidly spend $250,000 on filings and motions, which you would have to spend $100,000 answering just to stay in the game.
The main problem with these SQL calculations is that they are deterministic shortcuts for a probabilistic problem. Fraud is not usually "true because rule X matched." It is more like "what is the probability this is fraudulent?" SQL patterns are useful, but they are blunt instruments. I really don't think banks rely on deterministic heuristics; it's more data-science territory.
I have a fair amount of experience in this industry, albeit a couple of years old now. I worked at Square on their payment risk team in 2015 and 2016, and at Plaid on their ACH fraud API product, Signal, from 2021 to 2024. At Plaid I was involved in client meetings and learned how many companies were already handling risk, and I've interviewed at a handful of other companies' risk teams when I was looking for a new role.
Basically it's not just banks and formal financial institutions doing this, and how they do it depends on the company size. Size tends to correlate not only with how many resources you have for a risk team, but also with whether fraud rings are targeting you.
Usually what I've seen is that companies start with some kind of batch SQL/simple logic process that runs daily and tends to flag accounts for manual review and block automatic events like settlement or trading (or whatever the platform does) until that review has been done. Then over time the company will transition to an ML-based approach that still mostly flags things for manual review. The goal of the ML is to improve the precision of the flagging without hurting dollar recall or fraud event recall too much. Depending on the payment system companies may be sensitive to both (for example, in ACH if you get too many returns, even very low dollar payment returns, you're going to get a hard time from your partner bank and you risk not being able to use ACH anymore).
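The trade-off described here is mechanical enough to sketch; a minimal, hypothetical illustration of flagging precision versus fraud-event recall and dollar recall (field names invented):

```python
# Precision of the flags, fraud-event recall, and dollar recall,
# computed from labeled transactions.
def flagging_metrics(txns):
    """txns: list of dicts with 'amount', 'flagged', 'is_fraud'."""
    flagged = [t for t in txns if t["flagged"]]
    fraud = [t for t in txns if t["is_fraud"]]
    true_pos = [t for t in flagged if t["is_fraud"]]
    precision = len(true_pos) / len(flagged) if flagged else 0.0
    event_recall = len(true_pos) / len(fraud) if fraud else 0.0
    caught = sum(t["amount"] for t in true_pos)
    total = sum(t["amount"] for t in fraud)
    dollar_recall = caught / total if total else 0.0
    return precision, event_recall, dollar_recall

txns = [
    {"amount": 900.0, "flagged": True,  "is_fraud": True},
    {"amount": 100.0, "flagged": False, "is_fraud": True},   # missed
    {"amount": 40.0,  "flagged": True,  "is_fraud": False},  # false positive
    {"amount": 60.0,  "flagged": False, "is_fraud": False},
]
p, er, dr = flagging_metrics(txns)
print(p, er, dr)  # 0.5 0.5 0.9
```

Note how missing one small-dollar fraud halves event recall while barely denting dollar recall, which is why ACH-style return limits make both worth tracking.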
I had this happen once - I flew to a city about 8 hours of driving time away to buy a motorcycle and landed late in the evening. My card was declined when I got gas a little after midnight and I had no cash or other card with me so I called the 24 hour support line. I had a quick conversation with a support agent explaining that I was traveling and the card needed to be reactivated right away. Within five minutes the card was working and I was back to working my way down a long chain of mistakes.
As the tail end of the article explains, these are independent pieces of evidence, not independent proofs: most of them can be legitimate operations (even the speed one: airliners cruise at about that speed, and a long-range business jet can cruise faster).
All of them can be genuine use, these are fraud signals not fraud proofs, and the article does cover this:
> What works is running them all and scoring each transaction across the signals. A transaction that fails on three or four of them is almost always fraud. A transaction that fails on one might be your grandma being weird with her debit card on vacation.
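A minimal sketch of that scoring idea; the individual signals and the threshold here are invented placeholders, not the article's actual rules:

```python
# Evaluate each signal independently; escalate only when several fire.
def score_transaction(txn, signals, threshold=3):
    fired = [name for name, check in signals if check(txn)]
    return len(fired) >= threshold, fired

signals = [
    ("round_amount",    lambda t: t["amount"] == int(t["amount"])),
    ("odd_hour",        lambda t: t["hour"] < 6),
    ("new_merchant",    lambda t: t["merchant_seen_before"] is False),
    ("foreign_country", lambda t: t["country"] != t["home_country"]),
]

txn = {"amount": 1.0, "hour": 3, "merchant_seen_before": False,
       "country": "RO", "home_country": "US"}
is_suspect, fired = score_transaction(txn, signals)
print(is_suspect, fired)  # all four signals fire, so True
```

Grandma on vacation trips one or two of these; a $1.00 card test at 3am on a fresh merchant in another country trips all four.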
> If a card swipes in Chicago and seven minutes later swipes in Los Angeles, one of those swipes is fake. The card is cloned. This is the most uncontroversial fraud signal you’ll find — there’s almost no legitimate reason a single card is in two distant places in seven minutes.
The Apple/Google Pay cards have a DPAN (device account number) that is different to the CPAN of the physical card. It keeps the same issuer (first 6 digits) and the same "last 4" digits, but the others are different.
The DPAN is translated into the CPAN by software at the issuing bank, so it's not identifiable by the merchants.
Merchants get the "last 4" digits, but that's not enough to identify specific CPANs.
> A transaction that fails on three or four of them is almost always fraud. A transaction that fails on one might be your grandma being weird with her debit card on vacation.
The article states that the particular item is a clear sign of fraud. If that were true, then it should be treated in a special manner. A more paranoid bank could enforce it on its own, without adhering to this guidance of multi-factor detection.
It isn't though, so balancing it with other rules is fine.
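For reference, the quoted Chicago-to-LA check reduces to an implied-speed computation; a sketch using the haversine formula, where the coordinates and the 900 km/h ceiling are illustrative assumptions:

```python
import math

# Great-circle distance between two lat/lon points, in kilometres.
def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def implied_speed_kmh(dist_km, minutes):
    return dist_km / (minutes / 60.0)

dist = haversine_km(41.88, -87.63, 34.05, -118.24)  # Chicago -> Los Angeles
speed = implied_speed_kmh(dist, 7)                  # swipes 7 minutes apart
impossible = speed > 900  # faster than airliner cruise speed
print(round(dist), round(speed), impossible)
```

The European border-town objection is precisely that this distance can be a few kilometres, so the threshold, not the formula, is where the judgment lives.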
This takes me back to fighting telephone fraud, back when folks used to accept credit cards over the phone. We used similar patterns but only had phone numbers and the white pages: crossing state boundaries inside similar time frames, and categorizing similar merchant types. It's fun to see these same patterns still in use 20 years later for the same purpose.
This seems interesting, but has so many signs of AI writing that I worry it's not just edited but generated from whole cloth. Probably still a lot of truth in there but it does give me pause!
Oh shyte, I use (and have used) these for a long time. Guess everything is classed as AI nowadays; just yield and use it (everyone thinks you do anyway).
This comment certainly does not scan as AI! Look, this isn't perfect, but it's the best we've got, and so long as AI writing is meaningfully worse than human writing, people are going to try to tell the difference.
This is the sort of thing I used to love doing and I often gaze at raw data analysis and sometimes wish my career had pivoted towards working with data like this.
But I must admit there was a point where I suddenly lost my love for SQL and it was pretty much when the OVER PARTITION BY syntax appeared.
It never clicks. I always have to look up how it works, I always find it unintuitive. I've never understood why I hate it so much.
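One mental model that sometimes helps: `OVER (PARTITION BY ...)` is a GROUP BY that keeps every row, attaching the aggregate alongside instead of collapsing the group. A toy comparison:

```python
import sqlite3

# Same aggregate two ways: GROUP BY collapses rows, the window keeps them.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (grp TEXT, val INTEGER);
INSERT INTO t VALUES ('a', 1), ('a', 3), ('b', 10);
""")

collapsed = conn.execute(
    "SELECT grp, SUM(val) FROM t GROUP BY grp").fetchall()
kept = conn.execute(
    "SELECT grp, val, SUM(val) OVER (PARTITION BY grp) FROM t").fetchall()
print(collapsed)  # one row per group
print(kept)       # every original row, each carrying its group's sum
```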
> Most people are creatures of habit when they spend money. A nine-to-fiver doesn’t suddenly start buying gas at 3am.
Breaking out of a habit once in a while is what keeps one's mind sharp.
A big "fuck you" to financial analysts with those groundhog-day mindsets for making my life much more miserable than it needs to be and for adding a chilling effect to those little getaways that make life interesting and worthwhile. I despise you for this.
Reading this to the very end uncovers empty and contradictory advice. I'm almost sure it's LLM generated.
We learn simultaneously that 'your team' shouldn't rely on any one of those patterns ('none of them is enough'), but that pattern 1 'alone will surface a useful amount of fraud'.
We also read strange sentences like "Every analyst on your team will use them (ie window functions) once they exist, and adding the next fraud pattern stops being a project. [end of paragraph]"
Or irrelevant discussions about how filtering by "IS NULL" might not be applicable when none of the provided examples uses it.
> Real cardholders almost never buy something for exactly $1.00. Coffee is $4.73, gas is $52.81. The roundness is the signal.
a) trivial to bypass by adding dither to the test transactions and
b) trivial to improve upon with proper statistical analysis and
c) shouldn't this kind of heuristic pattern recognition with no expectation of near-100% accuracy be what AI is good at?
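A sketch of point (b), with invented data: rather than flagging single round amounts, compare a card's rate of .00 endings against what a roughly uniform cents distribution would produce:

```python
from collections import Counter

# Fraction of a card's amounts that end in .00; under a roughly uniform
# cents distribution this sits near 1%, so card testers probing with
# whole-dollar amounts stand out sharply.
def round_cents_rate(amounts):
    cents = [round(a * 100) % 100 for a in amounts]
    return Counter(cents)[0] / len(cents)

normal_card = [4.73, 52.81, 12.49, 3.10, 18.83]
tester_card = [1.00, 1.00, 5.00, 10.00, 2.37]
print(round_cents_rate(normal_card), round_cents_rate(tester_card))  # 0.0 0.8
```

Dithered test amounts (point a) defeat the .00 check specifically, but a distribution test generalizes to any cents pattern the fraudster settles into.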
Just set up a direct debit to your favourite charity.
1. https://github.com/tirrenotechnologies/tirreno
1. https://www.amazon.it/Forged-Soundtrack-Explicit-Fixel-Smith...
2. https://fixelsmith.com
3. https://analytics.fixelsmith.com/
4. https://www.instagram.com/fixeltales/
1. https://github.com/tirrenotechnologies/tirreno
2. https://fixelsmith.com
Can also imagine an edge case: a couple shares an online account, one is traveling and purchases with the saved card details.
This is an underrated CX factor: if my card gets denied when I'm a new customer or exhibiting a new pattern, I'm impressed with their software.
However, if they deny a transaction where there is any previous history of me authenticating, then I'm frustrated by their naive, paranoid algorithm.
How do you deal with vacations and online shopping? You could be in another country or two within a few hours, and purchase from across the world.
> The roundness is the signal.
> Slight pain, same result.
to point at a few.
This is Claude talking, isn't it?
And my favourite most hated pattern, the no no no:
> Not machine learning, not graph databases, not whatever Gartner is hyping this year.
Or, the cardholder is trying to do the cannonball run:
https://www.youtube.com/shorts/Dx5WPNIEwiE
chargeback-mcp
or would you turn it all into a markdown file and call it a skill?
This is low quality and too long.