At my old job I was able to use a timing attack for remote username enumeration. Our login process was
1) get username and password from user
2) check if username exists and if not, return error
3) if username exists, look up hashed password from db
4) hash potential password
5) compare actual and potential password. if same, generate token and return. else, return error
As you can imagine, we live in a universe with physical laws, so steps 3-5 took time. That meant you could blast login requests with any username and a known-bad password and compare how long the bad-login error took to come back. If it was relatively quick, the username was bad. If it took longer than average, the username was good and the password was bad. Combine this with the freely available lists of common usernames and common passwords and one could start with nothing but the address of the login page and walk away with valid cred pairs. We ended up doing IP-based lockout after 10 failed login attempts and adding a random delay of 100-400ms before erroring out on a bad username.
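The flow above can be sketched roughly like this (hypothetical Python; the user store, salt handling, and the plain SHA-256 stand in for whatever a real system would use, e.g. bcrypt):

```python
import hashlib
import hmac

# Invented example data: one known user with a salted password hash.
USERS = {"alice": hashlib.sha256(b"salt" + b"hunter2").hexdigest()}

def login_leaky(username: str, password: str) -> str:
    # Step 2: unknown user returns immediately -- the fast path that
    # leaks "this username does not exist" via response time.
    if username not in USERS:
        return "bad login"
    # Steps 3-5 only run for valid usernames, so they cost extra time.
    hashed = hashlib.sha256(b"salt" + password.encode()).hexdigest()
    if hmac.compare_digest(hashed, USERS[username]):
        return "token"
    return "bad login"

def login_padded(username: str, password: str) -> str:
    # One mitigation: always do the expensive hashing, even for unknown
    # users, so both failure paths cost roughly the same.
    stored = USERS.get(username, hashlib.sha256(b"salt" + b"dummy").hexdigest())
    hashed = hashlib.sha256(b"salt" + password.encode()).hexdigest()
    if username in USERS and hmac.compare_digest(hashed, stored):
        return "token"
    return "bad login"
```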
miki123211 35 days ago [-]
Timing attacks are such a pernicious idea.
You look at the code and see that there's an auth check in place, you test the code to verify that the auth check has no bugs, you make sure that information is never shared with people who don't have authorization to access it, and yet it turns out it can be accessed as if there was no auth check at all.
To make matters worse, everything can be fine for some time, and then some clever optimization in the CPU, the compiler, cache layer or the database engine introduces a completely unexpected side channel.
emilfihlman 34 days ago [-]
Fortunately, a quick fix is to first run both values through a cryptographically secure one-way function, such as SHA-256 with a random salt, which makes the initial check time-invariant, before doing the exact byte-for-byte comparison.
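That countermeasure might look like this minimal sketch (hypothetical Python, names invented): because both sides are hashed under a fresh random key, an attacker who controls the candidate cannot predict where the two digests first differ, so the early-exit timing of `==` carries no information about the secret.

```python
import hashlib
import hmac
import os

def hashed_equals(secret: bytes, candidate: bytes) -> bool:
    # Hash both values with a fresh random key before the (possibly
    # variable-time) comparison. The position of the first differing
    # byte in the digests is unpredictable to the attacker, so the
    # comparison's timing leaks nothing useful about `secret`.
    key = os.urandom(32)
    a = hmac.new(key, secret, hashlib.sha256).digest()
    b = hmac.new(key, candidate, hashlib.sha256).digest()
    return a == b
```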
tptacek 34 days ago [-]
This is an old (and unpopular) countermeasure for simple string timing attacks, but that's not what this article is talking about and that countermeasure isn't reasonable in most of the cases Kettle is talking about.
GoToRO 35 days ago [-]
would adding random delays prevent this?
pwagland 34 days ago [-]
No, it only makes it take longer to get the underlying secret.
Timing attacks already deal with "noisy" data (task scheduling and the like), so they all boil down to some level of statistical analysis on the response times. Adding noise slows the attacker down, but the underlying bias in the timings is still there.
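A toy simulation (invented numbers) shows why: even when random jitter is ten times larger than the signal, averaging enough samples washes the noise out while the bias survives.

```python
import random

random.seed(0)  # deterministic for the sake of the example

def respond(valid_user: bool) -> float:
    base = 1.5 if valid_user else 1.0   # 0.5 ms of real timing signal
    return base + random.uniform(0, 5)  # plus up to 5 ms of random jitter

# Any single sample is useless, but the means separate cleanly.
n = 10_000
avg_bad = sum(respond(False) for _ in range(n)) / n
avg_good = sum(respond(True) for _ in range(n)) / n
```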
ozim 34 days ago [-]
So in practice it prevents the attack, as real-world attackers have limited resources and will look for easier targets.
saagarjha 34 days ago [-]
That’s what everyone says until they realize they understated the costs to attempt such an attack.
ozim 30 days ago [-]
That's true if you host WordPress or Joomla or something else widely used that has a timing attack, because then someone will automate it for sure.
If you have a custom web app, I don't think anyone will bother unless you are a bank or something.
GoblinSlayer 34 days ago [-]
So you need to compute these statistics and add just the right delay to even out the bias.
saagarjha 34 days ago [-]
At that point you’ve implemented a constant-time algorithm.
Alex-Programs 34 days ago [-]
It works quite well in practice though. I wonder if you could make an ergonomic library for it.
Just add a macro to a function and it'll keep track of how long past executions took, adding artificial delays to ensure all subsequent executions take at least that long. If one runs longer, extend the minimum time by 2x.
Perhaps apply an AIMD algorithm to it? Though there's still room for exploitation there, it'd just take a lot longer to find. Just letting the programmer specify the minimum time might be better in practice.
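The idea above might be sketched as a decorator rather than a macro (hypothetical Python; the doubling rule is one reading of "extend the minimum time by 2x"):

```python
import functools
import time

def min_time_floor(func):
    # Remember the slowest run seen so far and pad every response up to
    # that floor. If a run exceeds the floor, double it, so the
    # observable response time only changes in rare, coarse jumps.
    floor = 0.0

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        nonlocal floor
        start = time.monotonic()
        result = func(*args, **kwargs)
        elapsed = time.monotonic() - start
        if elapsed > floor:
            floor = elapsed * 2          # grow the floor on a slow run
        else:
            time.sleep(floor - elapsed)  # pad fast runs up to the floor
        return result

    return wrapper
```

The drawback the thread goes on to discuss: the floor-adjustment events themselves can leak, they just happen far less often than per-request timing differences.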
endofreach 34 days ago [-]
Good luck explaining to the CEO / PM that you need slower response times.
dotancohen 32 days ago [-]
"It's a security measure" would be a very convincing line for a slower response time on a single, infrequent action that the user takes.
GoblinSlayer 34 days ago [-]
It can be implemented once, by, say, nginx and enabled by a devops instead of every random outsourced java webapp.
Filligree 35 days ago [-]
Random delays specifically do not, as they average out. Delays until a specific point in time do work, so long as the delay is never negative.
bostik 34 days ago [-]
This particular case would be a fantastic fit for timer wheel.[0] Instead of writing a brittle implementation of "after a fixed time in the future" logic yourself, you queue the outgoing event to occur after N ticks [of granularity X], and let the dedicated data structure engine do the work for you.
One thing that I’ve done where I previously had a random delay is implement a delay till a constant time from the start of the request. So all of the timing you get out is effectively how well sleep can target a time.
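That deadline-from-request-start approach might look like this minimal sketch (hypothetical Python; the budget value is invented and would need to cover the slowest legitimate path):

```python
import time

REQUEST_BUDGET = 0.25  # seconds; assumed worst-case handler time plus margin

def respond_at_deadline(handler, *args):
    # Fix the deadline at the *start* of the request, then sleep out
    # whatever time the handler didn't use. The work done inside the
    # handler no longer shows up in the response time, as long as the
    # budget is never exceeded.
    deadline = time.monotonic() + REQUEST_BUDGET
    result = handler(*args)
    remaining = deadline - time.monotonic()
    if remaining > 0:
        time.sleep(remaining)
    return result
```

As the comment notes, the residual leak is only how precisely `sleep` can hit the target time.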
tptacek 35 days ago [-]
It depends on the kinds of attacks you're thinking of. For the stuff Kettle is doing, probably yes. For cryptographic side channels, probably no.
jack_pp 35 days ago [-]
Or you could benchmark the functions that compare secrets to user input, figure out how long they're supposed to take, add 0.5s to the average, and always pad the response to hit that target, so your response time is essentially constant regardless of input.
tptacek 35 days ago [-]
Important to keep in mind here that the timing attacks Kettle is talking about generally do not take the form of "providing secret input to a function with variable timing".
jack_pp 34 days ago [-]
He says this exact same thing in the Defense at the end:
> Finally, yes I do recommend using constant-time functions when comparing user input with secret keys. Just ask anyone who says this is an actual threat to provide a proof of concept.
tptacek 35 days ago [-]
A fun thing about this work is that it's following different threads than the remote timing attack research in cryptography follows; high-end remote timing in cryptography involves some signal processing work, which isn't really present here. Which means Kettle's attacks are likely to get more powerful.
ssklash 35 days ago [-]
I look forward to James Kettle's yearly research results, he's the most incredible appsec researcher I know of.
markandrewj 35 days ago [-]
I have never met James, but I agree his research is great. I always enjoy reading or watching his content.
I came to the same conclusion. Many string comparison implementations don't actually compare one character at a time. In one case strcmp seemed to compare eight characters at a time, so you would need to guess eight characters correctly to get a time difference. Glibc memcmp can compare 32 bytes at a time. In C# the timing of string compare depends on whether it does Unicode normalization or not. Even then, the difference is less than a nanosecond per compared character. It is not a given that every string comparison between sensitive data and user input is at risk of timing attacks.
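Back-of-the-envelope arithmetic shows why chunked comparison matters so much: with a per-byte leak you need at most 256 guesses to advance one position, but with an 8-byte-at-a-time comparison the timing only changes once a whole word is right.

```python
# Worst-case guesses needed to move the timing signal forward one step,
# depending on the granularity of the comparison.
per_byte = 256          # comparison leaks after every byte
per_word8 = 256 ** 8    # strcmp-style comparison of 8 bytes at a time

# For a 16-byte secret: 16 cheap steps vs. 2 astronomically expensive ones.
cost_bytewise = 16 * per_byte
cost_wordwise = 2 * per_word8
```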
I love this, thanks for sharing. When I failed to get a measurable time difference myself I was worried I might just be doing something wrong and it'd get flagged the moment I published my research, so it's great to get confirmation from other people.
Faelian2 34 days ago [-]
I love how he is porting every local attack to web applications.
I am also a bit frightened that the list of attacks you have to know for the Burp Certification keeps getting longer.
marcus_holmes 34 days ago [-]
I'm curious that he appears to completely ignore the network latency/jitter on the return path. How does this work?
albinowax_ 34 days ago [-]
With the single-packet attack, you look at the order that the responses arrive in, instead of the time they take to arrive. Since the responses are on a single TLS stream, they always arrive at the client in the order the server issued them. Hope that makes sense!
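A toy model of that ordering trick (invented numbers): two racing requests finish server-side microseconds apart, and because the responses share one ordered stream, the client learns which code path was slower without measuring any absolute latency.

```python
# Hypothetical server-side completion times, in ms after the single
# packet lands. The stream delivers responses in completion order, so
# the arrival order alone reveals which request took the slower path.
server_finish = {"req_A": 5.1, "req_B": 5.3}

arrival_order = sorted(server_finish, key=server_finish.get)
slower = arrival_order[-1]  # the request whose code path took longer
```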
tptacek 34 days ago [-]
I take them to be asking why jitter on the return path doesn't confound the results, regardless of the trick used to ensure they arrive concurrently (and cancel out the jitter on the ingress path). The responses to single-packet H2 attacks are not themselves single packets.
marcus_holmes 33 days ago [-]
Yes to both!
It makes sense that the packets return in an order that provides information, but we're talking about timing differences of a few ms; as tptacek says I would expect that there's some network jitter on the return path that has to be allowed for with timings this small?
Yet apparently not - obviously the attacks are working. Does he somehow know when the response left the server?
0: https://www.snellman.net/blog/archive/2016-07-27-ratas-hiera...
https://www.sjoerdlangkemper.nl/2024/05/29/string-comparison...