https://news.ycombinator.com/item?id=13713480 - Announcing the first SHA-1 collision (2017-02-23)
Ange Albertini is listed as an author in both articles.
hannob 86 days ago
Not really. While both relate to SHA-1, those are two quite different issues.
The Shattered attack was an actual demonstration of a long-known weakness in SHA-1 that allows creating practical collisions.
This one is more an exploration of whether a malicious actor can create a standard that looks good but is actually weak. Such questions got quite a bit of attention at the time due to the Snowden leaks, and the realization that this had actually happened in at least one case (Dual EC DRBG, which, technically, was already known before Snowden but only got major attention afterwards).
smaudet 86 days ago
Seems like a reminder to me...
Also, I wonder how the security of cryptosystems relates to a combined approach, i.e. if you cut the message into smaller parts and hash them with multiple algorithms (SHA-1 + SHA-256), how much more infeasible you can make the "attack" here.
So you can have a global hash that is checked 3 times (not much of a hamper, but better than nothing if you theorize there are attacks on each hash 1000x faster than the birthday bound), to serve as a sort of "quick check", and a deeper hash where you split a 10 GB file into, say, 10 MB chunks, and so gain back (at a minimum) a 1000x order of complexity to compute.
For streams where the order of bits matters, I speculate this may be even more difficult to attack, since each attacked hash is now constrained by the data that fills the prior and next chunks, if not the whole set of chunks (so perhaps 1,000,000x or greater difficulty?).
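The chunked multi-hash idea above can be sketched in a few lines of Python. This is a minimal illustration only; `chunked_digests` and the particular pair of algorithms are hypothetical choices for the sake of the example, not an established scheme:

```python
import hashlib

def chunked_digests(data: bytes, chunk_size: int = 10 * 1024 * 1024):
    """Hash the whole message with two different algorithms as a
    'quick check', then hash each fixed-size chunk separately, so a
    forgery must collide every per-chunk digest at once."""
    whole = {
        "sha1": hashlib.sha1(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }
    chunk_hashes = [
        hashlib.sha256(data[off:off + chunk_size]).hexdigest()
        for off in range(0, len(data), chunk_size)
    ]
    return whole, chunk_hashes
```

A 10 GB file with 10 MB chunks yields 1024 chunk digests plus the two global ones; whether that actually multiplies attack cost as hoped is exactly the question debated below.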
bawolff 86 days ago
I'm not sure if I understand what you are saying correctly, but generally, hashing the same data with multiple different Merkle–Damgård hash functions is not anywhere near as secure as you might naively expect.
afiori 86 days ago
I am indeed curious about this: consider a hash function built like
H(m) = H0( H1(m) + H2(m) + ... + Hn(m))
where + is string concatenation and H0, ..., Hn are mostly good hash functions.
I would expect H to be as good as the best of H0,...,Hn on almost every metric; at worst being limited by the block size of H0.
That is you would need to badly break all of them to break H.
Honestly, I would also guess that even if H_i = HMAC(md5, i, m) you would get a decent H (except for the small block size). So maybe something even more nested, with H_ij(m) = HMAC(md5, i*(p+1)+j, m).
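The H(m) = H0(H1(m) || ... || Hn(m)) construction can be written down directly with hashlib. A minimal sketch, where the particular set of inner functions is an arbitrary assumption for illustration:

```python
import hashlib

def combined_hash(m: bytes) -> str:
    """H(m) = H0(H1(m) || H2(m) || ... || Hn(m)), with H0 = SHA-256
    and the inner hashes drawn from several different families."""
    inner = (hashlib.sha1, hashlib.sha256, hashlib.md5, hashlib.sha3_256)
    concatenated = b"".join(h(m).digest() for h in inner)  # the || step
    return hashlib.sha256(concatenated).hexdigest()
```

To collide H here you would need either a simultaneous collision of every inner hash or a break of the outer H0 on short fixed-length inputs, which matches the "break all of them" intuition.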
Two assumptions seem to be at play here:
1. More hash functions equals more security (hash encapsulation)
2. The attacker wants to know the content of 'm'
H0(H1(m)) has the security of just H0. Hashes are not made to protect the content of m, but instead made to test the integrity of m. As such, a flaw in H0 will break the security guarantee, no matter how secure H1 is.
Practically, H0(H1(m)) is the same as H0(m), as m is just "some data," and the result of H1 can be seen as "some data".
If your construction is H0(m0) + H1(m1), where m0 and m1 are the two halves of m, then the overall security is reduced to that of the weaker hash function. For example, a length extension attack on the weaker hash breaks the overall integrity of the construction.
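The point that a flaw in the outer H0 breaks the whole composition can be demonstrated with a toy example. Here `weakhash` is a deliberately crippled 16-bit hash standing in for a broken H0; the strong inner SHA-256 does nothing to stop the collision search:

```python
import hashlib

def weakhash(data: bytes) -> bytes:
    # Toy broken outer hash H0: only 16 bits of output.
    return hashlib.sha256(data).digest()[:2]

def outer_collision():
    """Find m1 != m2 with weakhash(sha256(m1)) == weakhash(sha256(m2)).
    With only 2^16 possible outer outputs, a birthday search succeeds
    after a few hundred tries, regardless of the inner hash."""
    seen = {}
    i = 0
    while True:
        m = str(i).encode()
        tag = weakhash(hashlib.sha256(m).digest())
        if tag in seen:
            return seen[tag], m
        seen[tag] = m
        i += 1
```

This only illustrates the composition argument; real attacks on real hashes are far more structured than brute force over a truncated output.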
purkka 86 days ago
> H0(H1(m)) has the security of just H0. Hashes are not made to protect the content of m, but instead made to test the integrity of m. As such, a flaw in H0 will break the security guarantee, no matter how secure H1 is.
But this isn't true for all flaws. For example, even with the collision attacks against SHA-1, I don't think they're even remotely close to enabling a collision for SHA-1(some_other_hash(M)).
Similarly, HMAC-SHA-1 is still considered secure, as it's effectively SHA-1-X(SHA-1-Y(M)), where SHA-1-X and SHA-1-Y are just SHA-1 with different starting states.
So there's some value to be found in nesting hashes.
We are saying the same thing. H0 is SHA-1 in your example.
The strength of an HMAC depends on the strength of the hash function; however, since it uses two derived keys, the outer hash protects the inner hash (using the same hash function), which in turn provides protection against length extension attacks.
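For reference, the two-derived-keys structure being described is the standard HMAC construction from RFC 2104. A self-contained sketch that can be checked against Python's hmac module:

```python
import hashlib
import hmac

def manual_hmac_sha1(key: bytes, msg: bytes) -> str:
    """HMAC per RFC 2104: H((K ^ opad) || H((K ^ ipad) || msg)).
    The outer and inner SHA-1 calls start from different key-derived
    first blocks, i.e. the nested-hash shape discussed above."""
    block = 64  # SHA-1 block size in bytes
    if len(key) > block:
        key = hashlib.sha1(key).digest()
    key = key.ljust(block, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha1(ipad + msg).digest()
    return hashlib.sha1(opad + inner).hexdigest()
```

`manual_hmac_sha1(b"key", b"msg")` matches `hmac.new(b"key", b"msg", hashlib.sha1).hexdigest()`; the outer call seeing only a fixed-length digest is what blocks length extension.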
The case I was making is that weakhash(stronghash(m)) has the security of weakhash, no matter how strong stronghash is.
purkka 86 days ago
> The case I was making, is that weakhash(stronghash(m)) has the security of weakhash, no matter how strong stronghash is.
I'll have to disagree. There are no known collision attacks against SHA-1(SHA-3(M)), so in the applied case, a combination can be more secure for some properties, even if it isn't in the theoretical case.
rurban 86 days ago
There is only SHA-1 with a fixed starting state!
Once you change the IV, the hash becomes entirely insecure and can be broken in seconds: you just need to overwrite the first IV word with 0, and it's broken. It's a very weak and fragile hash function. They demonstrated it with the internal constants K1-K4, but the IV is external and may be abused as a random seed.
afiori 85 days ago
The properties I am thinking of are strong and weak collision resistance; there are other relevant properties of hash functions (like every bit being roughly independent of every other bit), but I care less about those.
> If your construction is H0(m0) + H1(m1)
Here, if H0 has a weak-collision attack and H1 has a strong-collision attack, and + is xor or addition, then I see how H0(m0) + H1(m1) can be vulnerable.
> H0(H1(m)) has the security of just H0
I believe it has the security of just H1, but my construction was very different; it was H0(H1(m) || H2(m)). (I used + for concatenation; I forgot that it is usually written as ||.)
Here you would need strong-collision attacks on all three hash functions (including an attack on H0 that is limited to very short messages of a fixed size).
smaudet 86 days ago
I think I was misunderstood.
I do not mean H0(H1(m0)+H1(m1)) nor H0(m0)+H1(m1), but Riemann(x={0,1000})(H0(m(x)))
Where there are 1000 hashes, so H0 must be broken one thousand times; it then does not matter that some attack exists to reduce the security of H0 1000-fold, because the attack must be performed 1000 times. You could easily nest these, so that F(y) = Riemann(x={0,y})(H0(m(x))) and G(z) = Riemann(y={1,z})(F(y)).
So G(3), e.g., would produce 6 hashes of strength H0. No hash takes another hash as input; the message itself here provides the security: you must not just find a duplicate for one hash, but for all 6 simultaneously. I wonder if the increased complexity might easily defeat most attacks on H0.
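One literal reading of this construction, taking "Riemann" as collecting per-chunk digests over index ranges. The names F and G follow the comment; the running-prefix detail is my interpretation of "each hash is constrained by the prior chunks", not something the comment pins down:

```python
import hashlib

def F(chunks, y):
    """F(y): running SHA-1 digests over the first y chunks; digest x
    covers chunks 0..x, so each value depends on all earlier data."""
    digests, h = [], hashlib.sha1()
    for x in range(y):
        h.update(chunks[x])
        digests.append(h.copy().hexdigest())
    return digests

def G(chunks, z):
    """G(z) = F(1), ..., F(z): z*(z+1)/2 digests in total, so G(3)
    yields the 6 hashes described, all of which must collide at once."""
    return [d for y in range(1, z + 1) for d in F(chunks, y)]
```

Note that under this reading many of the 6 digests repeat (F(1) is a prefix of F(2), and so on), so the effective number of independent constraints is smaller than the raw count suggests.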
consumerx 86 days ago
folks holding BTC should get wet hands, lol. it sounds a bit like Crypto AG, probably nothing, keep on moving.
a_dabbler 86 days ago
The article is from 2014 and BTC doesn't even use SHA-1 anyway.
consumerx 86 days ago
SHA-2 was mentioned in the article as well
Otek 86 days ago
Yeah, wake me up when there are known exploits for SHA-256.