Symmetric

How do we estimate the bound on a key derivation function's (KDF) maximum computational cost before it becomes useless security-wise?

  • December 9, 2020

From my understanding of key derivation functions (KDFs) such as scrypt, Argon2, etc., we can tune their parameters so that it eventually becomes harder for an attacker to brute-force the password through the KDF than to brute-force the derived key itself. At that point, the attacker might as well brute-force the key directly, e.g. an AES-128 key.

It is best not to over-tune the KDF's parameters, so that users do not suffer unnecessarily from a slow application. In my view, the ideal is to tune the KDF only as much as needed to burden users the least while still maximising the security of, say, AES128-CBC (or any other symmetric cipher).

A naive approach would be to survey every improvement in hardware and algorithm design in order to estimate how long some well-funded organisation would have to wait before it finally manages to decrypt my ciphertext. But I think this approach is needlessly complicated, because we can say a lot about the KDF's computational bound simply by studying the problem from an information-theoretic perspective.

Below is an attempt. My question is: can we make this bound any tighter?


What I have done so far:

Let's say $f$ is a 128-bit encryption/decryption function and the KDF is $k$. Also, let's say that a single round of $k$ costs the same as encrypting/decrypting a single block with $f$, and that our password has only $70$ bits of entropy.

So the total number of attempts to brute-force all keys is $2^{128}$, while the total number of attempts to brute-force the password is $2^{70}$. Since a single evaluation of $f$ or of $k$ costs the same amount $c$, the actual cost of brute-forcing the keys is $c \times 2^{128}$, while that of the password is $c \times 2^{70}$. In this case, the adversary will obviously go after the password.

To ensure the attacker does not find the password easier to break, we can repeat the KDF $k$ for $r$ rounds until the difficulties match. Basically: $$ \begin{split} c \cdot 2^{128} &= r \cdot c \cdot 2^{70} \\ 2^{128} &= r \cdot 2^{70} \\ \frac{2^{128}}{2^{70}} &= r \\ 2^{128-70} &= r \\ 2^{58} &= r \end{split} $$
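A quick sanity check of this arithmetic in code (a minimal sketch; `key_bits`, `password_bits` and `rounds_to_match` are illustrative names, not part of any real KDF API):

```python
def rounds_to_match(key_bits: int, password_bits: int) -> int:
    """How many KDF rounds r make brute-forcing the password
    (password_bits of entropy, r rounds per guess) cost as much as
    brute-forcing the cipher key (key_bits of entropy) directly."""
    if password_bits >= key_bits:
        return 1  # the password is already at least as hard as the key
    return 2 ** (key_bits - password_bits)

r = rounds_to_match(key_bits=128, password_bits=70)
print(r == 2 ** 58)  # True: r = 2^(128 - 70)
```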

If the KDF $k$ is itself implemented by recursively calling $k$, then this per-round cost $c$ is guaranteed, and simply repeating it enough times, recursively, will guarantee that brute-forcing the password via the KDF $k$ is as hard as brute-forcing keys with $128$ bits of entropy.
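As a purely hypothetical illustration of such an iterated construction, the sketch below chains a fixed-cost round function $r$ times, feeding each output back into the next round. It uses SHA-256 as a stand-in round function only to keep the example self-contained; real KDFs such as scrypt or Argon2 do considerably more (memory hardness, parallelism), so this is not how they are implemented:

```python
import hashlib

def iterated_kdf(password: bytes, salt: bytes, rounds: int) -> bytes:
    """Toy iterated KDF: apply a fixed-cost round function `rounds` times.
    Each round costs roughly one hash compression, standing in for the
    per-round cost c above. Illustration only -- use a real KDF
    (scrypt, Argon2, PBKDF2) in practice."""
    state = hashlib.sha256(salt + password).digest()
    for _ in range(rounds):
        state = hashlib.sha256(state).digest()
    return state[:16]  # e.g. a 128-bit key for AES-128

key = iterated_kdf(b"correct horse", b"random-salt", rounds=1_000_000)
```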

Meaning, if $r > 2^{58}$, the attacker would find it easier to brute-force the key directly. In that case, the attacker would ignore the KDF $k$ entirely and move on to brute-forcing $f$'s key. In other words, $r > 2^{58}$ is pointless.

Update: the above is also implemented as part of ciphart.

Generally we measure strength by the order $O$ that the KDF adds to the password search when an attacker is trying to guess passwords. That is basically just the number of iterations, assuming a salt and a correct password hash. Often it is simpler to work in bits, which is basically the $\log_2$ of the order.

So if a password's strength is on average about 40 bits, you'd take the $\log_2$ of the number of iterations and simply add the two values together to get the resulting strength. Given 1,048,576 iterations, we'd get around $40 + \log_2(1,048,576) = 40 + 20 = 60$ bits of strength. Given the average weakness of passwords, any iteration count high enough to act as an upper limit would be entirely impractical; obviously performing $2^{88}$ operations just so that even average passwords reach 128-bit security is out of the question. So generally you should aim for the highest value possible for a specific service.
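The 40-bit example above can be reproduced in a couple of lines (a minimal sketch; `password_bits` and `iterations` are just illustrative inputs, not parameters of any particular library):

```python
import math

def effective_strength_bits(password_bits: float, iterations: int) -> float:
    # Iterations multiply the attacker's work per guess, so they add
    # log2(iterations) bits on top of the password's own entropy.
    return password_bits + math.log2(iterations)

print(effective_strength_bits(40, 1_048_576))  # 60.0 bits
```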

For the same reason it is very important to take other measures beyond just using a password hash with a large iteration count. Possible measures are a maximum number of retries, an added delay before testing each password, requiring a good password with (likely) high entropy, or using a password manager of some kind; a rough sketch of the first two follows below. Browsers nowadays offer built-in password managers, including password generation, for a good reason.
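A rough sketch of the first two measures (retry cap and per-attempt delay), with entirely made-up names and limits and no claim about how any real service implements them:

```python
import time

MAX_RETRIES = 5      # illustrative cap, not a recommendation
DELAY_SECONDS = 1.0  # fixed delay before each verification attempt

def verify_with_throttling(check_password, candidates):
    """check_password: callable returning True if a candidate is correct.
    candidates: iterable of password attempts (e.g. user retries)."""
    for attempt, candidate in enumerate(candidates):
        if attempt >= MAX_RETRIES:
            raise RuntimeError("too many attempts; lock the account")
        time.sleep(DELAY_SECONDS)  # slows down online guessing
        if check_password(candidate):
            return True
    return False
```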

Note that some password hashes, such as bcrypt, use an exponential "work factor" $w$ (corresponding to $2^w$ iterations) instead of a raw iteration count, to give a better idea of the strength added to the password entropy in bits.
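Assuming that convention (iterations $= 2^w$ for work factor $w$), the added strength in bits is simply $w$ itself; a tiny check, with illustrative names only:

```python
import math

def iterations_from_work_factor(work_factor: int) -> int:
    return 2 ** work_factor  # bcrypt-style cost parameter

def added_bits(iterations: int) -> float:
    return math.log2(iterations)  # bits added on top of password entropy

print(iterations_from_work_factor(12))  # 4096 iterations
print(added_bits(4096))                 # 12.0 bits, i.e. the work factor itself
```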

Source: https://crypto.stackexchange.com/questions/85676