Nonetheless I do not see what issues 2FA has that this solves. Having the electronic device is the security. Without it there is no security.
I think it is an oversimplification to reduce the definition of a second factor to how it is stored. It is rather a question of what you need to log in. For TOTP the client has the freedom to choose any of (not exhaustive):
1. Remember password, put TOTP in an app on smartphone => Client has to remember password and be in possession of smartphone.
2. Put password and TOTP in password manager => Client has to remember the master password to the password manager and be in possession of the device on which it runs. Technically, you have to be in possession of just the encrypted bits making up the password database, but it is still a second factor separate from the master password.
In the end it's all just hidden information. The question is the difficulty an attacker would face attempting to exfiltrate that information. Would he require physical access to the device? For how long? Etc.
If the threat model is a stranger on the other side of an ocean using a leaked password to log in to my bank account but I use TOTP with a password manager (or even, god forbid, SMS codes) then the attack will be thwarted. However both of those (TOTP and SMS) are vulnerable to a number of threat models that a hardware token isn't.
The "additional constraint" is the entire point. You can't get rid of it without seriously degrading your security.
For example, a TOTP secret stored in a password manager will be leaked at the same time as the password itself when the password manager is compromised - which once again allows for impersonation by an overseas attacker.
And when you're using a password manager, a leak on the website side is not a real threat, as your password is unique per website and contains enough randomness that it isn't guessable even if its hash leaks.
If anything, TOTP is the weaker factor here, as the website needs access to the raw TOTP secret to verify your code - which means a compromised website is likely going to mean its stored TOTP secrets are leaked in plaintext!
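To make that concrete, here's a rough sketch (all names are mine, not anyone's actual implementation) of the asymmetry in what the server has to store:

import hashlib
import hmac
import os

# Passwords: the server only ever needs a salted, slow hash to verify.
def store_password(pw: str):
    salt = os.urandom(16)
    return salt, hashlib.pbkdf2_hmac('sha256', pw.encode(), salt, 600_000)

def check_password(pw: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac('sha256', pw.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, stored)

# TOTP: verification means recomputing HMAC(secret, current_time_step),
# so the server must keep the raw shared secret around. A leaked database
# therefore exposes the secret itself, not a hash of it.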
... yes? I wholeheartedly agree with that statement so I'm really not sure what your point is. I have shared passwords with family members in the past. It works when it works and it doesn't when it doesn't.
> The "additional constraint" is the entire point.
I believe I already refuted that. Every practical implementation will have weaknesses. Being vulnerable to a greater number of attack vectors does not disqualify the method. All that matters is that the method works as intended for the attack vectors of interest.
I can bypass the lock on my front door by breaking a window. That doesn't mean that the thing on my front door doesn't qualify as a physical lock. It just means that my security model is vulnerable to certain attack vectors. That might or might not be a problem.
> You can't get rid of it without seriously degrading your security.
Whether or not my security is degraded depends on the extent to which the attack vectors the "additional constraint" was defending against are relevant to me. Writing my password on a post-it note and sticking it to my monitor degrades my security if the attack vector is someone breaking and entering my home. However it does not degrade my security even slightly if the only attack vector I care about is a stranger on a different continent illicitly logging into the associated account.
In general your thinking on this topic seems overly rigid and dogmatic. Security practices exist only to serve real-world use cases. The expected attack vectors matter. So does user inconvenience. Something that is less secure but more convenient can often be the "more correct" solution in the real world. This is no different from how businesses will often choose to implement processes with well-known flaws, coupled with a response plan or insurance policy. For example, shipped software often has bugs that were already known prior to release.
> a TOTP secret stored in a password manager will be leaked at the same time as the password itself when the password manager is compromised
Agreed. I went out of my way earlier to acknowledge the vulnerability to additional threat models.
> when you're using a password manager a leak on the website side is not a real threat
Well sure, but how are you going to get the vast majority of your users to use a password manager? They can always choose not to and there's approximately nothing you can do to reliably detect that.
You could mandate switching to a key-based solution, but then you'll get lots of complaints and maybe even lose customers. Or you could augment passwords with something else. TOTP is reasonable. So are SMS or email codes. Despite not being as secure or foolproof as a hardware token, those solutions are sufficient for many scenarios.
I think the defining characteristic is how it is used. I can use a password like a second factor, and I can use a TOTP code like a password. The service calls it a password or a second factor because that was the intention of the designer. But I can thwart those intentions if I so choose.
Recall the macabre observation that for some third factor implementations the "something you are" can quickly be turned into "something your attacker has".
It's definitely computable on a piece of paper and reasonably secure against replay attacks.
So the key would have to be longer. And random, or else a lot longer. Over 80 random bits is generally a good idea. At log2(10) ≈ 3.32 bits per digit, that's roughly 24 decimal digits (random!). I guess about 16 alphanumeric characters (≈ 5.95 bits each) would do too, again random. Or a very long passphrase.
So it's either remembering long, random strings or doing a lot more math. I think it's doable, but really not convenient.
So given a single pass code and the login time, you can just compute all possible pass codes. Since more than one key could produce the same pass code, you would need 2 or 3 to narrow it down.
In fact, you don't even need to know the login time really, even just knowing roughly when would only increase the space to search by a bit.
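Roughly what that attack looks like, sketched out (code_fn here is a stand-in for whatever the mental scheme actually computes; the point holds for any deterministic code function over a small, human-computable key space):

import itertools
import string

def candidate_keys(alphabet=string.digits, length=4):
    # Enumerate every key in a small, memorizable key space.
    for combo in itertools.product(alphabet, repeat=length):
        yield ''.join(combo)

def recover_key(observations, code_fn):
    # observations: (time_step, passcode) pairs captured by the attacker.
    # Two or three observations usually leave a single surviving key.
    return [k for k in candidate_keys()
            if all(code_fn(k, t) == code for t, code in observations)]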
It has its own sync problems (can you be sure which key to use next, and did the server update the same as you, or did the last request not get through?).
This post on security stack exchange seems relevant.
https://security.stackexchange.com/questions/150168/one-time...
2FA is "something you have" (or ".. you are", for biometrics): it is supposed to prove that you currently physically posses the single copy of a token. The textbook example is a TOTP stored in a Yubikey.
Granted, this has been watered down a lot by the way-too-common practice of storing TOTP secrets in password managers, but that's how it is supposed to work.
Does your mTOTP prove you own the single copy? No, you could trivially tell someone else the secret key. Does it prove that you currently own it? No, you can pre-calculate a verification token for future use.
I still think it is a very neat idea on paper, but I'm not quite seeing the added value. The obvious next step is to do all the math in client-side code and just have the user enter the secret - doing this kind of mental math every time you log in is something only the most hardcore nerds get excited about.
The idea of it was so neat to me, I just had to tinker with it.
As long as you never enter the secret anywhere but only do the computation in your head, this is just using your brain as the second factor. I would not call this a password, since it is not used the same way. Passwords are entered in plain text into fields that you trust, but that also means that passwords can be stolen. This proves that you are in possession of your brain.
The only difference here is that you are hashing the password in your head, instead of trusting the client to hash it for you before submitting it to the server.
Which makes the threat model here what, exactly? Keyloggers, or login pages that use outdated/insecure methods to authenticate with the server?
If we were talking about a >256-bit secret, I'd buy this, but in the human-calculated case I don't see how it actually helps, because you've substituted an ~8-character password for a 6-digit number, which is a significantly smaller search space to brute-force.
> Also phishing attacks tricking users into entering their passwords in fake login pages
yes, this is more-or-less a subset of the "keylogger/insecure login page" case
> and stolen password databases
There's still a server-side TOTP secret database to be stolen, no? And normally that would be hard to reverse-engineer the actual secret from, but again, you've shrunk the search space down to 1,000,000 entries, which is trivial to brute force.
And the main point (though I agree that it doesn't make it 2FA) is to not have the secret be disclosed when you prove that you have it, which is what TOTP also achieves, and which makes phishing or sniffing it significantly less valuable.
The non-disclosure is indeed neat, but the same can be achieved with a password. For example: generate public/private keypair on account creation. Encrypt private key with user password. Store both on server. On auth, client downloads encrypted priv key, decrypts it with user-entered password, then signs nonce and provides it to server as proof of knowledge of user password.
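A rough sketch of that flow using the cryptography package (the key sizes, KDF parameters, and function names here are all my own choices, not a reference design):

import base64
import os
from cryptography.exceptions import InvalidSignature
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def _password_key(password: str, salt: bytes) -> bytes:
    # Stretch the password into a symmetric key for encrypting the private key.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))

def enroll(password: str):
    # Account creation: the server stores the public key, the salt, and the
    # password-encrypted private key. It never sees the password itself.
    priv = Ed25519PrivateKey.generate()
    salt = os.urandom(16)
    raw = priv.private_bytes(serialization.Encoding.Raw,
                             serialization.PrivateFormat.Raw,
                             serialization.NoEncryption())
    encrypted = Fernet(_password_key(password, salt)).encrypt(raw)
    return priv.public_key(), salt, encrypted

def prove(password: str, salt: bytes, encrypted: bytes, nonce: bytes) -> bytes:
    # Login: the client decrypts the private key locally and signs a fresh
    # server-supplied nonce; the password itself never leaves the client.
    raw = Fernet(_password_key(password, salt)).decrypt(encrypted)
    return Ed25519PrivateKey.from_private_bytes(raw).sign(nonce)

def verify(pub, nonce: bytes, signature: bytes) -> bool:
    try:
        pub.verify(signature, nonce)
        return True
    except InvalidSignature:
        return False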
AFAIK the primary technical concerns are insecure storage by the server (bad hash or salt) or keylogging of the client device. But the real issue is the human factor - ie phishing. As long as the shared secret can't be phished it solves the vast majority of real world problems.
Point being, TOTP on a rooted phone handled by a FOSS password manager app whose secret store the end user retains full access to will successfully prevent the vast majority of real world attacks. You probably shouldn't use a FOSS password manager on a rooted device for your self hosted crypto wallet though.
I completely agree about phishing being the main attack vector. However, I do think malware is a not-too-distant second - which makes having a single device contain both your password and TOTP secret a Really Bad Idea. Having not-perfectly-secure TOTP codes only on your phone and a password manager DB only on your desktop is a pretty decent solution for that.
An ssh keyfile requires an attacker to break into the device but is likely fairly easy to snag with only user level access.
Bypassing a password manager that handles TOTP calculations or your ssh key or similar likely requires gaining root and even then could be fairly tricky depending on the precise configuration and implementation. That should generally be sufficient to necessitate knowledge of the master password plus device theft by an insufficiently sophisticated attacker.
Given TOTP or an ssh key managed exclusively by a hardware token, it will be all but impossible for an attacker to get anywhere without stealing the device itself. Still, even TPMs have occasionally had zero-day vulnerabilities exposed.
import base64
import hmac
import struct
import time
def totp(key, time_step=30, digits=6, digest='sha1'):
    # Pad the base32 secret to a multiple of 8 characters, then decode.
    key = base64.b32decode(key.upper() + '=' * ((8 - len(key)) % 8))
    # The moving factor is the current Unix time divided into 30s steps.
    counter = struct.pack('>Q', int(time.time() / time_step))
    mac = hmac.new(key, counter, digest).digest()
    # Dynamic truncation (RFC 4226): the low nibble of the last byte
    # selects a 4-byte window; mask the top bit to avoid sign issues.
    offset = mac[-1] & 0x0f
    binary = struct.unpack('>L', mac[offset:offset+4])[0] & 0x7fffffff
    return str(binary)[-digits:].zfill(digits)
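For anyone wanting to try it, a quick usage line (the base32 secret is a throwaway example value, not a real one):

print(totp('JBSWY3DPEHPK3PXP'))  # prints the current 6-digit code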
https://dev.to/yusadolat/understanding-totp-what-really-happ...

As I already mentioned, the fact that people often use it wrong undermines its security, but that doesn't change the intended outcome.
> the fact that people often use it wrong undermines its security
That applies to everything.
If you build with the brick properly you will have a great wall; if you don't, it will fall down. Pretty simple.
Also, honestly, TIL that TOTP is somehow supposed to also enforce that only a single copy of the backing token exists. That's not just bad UX, that feels closer to security overreach.
People in tech, especially software and security folks, tend to miss the fact that most websites with 2FA already put a heavier security burden on their users than anything else in real life. There's generally no other situation in people's lives that would require you to safely store, for years, a document that cannot be recovered or replaced when destroyed[0]. 2FA backup codes have a much stricter security standard than any government ID!
And then security people are surprised there's so much pushback on passkeys.
--
[0] - The problem really manifests when you add a lack of any kind of customer support willing to or capable of resolving account access issues.
That's at best a retcon, given that the RFC was first published in 2008.
>You are also supposed to immediately destroy the QR code after importing it.
Most TOTP apps support backups/restores, which defeats this.
How so? Apple didn't invent the idea of a secure enclave. Here is a photo of one such device, similar to one I was issued for work back in ~2011: https://webobjects2.cdw.com/is/image/CDW/1732119
No option to get the secret key out. All you can get out is the final TOTP codes. If anything, having an end-user-programmable "secure enclave" is the only thing that has changed.
I think they probably meant "Secure Enclave" in the same way that people say "band-aid" instead of "adhesive bandage", "velcro" instead of "hook and loop fastener", and "yubikey" instead of "hardware security token".
If I managed to intercept a login, a password, and a TOTP code from a login session, I can't use them to log in, simply because the TOTP code expires too quickly.
That's the attack surface TOTP covers - it makes stealing credentials slightly less trivial by making one of the credentials ephemeral.
TOTP is primarily a defense against password reuse (3rd party site gets popped and leaks passwords, thanks to TOTP my site isn't overrun by adversaries) and password stuffing attacks.
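Sketching what that ephemerality looks like server-side (the plus/minus-one-step acceptance window is my assumption; real implementations vary):

import hmac
import struct
import time

def code_at(secret: bytes, step: int, digits: int = 6) -> str:
    # Standard TOTP truncation for a given 30-second time step.
    mac = hmac.new(secret, struct.pack('>Q', step), 'sha1').digest()
    offset = mac[-1] & 0x0f
    n = struct.unpack('>L', mac[offset:offset + 4])[0] & 0x7fffffff
    return str(n)[-digits:].zfill(digits)

def verify_code(secret: bytes, submitted: str, window: int = 1) -> bool:
    # Accept the current step plus/minus `window` steps for clock skew.
    # Outside that, even a correctly intercepted code is worthless.
    now = int(time.time() // 30)
    return any(hmac.compare_digest(code_at(secret, s), submitted)
               for s in range(now - window, now + window + 1))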
No, 2FA means authentication using 2 factors of the following 3 factors:
- What you know (eg password)
- What you have (eg physical token)
- What you are (eg biometrics)
You can "be the 2FA" without a token by combining a password (what you know) and biometrics (what you are). Eg, fingerprint reader + password, where you need both to login.
Combine that with the practical problems with biometrics when trying to auth to a remote system, and in practice that second factor is more often than not "something you have". And biometrics is usually more of a three-factor system, with the device you enrolled your fingerprints on being an essential part of the equation.
GP ignores the conventions of the field.
Like, a banking site requiring the phone's 2FA (whether an app or SMS): okay, you have to know the password and have access to the device, or at least a SIM card, so two things need to be compromised. Computer vulnerable? No problem. Phone vulnerable? No problem. Both need to be vulnerable to defeat it.
...then someone decided to put banking on the second factor, and now the phone has both the password and the token (or access to SMS) needed to make a transaction, so the whole system is one exploit away from defeat.
> It explores the limits of time-based authentication under strict human constraints and makes no claims of cryptographic equivalence to standard TOTP.
I think they're just having fun.
https://en.wikipedia.org/wiki/Password-authenticated_key_agr...
I'm open to discovering I'm wrong here, but I have never understood this line of thinking. Assuming you 2fa into your password manager when you first sign in on your device, it's still 2 factors all the way down.
As you sign into your password manager, the "something you have" is your 2fa device that you use to sign into your password manager (which is obviously not being filled in by your password manager). Subsequent password manager unlocks which don't prompt for your token are still 2fa because the "something you have" is your computer with which you signed into your password manager.
Why is this a problem?
This leaks every single password in the vault, including any TOTP secrets - so if you were storing your TOTP secret there, you are now screwed and the attacker has full access. On the other hand, if your TOTP was on a separate device, your TOTP-protected accounts are fine. And even if it's just an app on your phone, you are likely still fine, as phones have much stronger isolation, and people don't usually "npm install" random stuff on them.
(And that's why Google Authenticator adding cloud backup functionality is such a bad idea... if you enable it, then all your 2FAs are leaked once your Google password is leaked.)
(You could argue that your password manager stores TOTP secrets in secure enclave and it's impossible to extract from there... but those same secrets have to be stored in your account as well, and they could be extracted from there)
And that makes it a password (i.e. the primary factor, not a second factor). The whole point of a second factor is that it's not trivially cloneable (hence why, for example, SMS is a poor form of 2FA in the presence of widespread SIM cloning attacks).
Cloning the knowledge in someone's brain is fairly easy. You just need a wrench.
If we are talking rubber-hose cryptography, then a physical hardware token is just as insecure as a brain. Most people are not hacked via wrenches.
But this isn't a hard requirement. See Protonmail as a counterexample. And again, wifi authentication. I reckon debit card PINs as well.
This is only true if the verifier lives on your local terminal - otherwise we use an encrypted channel to transmit to the verifier, or do exactly the same type of timed-salted-hash scheme used here to transmit without revealing the password.
It doesn't add any security, as it is trivially computable from the other digits already computed.
It appears to be a checksum, but I can't see why one would be needed.
This is an early POC, and sanity checks like this are exactly the kind of feedback I’m looking for.
You are already part of the 2FA — you’re the first factor: “something you know”.
The second factor: “something you have” — often a personal device, or an object. This is ideally something no one else can be in possession of at the same time as you are.
They are both too mutable (cuts and burns will alter them) and not mutable enough (you can't re-roll your fingerprints after a leak).
On top of that, you are also literally leaving them on everything you touch, making it trivial for anyone in your physical presence to steal them.
They are probably pretty decent for police use, but I don't believe they are a good replacement for current tech when it comes to remote auth.
My concern with them nearly always comes down to privacy. They are far too easy to abuse for collecting and selling user data. There are probably ways around that but how much will you ever be able to trust an opaque black box that pinky promises to irreversibly and uniquely hash your biometric data? It's an issue of trust and transparency.
The worst thing about it is that people will go like "uuuh naaaah" and will just grab a random app off the play store and put their code in it. Now you are leaking secrets to whatever random app they use.
TOTP works because you have to possess the secure device at the time you're authenticating. If you don't have the device, then no amount of time with the rubber hose can make you cough up the required token.
I don't know, something like "name the fruits that correspond to your first school colors" or similar
Seriously, am I the only one who was happier without any of this "2FA" crap? VPS/Domain/Google with a hardware token is the one narrow scope where I see any value, and even those I could do without. Every other site is just a non-consensual nagging that hassles me when logging in. Bank accounts are the worst, as every bit of friction for checking my balance/transactions actually decreases my security!
And at the very least, 2FA should be a much more "openly open standard." Which is to say, just do TOTP everywhere, let people have their initial generating key and be done with it.
I generate mine from my computer when I can, but I'm surrounded by all this magic that implies that something different is going on, e.g. the Duo system which I'm forced to use by my job and doesn't make this sort of thing easy, if possible at all.
I now wonder if it's possible to store a random value in one's head without it being eavesdroppable. Humans don't really do random, but it's essential for auth.