They're a model company for data minimization. No account names, no passwords, you can pay with cash in an envelope, RAM-only infrastructure, thorough and frequent third-party auditing, etc.
They give back, fund privacy initiatives, have a history of being unable to provide user data when requested by governments, and all of their stuff is well documented. You'd be hard-pressed to find anyone privacy- and security-conscious speaking poorly about them.
jiveturkey 35 days ago [-]
They were deceptive about why they removed static IPs and port forwarding. Such deceptions speak to character, and a VPN company isn't private -- it's trust transference. So character matters.
There are 6 other providers that do offer static IPs, and one of those uses AWS Nitro to ensure that mappings aren't available to LEO. So this wasn't a technical limitation.
ziddoap 35 days ago [-]
>They were deceptive about why they removed static IPs and port forwarding.
What were they deceptive about? Their announcement is straight forward.
"Regrettably individuals have frequently used this feature to host undesirable content and malicious services from ports that are forwarded from our VPN servers. This has led to law enforcement contacting us, our IPs getting blacklisted, and hosting providers cancelling us.
The result is that it affects the majority of our users negatively, because they cannot use our service without having services being blocked."
https://mullvad.net/en/blog/removing-the-support-for-forward...
I'm not saying you have to agree with the decision, but I don't see any deception. They even gave a month's notice.
https://web.archive.org/web/20230530003202/https://mullvad.n...
I stand corrected, apologize for misinformation, and thank you for sticking with this thread.
But if I may put my cynical hat on (I think this is fair for any VPN provider), Mullvad states on HN [0]:
> Port forwarding needed to be removed on moral grounds.
Fair enough; however, such moral grounds only came to light when extreme and immediate pressure was applied to their business model. The same post does talk about abuse, but only in terms of how it created a negative experience for "some" users (no details). The blog post does go into those negative effects, which is good, and doesn't try to dress them up in moral reasoning, which is also good. I think I mistook the official blog post for an official statement here on HN.
There was another HN post, apparently by a Mullvad engineer, that didn't pull any punches. I can't find it anymore, but I remember it was that post that somehow led me to kfreds' post and left a very bad taste in my mouth. Maybe someone else is a better researcher than me and can dig it up.
I'll retract my "character" criticism, since Mullvad clearly cares deeply about privacy, regardless of my perceived problems with their public communications.
Personally, iCloud Private Relay ticks all the boxes for my use cases, so I should have just kept my mouth shut.
[0] https://news.ycombinator.com/item?id=37062965
What do you mean by static IPs? Mullvad has never offered static IPs to customers. Please clarify.
theamk 35 days ago [-]
When I was reading the docs, it was not clear initially how a device with a soft CPU implemented on an FPGA can be secure. Surely someone could replace the CPU implementation with one that has full debug capabilities, and then use that to decrypt stored secrets?
Turns out, there is no such thing as "stored secrets", and the device has no non-volatile memory at all, other than the FPGA configuration (NVCM). The only secret is the UDS, 256 random bits that are baked into the FPGA configuration stream and protected by the FPGA's read-out protection. The mechanism that is normally used to prevent device duplication is instead used to protect cryptographic secrets. Replacing the bitstream is an all-or-nothing affair, so a hypothetical "CPU with debug capabilities" will not have access to the UDS.
This means all storage must be on the PC, and some classes of things are absolutely impossible - for example, an anti-brute-force counter that clears secrets if too many wrong attempts are entered.
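For illustration, here is a rough Python sketch of that derivation idea. It is not the actual firmware code (function and parameter names are made up); the real TKey derives its per-app secret (the CDI) on-device, and if I read the docs right it uses a BLAKE2s-based construction, but the exact inputs and ordering below are only illustrative.

    import hashlib

    def derive_app_secret(uds: bytes, app_binary: bytes, uss: bytes = b"") -> bytes:
        # Measure the loaded app: any change to the binary changes the digest.
        app_digest = hashlib.blake2s(app_binary).digest()
        # Same UDS + same app (+ same optional user-supplied secret) gives the
        # same key on every boot, with nothing persisted on the device.
        return hashlib.blake2s(uds + uss + app_digest).digest()

A modified app, say one with debug hooks, hashes differently and therefore derives a different secret, so it never sees the original key material.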
kfreds 35 days ago [-]
> how a device with a soft CPU implemented on an FPGA can be secure?
> FPGA configuration (NVCM). The only secret is the UDS, 256 random bits that are baked into the FPGA configuration stream and protected by the FPGA's read-out protection.
Correct. Here are some more interesting details:
- we're using the Lattice iCE40 UltraPlus FPGA, which is supported by open tooling and has been for a long time. During the course of the project we also had the configuration and locking protocol reverse engineered so that one can configure and lock the FPGA with open tooling.
- the iCE40's Non-Volatile Configuration Memory (NVCM) uses anti-fuse technology for storing the configuration bits, where the 0s and 1s are stored in vias on the die. The physics of how these vias are modified to represent a 0 or a 1 make it very hard to read out information using X-ray, unlike on-die storage implemented using "horizontal" e-fuses. That's the gist of it.
- the FPGA's boot state machine is unfortunately designed such that you can get it to boot an external bitstream from SPI even after you've configured and locked NVCM, and the state of EBRs (block memory) is retained across warm reboots of the FPGA. We took several steps to mitigate this limitation, which, now that I think about it, would make several interesting blog posts. The UDS memory itself is in LCs, you can mix in key material from the host, the exact timing of that is randomized, RAM (implemented in EBR) has both address and data randomization... and a few more things.
- physical security is hard, and the TKey won't be able to stand up against any and all physical attacks, but I don't think there is any security hardware in the world that is as open and inspectable as the TKey.
If you want to delay, you don't even need something that complicated: just run your favorite PBKDF function as part of your app. Or don't even do that; add a plain sleep() call before trying to decode the input data.
It's just the counter that is impossible.
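To illustrate the PBKDF/sleep suggestion above, a minimal host-side sketch (Python standard library only; the delay and iteration count are arbitrary):

    import hashlib, time

    def slow_unlock(passphrase: str, salt: bytes) -> bytes:
        time.sleep(1)  # the plain sleep() variant: a fixed delay per attempt
        # Key stretching: each guess also costs a noticeable amount of CPU time.
        return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 600_000)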
gclawes 35 days ago [-]
I've been tracking this project for a while; I'm surprised they don't have a FIDO2 implementation yet, given how popular that's gotten.
dathinab 35 days ago [-]
Their approach is one which is fully stateless on the chip (see their documentation for why).
Theoretically this is quite desirable, but various protocols aren't built in a way that enables it.
E.g. TOTP pushes a secret from the server to the client, instead of having some form of deterministic key exchange.
Similarly, while the FIDO protocols and passkeys could have been designed in a way that works fully statelessly, they are not.
That doesn't mean you can't make it work with TKeys, but it can get more complex. E.g. for TOTP, you'd use the TKey to guard a local password vault which then does the TOTP, instead of having the TOTP on the key directly. (And while TOTP is better than SMS 2FA, it's still pretty bad compared to what is technically possible, much worse than a lot of people realize.)
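To make the TOTP point concrete, here is a bare-bones RFC 6238 sketch in Python. The thing to notice is that shared_secret is chosen by the server at enrollment and has to be stored somewhere on the client side for every future code, which is exactly the kind of state a TKey app cannot persist.

    import hashlib, hmac, struct, time

    def totp(shared_secret: bytes, step: int = 30, digits: int = 6) -> str:
        counter = int(time.time()) // step
        mac = hmac.new(shared_secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
        code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)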
woodruffw 35 days ago [-]
> Similarly, while the FIDO protocols and passkeys could have been designed in a way that works fully statelessly, they are not.
It's been a while since I've looked deeply at FIDO, but I think they would have had to make a handful of nontrivial security concessions to make WebAuthn stateless. One pretty important one that comes to mind is the token counter, which in principle enables RPs to detect a cloned credential.
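Roughly, the RP-side clone check looks like this (a sketch, not any particular library's API):

    def check_signature_counter(stored: int, reported: int) -> int:
        # Authenticators without a counter (e.g. synced passkeys) always report 0;
        # the check only fires when a counter is actually in use.
        if (stored != 0 or reported != 0) and reported <= stored:
            raise ValueError("possible cloned credential: counter did not increase")
        return max(stored, reported)  # value the RP persists for next time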
Rafert 35 days ago [-]
The counter can always be 0, which is what cloud synced passkeys are doing IIRC.
dathinab 35 days ago [-]
The problem starts earlier, with the secret key, which you can't place "into" a TKey. You can deterministically derive one between the TKey and a server using something like (semi-)static DH, but that isn't how it is implemented in general.
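A sketch of that (semi-)static DH idea with X25519 (this needs the third-party cryptography package, the names are made up, and it is not how any existing TKey app actually does it):

    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey,
        X25519PublicKey,
    )

    def shared_secret(device_seed_32: bytes, server_public_raw: bytes) -> bytes:
        # The device key is re-derived deterministically (e.g. from the UDS, as
        # above) and the server's static public key is public, so both sides can
        # recompute the same shared secret at any time with no stored state.
        device_key = X25519PrivateKey.from_private_bytes(device_seed_32)
        server_pub = X25519PublicKey.from_public_bytes(server_public_raw)
        return device_key.exchange(server_pub)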
cuu508 35 days ago [-]
I understand that the ability to place stuff "into" a TKey would be needed to support discoverable WebAuthn credentials ("passkeys"). But would it also be needed for non-discoverable credentials?
Borealid 35 days ago [-]
Yes, to set a PIN protecting the non-discoverable credentials. The FIDO PIN can be changed while you have access to the authenticator and not to the credentials it previously created.
arianvanp 35 days ago [-]
User verification is optional.
If you only do user presence and non-discoverable, then WebAuthn is completely stateless and deterministic for a given (challenge,rpId,origin) triplet
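To sketch what that looks like: with user presence only and non-discoverable credentials, the authenticator can re-derive the per-RP key from a device secret and the rpId, and then just sign. (Illustrative Python needing the third-party cryptography package; real WebAuthn signs authenticatorData plus the hash of clientDataJSON, and this is not the TKey's actual app code.)

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def per_rp_key(device_secret: bytes, rp_id: str) -> Ed25519PrivateKey:
        # Same device secret + same rpId gives the same key pair every time.
        seed = hashlib.blake2s(device_secret + rp_id.encode(), digest_size=32).digest()
        return Ed25519PrivateKey.from_private_bytes(seed)

    def sign_assertion(device_secret: bytes, rp_id: str, signing_input: bytes) -> bytes:
        # signing_input would be bound to the (challenge, rpId, origin) triplet.
        return per_rp_key(device_secret, rp_id).sign(signing_input)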
michaelt 35 days ago [-]
Isn't a 'passkey' with no discoverable credentials and no user verification just a regular U2F token?
Borealid 34 days ago [-]
Well, it could still provide credBlob (up to 32 bytes of data stored in the non-discoverable credential and handed back after verification). But mostly yes, it's losing the advantages of FIDO2.
arianvanp 34 days ago [-]
Modulo supporting more algorithms -- yes
woodruffw 35 days ago [-]
Huh yeah, I hadn't considered how they got around that. I suppose in that case this key could do something similar?
kfreds 35 days ago [-]
We're working on it. Our next hardware should have USB HID, FIDO2, and persistent storage. The persistent storage will be per device app. Well, that's the idea anyway. I'm not sure when we'll be done but check our website in a few months, or join our mailing list.
globular-toast 35 days ago [-]
I can't see anything about the physical durability of this. How tough is the injection moulded plastic? Can I expect it to be at least as tough as a metal key that I can safely keep on my person all day regardless of what I'm doing? Is it waterproof?
Being transparent looks cool, but doesn't it make it more attractive to opportunistic thieves? What about something more plain that just looks like a key fob?
plagiarist 35 days ago [-]
I've seen this one before, when I was looking for something like a human affordable HSM.
I think it is a really smart concept. Take the measurement feature of a TPM but remove the untrusted components supplying the measurements.
I really want one. I just currently don't know how to write device driver stuff and my ADHD means if I don't accomplish a project instantly I won't accomplish it at all.
> I really want one. I just currently don't know how to write device driver stuff
Ease into it. We have fairly good documentation and some getting started material you can read (tillitis.se, dev.tillitis.se, GitHub) to gauge your level of understanding. The design is also meant to have the bare minimum complexity necessary to accomplish its functionality. It seems to have potential as a learning platform for college students who want to understand computers from the gates and up.
philsnow 35 days ago [-]
(Gray check marks in the application support matrix apparently mean "not yet", not "yes support but maybe not perfect")
exceptione 35 days ago [-]
If you're looking for a FIDO token, Token2 has affordable offerings. (I am not sure how resistant they are against physical attacks though; it looks like the case can be pried open.)
1oooqooq 35 days ago [-]
always remember the FIDO standard is more about validating identities than authorization.
defraudbah 35 days ago [-]
This looks amazing, any information on how this compares to a YubiKey?
I love how configurable this is and the ability to tinker with it.
PS. Another question, any plans to adopt USB-A?
MrSimontia 35 days ago [-]
No firm plans for USB-A. A USB-C to USB-A converter works, but maybe not so elegant.
There's also Glasklar Teknik AB and Karlstad Internet Privacy Lab AB.
Glasklar does:
- Sigsum, a transparency log design
- System Transparency, a security architecture for transparent systems
- Hosts and maintains the Debian Snapshot service, an archive of the past decade of released Debian packages
KIPL does traffic analysis defense against AI-based classifiers, which Mullvad recently integrated into the VPN app.
https://www.glasklarteknik.se
https://www.sigsum.org
https://www.system-transparency.org
2023 (204 points, 78 comments) https://news.ycombinator.com/item?id=38764353
2022 (305 points, 119 comments) https://news.ycombinator.com/item?id=32896580
Note they've had a couple of security vulnerabilities https://news.ycombinator.com/item?id=39830553 https://news.ycombinator.com/item?id=40055726
Dude. Can I borrow your time machine?
Yes, OpenTitan is cool.
The philosophical discussion about FPGAs and ASICs in the context of security is interesting.
For the TKey FPGA design you can inspect both the design (https://github.com/tillitis/tillitis-key1/tree/main/hw/appli...) and the toolchain (Icestorm: https://github.com/tillitis/tillitis-key1/blob/main/doc/tool... that contains synthesis, place&route, NVCM programming tools). However, the internal FPGA fabric—consisting of the logic cells, memory, and interconnects—remains proprietary.
Most open-source ASICs I am aware of provide open-source RTL designs, but the toolchains are usually proprietary. Hard macros, memories, security mechanisms, etc. are typically also closed source. And then there is the manufacturing process itself, which is not transparent.
There isn’t a definitive answer as to what constitutes “enough” openness for security inspections. Individuals have different thresholds for what they consider acceptable.
So far we have chosen to use as much open source as possible.
Alternatively, you can download the CryptoServer SDK for free from https://utimaco.com/products/platform/cryptoserver-general-p... Their SDK contains an HSM simulator, and they provide instructions on how to run it in a container so that you then even have a network HSM.