The title reads like: "Why jumping from a bridge is a bad idea". Does this need to be stated?
snowstormsun 27 days ago [-]
Yes, because it keeps happening.
theanonymousone 27 days ago [-]
Yes, it does, but is it because some people believe and argue that jumping from a bridge "is not that bad" or "may be justifiable under some circumstances"?
pwdisswordfishz 27 days ago [-]
You just need to use the right tool for the job, you see. If jumping from a bridge is sufficient for this person, who are you to categorically disallow it?
theanonymousone 27 days ago [-]
It's been some time since the last occasion where I was this close to a coin flip on whether a comment is sarcasm.
saurik 27 days ago [-]
In my experience arguing with people about these kinds of bugs--which I have done a lot of, as people would write apps that are buggy and then blame me for their app being hacked, as, clearly, if jailbroken phones didn't exist or were more illegal or whatever, they would have been safe--a lot of the time people are under the impression that their code on the client is secure. They will believe:
1) that it is effectively impossible to reverse engineer binary code and understand it (particularly so if it is obfuscated in any way at all, as the security claims made by the people who develop such tools are often absurd).
2) that it is additionally possible to prevent the hacker from getting access to even their binary, as it is encrypted by the app store and might require jailbreaking the device; either way, it is akin to piracy and thereby illegal.
3) that it is possible to add further mitigations to prevent people from analyzing your app, such as certificate pinning for all of your network requests, or trying to verify the device is "authentic" and not running a jailbroken OS.
Now, "obviously", all these beliefs are all false; but the problem is that, in some sense, they also are not entirely wrong, and so they stick: I am extremely competent at reverse engineering, but I am going to groan given the task of reverse engineering an iPhone app if I find myself forced by certificate pinning to work around some obfuscated network checks after stealing a copy of the app using a jailbroken phone... like, these mitigations actually do make it a lot more annoying for me to do any of this work, and I certainly am not going to that much effort in a casual drive-by fashion.
Meanwhile, client-side security is also a thing the industry relies on in other ways: developers want to limit denial of service attacks or limit piracy of their product or limit external access to private user data stored on the device, and these techniques that "don't work" can certainly raise the bar for an attacker, and so aren't considered dumb in the general case.
I think the real core education is that people don't understand how to determine what kind of credential should be required to access what kind of information, and that different pieces of interoperating software might all possess different credentials, and the limits on those credentials need to be honored.
This also comes up with things like tokens for various services: people will sign up for a service, and then store the token in the app so they can make API calls from the client... but now, I have their auth token, right? They don't get that, and part of the reason is that a lot of services kind of encourage that model. And like, with your OpenAI key, at least the damage is probably "just" monetary; but, if it is your AWS key, suddenly it is super serious.
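(For illustration, the safer shape is to keep the key server-side and proxy the calls through your own endpoint; a minimal sketch, with made-up endpoint and env var names:)

    // Sketch: the client never holds the vendor key; it lives only on the server.
    // Express + Node's global fetch; the route and env var names are invented.
    import express from "express";

    const app = express();
    app.use(express.json());

    const OPENAI_KEY = process.env.OPENAI_API_KEY!; // server-side secret

    app.post("/api/completion", async (req, res) => {
      // Authenticate *your* user first (session, JWT, ...) -- elided here.
      // Then forward the request with the server-held key, so clients only
      // ever see your endpoint, never the vendor credential.
      const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Authorization": `Bearer ${OPENAI_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify(req.body),
      });
      res.status(upstream.status).json(await upstream.json());
    });

    app.listen(3000);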
So, yeah: I think developers will, in fact, say stuff like client-side filtering "isn't that bad", or that "it might be justifiable under some circumstances"; and they might even be sort of almost right at times for certain kinds of checks (not with other users' data, of course ;P) under certain kinds of tradeoffs... but then misapply the boundary in a way that is flat-out incorrect.
Now, is this article the article that would explain this or convince the developer that these cases are wrong? I don't know... it isn't even clear to me that that's their audience, as opposed to being more of a portfolio piece that the author does understand this issue and thereby is competent at one or both of security engineering or website development (and, FWIW, I think it is sufficiently successful at that).
jgeada 27 days ago [-]
We put nets and high guard barriers on bridges for a reason
sammyteee 27 days ago [-]
If it didn't keep happening, the article wouldn't exist.
Sephr 27 days ago [-]
Caveat to the title: Except for local client-side data emissions. Filtering private data before it gets sent from your device in the first place is a good idea.
manvillej 27 days ago [-]
With the caveat that you ALSO filter server side as well.
You cannot rely on anything from the client.
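A rough sketch of what that means in practice (types and field names invented for illustration): the server builds the view it is willing to reveal, instead of shipping everything and hoping the client hides it:

    // Sketch: the server decides what the viewer may see; private fields
    // never leave the server for anyone but their owner.
    interface Profile {
      id: string;
      displayName: string;
      searchPreferences: string[]; // private: only the owner may see these
    }

    function viewOf(profile: Profile, viewerId: string) {
      const publicView = { id: profile.id, displayName: profile.displayName };
      // The private field is added server-side, and only for the owner.
      return viewerId === profile.id
        ? { ...publicView, searchPreferences: profile.searchPreferences }
        : publicView;
    }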
atoav 27 days ago [-]
If you can avoid making a request that, to the knowledge of your client, has to fail, don't make it.
This has the benefit that it allows you to give clear and timely feedback to your client and potentially to your users.
As for the problem outlined here: If you can reach any private, for-your-users-eyes-only endpoint without authentication, you suck at what you do and you should probably change into a profession where you can do less damage.
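As a minimal sketch of the kind of guard I mean (Express shown as one example; the session lookup is a stub, not a real implementation):

    import express from "express";
    import type { Request, Response, NextFunction } from "express";

    const app = express();

    // Stub session check for the sketch; a real app would hit a session store.
    function lookupSession(token: string | undefined): string | null {
      return token === "Bearer demo-token" ? "user-123" : null;
    }

    function requireAuth(req: Request, res: Response, next: NextFunction) {
      const userId = lookupSession(req.header("Authorization"));
      if (!userId) {
        // Fail fast with a clear, timely error instead of leaking anything.
        res.status(401).json({ error: "authentication required" });
        return;
      }
      res.locals.userId = userId;
      next();
    }

    // Every for-your-users-eyes-only endpoint goes behind the guard.
    app.get("/me/preferences", requireAuth, (_req, res) => {
      res.json({ userId: res.locals.userId, preferences: ["example"] });
    });

    app.listen(3000);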
cesarb 27 days ago [-]
This is a risk common to all "fat clients", when the same team develops both the server code and the client code: it's easy to forget that, unlike the server code, the client code cannot be trusted.
treyd 27 days ago [-]
I don't really understand how this is so hard to get. Is this a phenomenon of using "full stack JS" for everything and tools that intentionally try to hide the boundary between client and server? If that's the case then why are the tools designed to cause those problems?
hoten 27 days ago [-]
It's a tale as old as time - not all developers understand the abstractions they work under.
th3w3bmast3r 27 days ago [-]
^ This, and also taking a shortcut is easier. For teams that are not full-stack, doing it client-side means you don't have to bother the backend team for more APIs or wait for them to implement it for you.
JoshTriplett 27 days ago [-]
Lack of security mindset. It's important to have the fundamental habit of assuming that every surface area you expose could receive arbitrary inputs and will not necessarily only interact with code you've written. But that's not an innate thing that everyone knows without explicit learning/training.
dboreham 27 days ago [-]
Translated: implementing a server query interface with insufficient access controls is a bad idea.
The article is mostly about the resulting security by obscurity being broken.
Cerium 27 days ago [-]
They should learn about Bloom filters. Could kill two birds with one stone: fix leaking the preferences via the swipe list and fix the ever-growing query problem.
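Rough sketch of the idea (assuming profiles are keyed by IDs; the hash here is illustrative, not production-grade): the server sends only the filter's bits, never the raw swipe list, and the query stays constant-size:

    // Toy Bloom filter: answers "maybe present" (false positives possible)
    // or "definitely absent" -- enough to skip already-swiped profiles.
    class BloomFilter {
      private bits: Uint8Array;
      constructor(private size = 1024, private hashes = 3) {
        this.bits = new Uint8Array(size);
      }
      // FNV-1a-style hash with a per-round seed (not cryptographic).
      private hash(value: string, seed: number): number {
        let h = 2166136261 ^ seed;
        for (let i = 0; i < value.length; i++) {
          h ^= value.charCodeAt(i);
          h = Math.imul(h, 16777619);
        }
        return (h >>> 0) % this.size;
      }
      add(value: string): void {
        for (let s = 0; s < this.hashes; s++) this.bits[this.hash(value, s)] = 1;
      }
      mightContain(value: string): boolean {
        for (let s = 0; s < this.hashes; s++) {
          if (!this.bits[this.hash(value, s)]) return false;
        }
        return true;
      }
    }

    const swiped = new BloomFilter();
    swiped.add("profile-42");
    console.log(swiped.mightContain("profile-42")); // true
    console.log(swiped.mightContain("profile-99")); // false (almost certainly)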
_factor 27 days ago [-]
Bloom filters really need to be more present in today’s technology landscape. They’re like magic for certain problems. I believe they have much wider uses than the current subset they’re relegated to.
toong 27 days ago [-]
Just sending the bloom filter values back? Neat!
globular-toast 28 days ago [-]
I wonder how many backends are just pure CRUD with all business rules implemented on the frontend? Scary to think. I'm forever having to tell devs that form validation in js isn't enough, you need to do it on the backend too (or, preferably, only). This article is about reading data you shouldn't be able to, but my strong suspicion is a bunch of stuff out there will let you write stuff you shouldn't be able to as well.
Etheryte 27 days ago [-]
Not sure I agree with the idea that you should validate only on the backend, I think you should do both. Backend validation is for you, to ensure that the data is valid and sane, frontend validation is for the user, so they can get early feedback if something is wrong.
globular-toast 27 days ago [-]
You can do basic validation on the frontend. The problem is if you do too much you end up with two sets of, possibly subtly different, rules that you need to maintain.
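In a TS/JS stack, one way out is to define the rules once and run them on both sides; a sketch using zod as an example library (assuming such a stack):

    // The schema is written once and imported by both frontend and backend.
    import { z } from "zod";

    export const signupSchema = z.object({
      email: z.string().email(),
      age: z.number().int().min(18),
    });

    // Frontend: fast, friendly feedback before any request is sent.
    // Backend: the same schema, enforced authoritatively on every request.
    const result = signupSchema.safeParse({ email: "a@b.co", age: 17 });
    if (!result.success) {
      console.log(result.error.issues.map(i => i.message));
    }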
tgv 27 days ago [-]
In many cases, that's better than having it done only on the backend, because it would confuse (and anger) users. "It knows this isn't correct, why does it let me do it and then say 422 SCNUKS"?
whiterknight 27 days ago [-]
You don’t return readable errors from the backend to the user? You don’t wait for requests to confirm before assuming they succeeded?
ongy 27 days ago [-]
For something like input field validation, a request per keystroke might be a bit much, but that's the rate of feedback users expect.
In systems with frontend UX validation and backend functional validation, errors from the backend tend to be aimed at devs.
I.e. they might expose the known-good regex rather than nice words for the approximate test done in the frontend.
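A sketch of the usual compromise: debounce the keystroke-driven checks so the backend stays authoritative without a request per keystroke (the endpoint and response shape are assumed):

    // Generic debounce: collapses a burst of calls into the last one.
    function debounce<T extends (...args: any[]) => void>(fn: T, ms: number) {
      let timer: ReturnType<typeof setTimeout> | undefined;
      return (...args: Parameters<T>) => {
        clearTimeout(timer);
        timer = setTimeout(() => fn(...args), ms);
      };
    }

    // Hypothetical validation endpoint; the backend remains the authority.
    const validateUsername = debounce(async (name: string) => {
      const res = await fetch(`/api/validate?username=${encodeURIComponent(name)}`);
      const { ok, message } = await res.json(); // assumed response shape
      console.log(ok ? "looks good" : message);
    }, 300);

    validateUsername("al");
    validateUsername("alice"); // only this call actually reaches the server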
tgv 25 days ago [-]
Then the code duplication problem moves to the back-end. And it's not the best place for readable error messages.
robertclaus 28 days ago [-]
I've always been a bit suspicious that mistakes like this are easier in GraphQL than older REST (or even SOAP) models because GraphQL is designed for more frontend-driven development. Obviously this is just one example, but it was interesting that it involved "hidden" GraphQL data.
DimmieMan 28 days ago [-]
I think GraphQL vs others isn't relevant in this case. Would very likely be returning too much data with a REST API too.
This is just neglect rather than a technical problem; any decent server implementation lets you authorise on a per-field basis.
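E.g., a sketch of a per-field check in a GraphQL-style resolver (names invented; the resolver follows the common (source, args, context) shape):

    // The private field resolves to null unless the viewer owns the profile;
    // authorization lives in the resolver, not in the client.
    interface Context { viewerId: string }
    interface ProfileSource { id: string; name: string; searchFilters: string[] }

    const Profile = {
      id: (src: ProfileSource) => src.id,
      name: (src: ProfileSource) => src.name,
      searchFilters: (src: ProfileSource, _args: unknown, ctx: Context) =>
        ctx.viewerId === src.id ? src.searchFilters : null,
    };

    // Someone else's profile: the private field never leaves the server.
    const src = { id: "u1", name: "Sam", searchFilters: ["age:25-30"] };
    console.log(Profile.searchFilters(src, {}, { viewerId: "u2" })); // null
    console.log(Profile.searchFilters(src, {}, { viewerId: "u1" })); // the filters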
RamblingCTO 27 days ago [-]
It has nothing to do with the implementing technology but bad decisions of people. Nothing in particular from the GraphQL standard enables this.
Arch-TK 27 days ago [-]
Long post to say that yet another application had an access control issue which was being masked because the access control was implemented on the client.
Incredibly common in my experience in the security field.
olliej 27 days ago [-]
Oh I see, the claim is “we don’t do the result filtering ourselves so we don’t know what you’re looking for” but that is done by … taking your filters and broadcasting them to everybody?
So they’ve removed the server from the filtering process but made the privacy implications far worse.
andreareina 28 days ago [-]
403 Forbidden
kkfx 27 days ago [-]
Ehm... A long-time developer thinks data sent to someone else's machine can still be "private"? Ehm... Mh... I have some trouble finding a politically correct way to state the fact that no damn laws can "protect" people who send anything to a third party...
BTW, if a user of a dating service is concerned about their own searches... rather than being scared about "potential client-side leaks other dating service users might harvest", concentrate on how much personal dating-interest data the service itself can harvest and eventually resell; and if not "the service", then someone working for it with a side business...
perching_aix 27 days ago [-]
> I have some trouble finding a politically correct way to state the fact that no damn laws can "protect" people who send anything to a third party
The article is exclusively(!) about the technological enforcement of that, not legal, so there really wasn't any need to exercise those apparently weak political correctness muscles of yours in the first place.
tsimionescu 27 days ago [-]
On the contrary, laws are the only thing that can protect you from this. Other than simply not using dating apps, there is nothing you can do as an individual to protect yourself from the service.
Now, laws in this area are woefully inadequate, even in the best places like Europe's GDPR or California's regulations, so in practice I do agree that at the moment shared with a third party == shared with the whole world, to some extent. But this just means we need harsher laws, explicit controls, probably agencies that conduct periodic inspections like the FDA for restaurants etc.
kkfx 27 days ago [-]
Well, a simple example: essentially all banks, at least within a nation, have some standard open APIs, typically signed XML or JSON, to exchange transactions. In the EU/SEPA, for instance, it's the OpenBank API. All banks support it by law and use it for transfers and so on. No one offers it to their customers. If something apparently licit happens to your money, good luck proving it wasn't you. You are a slave of a service, and no law, except one mandating the aforementioned APIs be open for all, not just between banks, can really protect you, because you can't prove it was someone else, not you, who did something with your money.
If your car crashes into a school group on a trip, you might state "I tried braking and steering but the car did not respond" (for the rare all-by-wire models that are starting to appear), but besides the car's logs and third-party cameras you have nothing to prove you are right. That's because the car is not really yours: it's under the control of the OEM at a much deeper level of access than the limited one you have. No law can protect you except one mandating FLOSS cars in their owners' hands, to do with as they wish. Modern cars are services on wheels; you are a slave without knowing your position.
If your emails are on GMail, GDPR/HIPAA etc. state that you have some rights, but they give no means to materially verify whether Alphabet does something, from training LLMs to analyzing your messages for ads and so on. You are not on their servers and you have no right to inspect their infra. Even if you suspect something and file a complaint, a judge might order Google to share a certain set of info with a third-party technician, but no one can be sure it's true. Even a US judge inside the USA, so in the same country as Alphabet, can't do much to really know what happens on their servers, just as you can't know what happens in your CPU: it's a closed-source black box.
You can be "sure enough" only in technical terms: "hey, my computer is not connected, it's composed of hardware from different manufacturers, running a FLOSS OS I know... maybe the files on its storage are just mine", "the drive in my pocket is mine, it can't leak data around, not being connected to the rest of the world", but not more. If you give some data to a third party, no one can really tell you what happens to your data.
tsimionescu 27 days ago [-]
You're not at the mercy of any of those things, because we live in a state of law. If these companies mishandle data, they know they are liable for quite a lot of money. In some cases, individual employees are personally liable, potentially even risking prison time.
You're speaking from an absurdly low level of trust in institutions. The fact is that banks work, they shield their clients from mass amounts of fraud, and do so reliably over decades. The vast majority of people have never lost a single cent to a bank mistake. Google employees don't have routine access to your emails, and if some group does and uses that power and Google gets sued, it will very likely be found out at trial, because most ordinary employees don't perjure themselves to protect colleagues for obvious illegalities (outside the police, but that's a different discussion). Of course, the chance of actually successfully pursuing a suit against Google is small for a mere mortal, but that's a different issue that has to do with corruption in the system and not a fundamental inability to achieve this.
kkfx 27 days ago [-]
It's not a matter of trust in institutions, but a matter of the ability to know, not for the company but for you, that your data is mishandled. How can I know that an Alphabet sysadmin does not use my mails? I can only know because Alphabet states that it respects the laws. I have NO PROOF that's true, nor can I have one.
Since I can't prove unlawful handling of "my" data, I can't complain, because I can't even prove such unlawful use exists. Sometimes, here and there, we read scandals about "this came out of an LLM", "this came from a leak", etc., but there is no real proof.
One of my past banks at a certain point chose to drop the RSA OTP in favor of a mobile crapplication that does soft-tokens and much more. I sent them a classic GDPR Nightmare Letter with many detailed complaints, and they answered politely:
- we need speaker permissions because you can call customer service from the app, so you can see and act on the screen while talking
- we need camera permissions because there are many QR-based payments systems we support
- we need precise location and location history because with them we can try to prevent potential misuse of the app
- we need contacts read and write permissions to allow you easy transfers, to quickly call the right number if you lose your card, ....
Essentially 100% of it is formally justified, BUT the app does not run on an emulator or on rooted phones, there is no source code, and formally reverse engineering it is forbidden. So essentially I can't prove whether they access some sensors and data ONLY for the purposes they declare. If I can't prove illicit use, I can't invoke my country's privacy laws.
autoexec 28 days ago [-]
I don't understand this idea that you can do anything "privately" on a device designed to collect and leak your personal information, whose admin is a corporation that can make changes to the system at any time without your consent or awareness, where multiple parties (carrier and manufacturers) have privileged access to do the same, and where your own access is extremely limited and controlled. The entire system is totally insecure and non-private by design.
The idea that a dating app could prevent your preferences from being collected seems unlikely to me too. If people are posting profiles and messaging each other on a platform, that platform is going to have no problem learning what their interests are. They don't need to know what you're searching for, as long as they know who you're finding.
perching_aix 28 days ago [-]
I'm not sure you genuinely don't understand: the privacy promised was very clearly intended to be from other users, not the service itself, or those who may be able to get unauthorized access to your/their devices and data. It was just phrased in an oh-so-typically misleading way by the service provider.
Yes, these groups might overlap, particularly the users and those with unauthorized access to the service provider's devices and data (as demonstrated in the article). But identifying this as a concern I don't think is much of a revelation. Like yeah, unauthorized access is a privacy concern, who woulda known.
tsimionescu 28 days ago [-]
Whenever you use an online service, you share at least all of your data related to that service with that service (often they get even more data than you think). The companies making your phone, your OS, maybe your baseband, and any advanced attackers may also be getting some or all of this data. Many of these parties may be sharing this data with other parties that they trust, and their employees may be using it for their own ends too. This much is at least partly understood by most people and it is impossible to use an electronic device and an online service without exposing yourself to all of this.
But what you don't expect is that any other user of that service has access to your data. That is a completely different level of privacy breach. And it's also one that people using a dating app in particular have much more reason to worry about than the more nebulous threat from above. Especially when they're not out in their community about their romantic and/or sexual preferences, and are told by the app that it hides this information.
eru 28 days ago [-]
> The companies making your phone, your OS, [...]
These expectations are very different in the desktop computer world. When I use a website or a program, neither the various people who made the components in my computer nor the people who made my OS learn anything at all.
So it's reasonable, at least on the surface, for some people to develop similar expectations on eg mobile devices.
tsimionescu 27 days ago [-]
The situation is not that different for desktop OSs. Apple, at least by default, knows every app you launch on MacOS (through the security feature they have of checking some hash of the executable against their database). Windows collects all kinds of metrics and info about everything you do, including detailed crash dumps that leak who knows what information. Both Intel and AMD CPUs have secure enclaves that run full blown OSs that you have no control over and do who knows what; and those have access to your network devices as well.
If you're running a Linux, you're probably much more protected, but even then, on Ubuntu and most other popular distros, you probably download all or most of your apps with apt or rpm from their own official repos, so they probably can get a pretty good idea of what you're running.
The situation is generally better than on mobile, but unless you're taking significant pains, you're still a pretty open book to your OS manufacturer. Whether they're reading this book or not is a separate matter.
eru 27 days ago [-]
> If you're running a Linux, you're probably much more protected, but even then, on Ubuntu and most other popular distros, you probably download all or most of your apps with apt or rpm from their own official repos, so they probably can get a pretty good idea of what you're running.
Yes, but not when nor whether you are actually using the apps you have installed. Nor any data created at runtime.
kube-system 27 days ago [-]
Privacy isn't about keeping secrets from everyone, it is about the selective control of personal information.
The people who use dating apps want to share their information with some people, and not others.
phkahler 27 days ago [-]
Clearly you didn't read the article before commenting. It's not about "big brother" collecting your data. It's about other users of the app being able to see your data on their device.
autoexec 27 days ago [-]
Why would other users on a dating app ever be able to see what you're filtering when searching for other users in the app? Is that normal? Would they get a "This person was searching for X hair color/age range/gender and found you" message? Why would processing your searches client-side be needed just to stop other users from seeing what you searched for? The company could just not share that data and still process it on their servers.