"The new authentication scheme is the second in recent weeks that relies on photos. Earlier this month, Facebook asked users to upload nude photos to Facebook Messenger, as part of an effort to prevent revenge porn. Facebook said it would use the nude photos to create a digital fingerprint against which to compare future posts."
Wait what? I had to check whether today was April 1st.
Not sure how I feel about it still, but this article doesn't really accurately portray the service. From the source linked by the Wired article [1], this is specifically intended to prevent a nude photo from being shared maliciously when a user has reason to believe that it will be. They then voluntarily upload the image to Facebook, which uses the digital fingerprint to prevent it from being reposted either in its original form or with slight modifications. It's always completely voluntary; Facebook is certainly not requiring or even really encouraging average users to upload their nudes.
> yes, you're going to have to trust them with a copy of it.
No. They could have you compute a fingerprint locally without uploading the image itself, preferably through some audited open source software, without auto-updates. The only reason they don't is that they want to guard their image-hashing methods as corporate secrets.
they can verify that after they have found a match.
if there is no match you gave them no data of interest.
if there is a match then they already have that image anyway and you didn't make your privacy situation worse, except maybe telling them that you claim that's you, which lets them associate more images with you, but that's probably an acceptable tradeoff if there is actual revenge porn of you out there.
It's not a simple hash. From Alex Stamos: “There are algorithms that can be used to create a fingerprint of a photo/video that is resilient to simple transforms like resizing.” That doesn't make it clear that they are making use of the powerful classifiers that Facebook has, but it's clearly not md5, either.
They could also combine locally computed perceptual hashes of the unwanted images with face recognition of already uploaded regular images on the profile. That combination makes it even less necessary to send nudes to facebook.
My guess would be that they trained an autoencoder network for facial recognition, using the encoder half to produce fingerprints and the decoder half to check candidate images.
If they already have the image, then you don't need this system.
If they don't have it, but are doing verify-after, then the image won't actually be blocked right away. The proposed system would be of pretty marginal value. Some value, yes, but low enough that facebook decided it wasn't worth building that system.
Or presumably an attacker could just slightly modify the image, and trick whatever CNN Facebook is trying to use into thinking it's not the same photo.
That's what is missing from the internet, generally speaking: services like public notaries. That's kind of what CAs provide. But there are a lot of similar services that could exist for things like this.
This is aimed at non-technical users. If we're lucky, maybe like 1% of users will be able to figure out how to do that. Not to mention all of the other issues with it mentioned below.
A hash would only match an identical file, right? It sounds like image recognition is used here to catch derivatives of the original file, which a hash wouldn't catch.
There are specific (non-cryptographic) hashes which are built to compare equal for equal data, and to be similar for similar data (for different metrics of similarity). It's a somewhat esoteric subject, but it's a thing.
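To make the idea concrete, here is a toy sketch of an "average hash" (aHash), one of the simplest similarity-preserving hashes. This is my own illustration, not Facebook's algorithm; real systems are far more robust to transforms. It just shows that a near-identical image lands near the original in Hamming distance while a very different one lands far away:

```python
def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a 64-bit int."""
    # Downsample to an 8x8 grid by block-averaging.
    h, w = len(pixels), len(pixels[0])
    grid = []
    for gy in range(8):
        for gx in range(8):
            block = [pixels[y][x]
                     for y in range(gy * h // 8, (gy + 1) * h // 8)
                     for x in range(gx * w // 8, (gx + 1) * w // 8)]
            grid.append(sum(block) / len(block))
    mean = sum(grid) / 64
    # Each bit records whether a cell is brighter than the overall mean.
    bits = 0
    for v in grid:
        bits = (bits << 1) | (1 if v > mean else 0)
    return bits

def hamming(a, b):
    return bin(a ^ b).count("1")

# A synthetic 64x64 "image": bright left half, dark right half.
img = [[200 if x < 32 else 30 for x in range(64)] for y in range(64)]
# A slightly brightened copy hashes to (nearly) the same value...
brighter = [[min(255, p + 10) for p in row] for row in img]
# ...while an inverted copy lands far away in Hamming distance.
inverted = [[255 - p for p in row] for row in img]

assert hamming(average_hash(img), average_hash(brighter)) <= 2
assert hamming(average_hash(img), average_hash(inverted)) >= 32
```

Matching then becomes "is the Hamming distance below some threshold" rather than "are the digests identical", which is what lets these fingerprints survive resizing or recompression.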
The "hash" would have to be non-reversible. When we compare images based on content, afaik we transfer them into a vector space and measure distance. Do we really have tech to make and compare such a content representation in a non-reversible way? That sounds like a lot to ask for tech that seems still in active development.
All hashes are non-reversible. Being one way is one of the defining characteristics of a hash. Depending on how effective it is, collisions may be possible. This kind of thing happened with previously common hashes like MD5, where people were able to break them by generating collisions.
But basically, saying "as long as it is irreversible" is redundant unless you think there's some reason to believe the hash sucks.
If your hash is smaller than the (compressed) image itself, then you have information-theoretical loss; there is no possible way to reverse the operation. And these hashes are usually much smaller than the original.
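The pigeonhole argument here is easy to demonstrate with a toy example (my own illustration, using a deliberately tiny 8-bit "hash"): any hash shorter than its input space must collide, and collisions alone make exact reversal impossible.

```python
import hashlib

def hash8(data):
    # Toy 8-bit hash: keep only the first byte of a SHA-256 digest.
    return hashlib.sha256(data).digest()[0]

seen = {}
collisions = 0
for i in range(1000):
    h = hash8(str(i).encode())
    if h in seen:
        collisions += 1
    seen.setdefault(h, str(i))

# At most 256 distinct 8-bit values exist, so at least 744 of the
# 1000 distinct inputs must have collided with an earlier one.
assert collisions >= 1000 - 256
```

A 64- or 256-bit fingerprint of a multi-megabyte image is the same situation at a larger scale: astronomically many images map to each fingerprint, so the image can't be recovered from it.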
I think it would be easy to create multiple hashes (reverse, flipped, etc...) and upload them all and look for matches. Provide the users with image recognition signature creating software (which is exactly what they are going to do) and let the users do it, vs uploading extremely personal and potentially embarrassing images to strangers on the internet.
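The "hash multiple transformed variants up front" idea sketches out like this. Note the hypothetical `fingerprint` below is a toy exact hash standing in for a real perceptual hash; the point is only the matching structure, not the hash itself:

```python
def flip_h(img):
    # Mirror the image left-to-right.
    return [row[::-1] for row in img]

def flip_v(img):
    # Mirror the image top-to-bottom.
    return img[::-1]

def variants(img):
    # The original plus its horizontal, vertical, and 180-degree flips.
    return [img, flip_h(img), flip_v(img), flip_v(flip_h(img))]

def fingerprint(img):
    # Toy stand-in for a perceptual hash (deterministic within a run).
    return hash(tuple(tuple(row) for row in img))

def make_blocklist(img):
    # Hash every variant once, at submission time.
    return {fingerprint(v) for v in variants(img)}

def is_blocked(candidate, blocklist):
    return fingerprint(candidate) in blocklist

original = [[1, 2], [3, 4]]
blocklist = make_blocklist(original)
assert is_blocked(flip_h(original), blocklist)      # flipped repost caught
assert not is_blocked([[9, 9], [9, 9]], blocklist)  # unrelated image passes
```

The appeal is that only the fingerprints ever leave the device; the obvious limitation is that you can only pre-enumerate transforms you thought of.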
then don't send them, but don't complain about facebook allowing your nudes to be posted. I don't think Facebook is going to open source their image recognition algorithm just because you want it to.
You do have a choice, use the service or not. If I ever found out a nude picture of me was leaked, I'd definitely trust FB more than whoever took it and leaked it in the first place. The whole point of the service is pretty much: your picture is already or about to be out there.
^High School (and below) sure would be interesting /s
I imagine for most people in most situations regarding their* unwanted explicit media being published, the opposite is true, i.e. Facebook (social media) is the worst place for it to be.
There was a thread about this on HN -- too lazy to look it up now or repeat some of the longer comments about it. But going into this assuming that FB has no ill-intentions, FB's proposal seems by far the best solution in a world of ugly and terrible solutions.
For starters, it's intended for victims of revenge porn, which is a fairly extreme category and one in which the harassment is distinctively aggressive and virulent; if it weren't, the way FB deals with abusive content generally would seem to be good enough. If you come into this thinking that FB is asking everyone to upload their nudes for "safekeeping", then you've missed the point.
Secondly, it's hard to think of an implementation that wouldn't create a potential disaster that justifies doing anything special for revenge porn victims. Using the Facebook app is the most secure channel for sending FB the photos because it is a secure app that all FB users know how to use. Having the user hash on their own requires either an external app or website.
I don't think it needs to be said how such ancillary applications can be spoofed. Even if only 0.5% of users are dumb enough to fall for these spoofs, each incident would be a complete fucking disaster, for the victims and for Facebook.
As for the prospect of FB owning people's nudes: again, in the case of revenge porn victims, the horse is already far from the barn. Even if we assign the worst of motives to FB, that this is a way to secretly collect nudes from users: again, horse, barn. This secret process would be less efficient by magnitudes compared to what FB can already do today.
- It sets a precedent for uploading nude photos to FB and for them asking for it.
- You need to trust FB to delete the photos when they receive it. Yes, I understand that they probably will, but really, how many systems are those bits going to touch? How many logs are going to have this information? Can you be really sure?
A better implementation would be for the FB client to hash the file and for the hash to be uploaded. Trust issues are still there but at least mitigated to devices that you have some control over.
Evil users that hash legitimate photos can be overcome with the same review system that is being tested today.
> - It sets a precedent for uploading nude photos to FB and for them asking for it.
Before we reduce this to a slippery slope, what scenario do you envision in which FB could coax its userbase to upload nude photos? I know the OP is about taking a selfie to prove existence. What would FB use as the basis to mandate the general user to send a self-nude?
> You need to trust FB to delete the photos when they receive it...can you really be sure?
No, never, of course. But that's why I point out that FB already has this potential vector of attack. Every time a user flags content for abuse, that is logged and presumably a copy of the asset made for manual verification. Nevermind all the sensitive content millions of users everyday send across Messenger or private groups.
> A better implementation would be for the FB client to hash the file and for the hash to be uploaded.
If the image is hashed before it reaches FB servers, then it gives every user the power and impunity to attempt to censor via a Content-ID like approach.
> If the image is hashed before it reaches FB servers, then it gives every user the power and impunity to attempt to censor via a Content-ID like approach.
Another commenter makes a good point that you could still have a human verify the first time a provided hash matches an image.
In the current setup, there is a human that verifies the image to be hashed is a nude photo instead of the McDonald's profile picture. In the proposed setup, you wait until a hash matches the photo and then have a human verify if it's pornographic content.
The main obstacle that I can think of is: where does that hashing get done? Is it a feature that can efficiently be part of the phone app? Keeping in mind that this is a feature that would only be used by a very, very small part of the general userbase.
Let's assume that it is possible. The other issue that might come up is that the system is still susceptible to a sort of denial-of-service attack, in which a group (for whatever reason) submits a flood of purported sensitive images, and FB is swamped with constant takedown requests to review.
The attack against the uploaded hash approach fails with default-deny, which seems like a good thing for users.
The current implementation can be attacked by uploading thousands of legitimate images, delaying takedown requests; therefore any images that should be taken down will stay up longer.
I'll agree that whatever approach FB is doing to hash the photos may not work on a phone for technical reasons, but given FB's resources I'm not sure how far that argument really goes. But consider this - if you truly, deeply cared about user safety and privacy, would you implement this feature the same way?
> The current implementation can be attacked by uploading thousands of legitimate images delaying takedown requests
How would that work, exactly? A user uploads hundreds of fraudulent images to FB's revenge-porn-abuse queue. At some point, the human who verifies whether the image is legit is going to realize that the user account is fraudulent and then disable the account.
If images are hashed, FB has no way to know whether a user who is uploading hundreds of hashes is a malicious user or an incredibly unfortunate revenge-porn victim. And the price for being wrong is extremely high. Maybe it's possible for FB's auto-detection system to remain robust if the set of hashes it has to scan for is now several orders of magnitude larger than ever expected, making this all a moot point. But I can't imagine that the system scales with no penalty.
> But consider this - if you truly, deeply cared about user safety and privacy, would you implement this feature the same way?
The wording of your question implies a false dilemma, and I think it reveals how different your premises and mine are. What exactly about Facebook's implementation of this feature makes it any less safe for users and their privacy than not having the feature at all? When a user sees and reports an abusive image of themselves, that photo, that user, and that user's connection to the photo are already in Facebook's system.
Every fear there is about this data being exposed to malicious human workers, or that FB is trying to harvest sensitive images for nefarious means -- that risk has always existed. How do you think abuse-takedown requests are currently handled?
So if my argument is accurate, that an evil-pervy Facebook wanting to do a mass collection of sensitive/compromising images of its users already has all the infrastructure and dataflow in place, then this revenge-porn initiative does nothing to make that process more efficient. Even worse for pervy-Facebook, the initiative's very existence, never mind announcing it, reminds the entire world again that holy-shit-think-of-all-the-data-Facebook-has-on-us-including-our-sexy-times -- which is generally the kind of PR you want to avoid when you're conspiring to mass-harvest illicit imagery and data.
And let's be real here: Facebook doesn't have to do anything special for revenge porn victims, in the way that the Postal Service isn't obligated to open everyone's mail to make absolutely sure there's no child porn being sent -- the act of prevention ends up causing far more harm to all users than it benefits the comparatively small number of potential victims.
The status quo seems to be to do nothing until reports come in, which is OK for most situations but inadequate for the kind of attack vector that revenge-porn victims suffer. Facebook could have accepted that, as everyone else does, but invested time/resources into coming up with a technical solution that only benefits a very small but high-suffering part of its userbase while not increasing invasiveness (FB already autoscans the content of user messages, including with the use of PhotoDNA [0]).
Call me Pollyannish, but I don't see this instance as yet another time of Facebook being heartless and devious.
They could jump straight to using ml and ai and the future where only you are in charge of who sees your FacebookEroticExpressions™ on any platform, all across the cloud!
But in the mean time they should probably tell her/him/them if they use the method that already exist to upload them we can use the method that already exist to have them gone in time for recess/study hall/wedding day/when Congress resumes/his state of the Union Address this evening/etc.
Also, if you just give us the other ones that undoubtedly exist we can nip this whole thing in the bud right now.
That's exactly the topic we're discussing now. Sorry, with "userbase" I meant "general userbase". This initiative they're proposing -- in coordination with a safety group in Australia -- is aimed at revenge porn victims. The general userbase of Facebook aren't in that group.
> Before we reduce this to a slippery slope, what scenario do you envision in which FB could coax its userbase to upload nude photos? I know the OP is about taking a selfie to prove existence. What would FB use as the basis to mandate the general user to send a self-nude?
Phishing attacks against users that this feature is supposed to protect are more likely to succeed.
Can you elaborate? People are worried (rightfully so) that this feature requires uploading via the Facebook Messenger App. But this means they don't have to visit a URL or download another application.
They don't necessarily know that. Imagine you've used this FB feature in the past. You get a mail from @facebookrnail.com that tells you you can just email them the photos without even opening the app now! Well isn't that convenient.
Now, it's fair to say that even in a technically safer implementation where the photos never leave the device, many users can't tell the difference, so this point doesn't hold a lot of water.
Still, I think normalizing the uploading of scandalous photos to FB sets a dangerous precedent in general. Will other services and startups whose users suffer from the same problem implement this feature in the same way, and guarantee user safety and privacy? That's a pretty high bar.
> - It sets a precedent for uploading nude photos to FB and for them asking for it.
umm, no. If I understand correctly, the user believes that their nude may end up on FB. The user takes initiative to upload their copy of the nude to FB to prove ownership or damages.
FB has a no-porn policy, so wtf do you have to verify that you are you to get a revenge porn picture removed?
"We remove photographs of people displaying genitals or focusing in on fully exposed buttocks. We also restrict some images of female breasts if they include the nipple, but our intent is to allow images that are shared for medical or health purposes. We also allow photos of women actively engaged in breastfeeding or showing breasts with post-mastectomy scarring. We also allow photographs of paintings, sculptures, and other art that depicts nude figures. Restrictions on the display of sexual activity also apply to digitally created content unless the content is posted for educational, humorous, or satirical purposes. Explicit images of sexual intercourse are prohibited. Descriptions of sexual acts that go into vivid detail may also be removed."
https://www.facebook.com/communitystandards#nudity
It might prevent them from being sent in the first place. If it matches a fingerprint photo, the message stops. It's especially important in revenge porn scenarios where you don't want it seen by anyone, and even the short amount of time between report and take down can have a lot of views.
FB removes pornographic content, but it doesn't always do so proactively.
If a user shares pornographic content, it may be visible on the site for a short period of time before the content is removed by Facebook.
The goal of the initiative Facebook was attempting to implement was to prevent any revenge-porn photos from being shared on the website at all. Even for a brief period of time.
From a user's standpoint, one hour of pornographic content available is disturbing but not catastrophic. However, one hour of images of pornographic content of themselves being shown to their friends and family is catastrophic.
I realize this is a late reply, but I just saw this today.
I think the idea isn't that this is a "just in case" service. More like, if someone sends you a message that they've got compromising photos of you and will release them unless you pay $XX,XXX then you would use the service.
In the analogy, it'd be more like having the police live at your house after having a credible threat made against your life. Which is a thing that happens
I've never been a revenge porn victim, but I think you might be oversimplifying the problem. For instance, if a former lover of mine has nude photos of me and distributes them to other people in private channels, I'm not even able to see them in order to report them. And reporting only stops the harassment once.
This is for FB messenger, not FB posts. Messenger has no such rules for no pornography. Plus, this allows them to be proactive, rather than waiting for people to report posts.
> ... it's hard to think of an implementation that wouldn't create a potential disaster that justifies doing anything special for revenge porn victims.
> ...Having the user hash on their own requires either an external app or website.
What? Why can't the hashing take place in the FB Messenger app on your phone? Why does the picture need to be uploaded to FB's servers? That just goes to what Troy Hunt was saying earlier today in his testimony to Congress about how corporations are collecting data that they shouldn't even want to have. They should collect only the hash from your phone, not the picture. The hashing should be done locally to your device.
Open source a desktop and mobile tool that generates the image hash w/o the image leaving the device and have the code publicly reviewed seems like one potential approach.
People doing this frustrates me greatly. You had to type 30 key strokes. ⌘+t "ycombinator fb revenge porn" ↵ and paste the results back. How many people are going to read your comment? Maybe 10% of them want to read those links. You spared yourself a trivial amount effort by offloading it to tens or hundreds of others.
I wish people would provide links and source what they say more often. Maybe more discussion sites should let others suggest edits, like on stackoverflow. We could save some of that wasted effort.
I sometimes misunderestimate how long it'll take me to write out my thoughts or how much time I'll have before I need to get back to work. By the time I finished my comment I realize I wrote enough for the comment to be a fleshed-out enough argument; anyone wanting to see those past threads could look them up if necessary. I guess I could have saved you some angst by jumping up to the top of my comment and removing the sentence about me being lazy.
It would be kinda cool if HN were to start working on an algo to find those links automatically (a "possibly related" section at the top, for instance).
This misrepresents the idea of the service. It's supposed to tackle the 'revenge porn' issue. If someone sees their own nudes show up on Facebook, they can send a picture to Facebook, which I assume gets fed into an image detection service that then tries to remove the already circulating nudes automatically.
The only safe way is to not take nude pictures in the first place.
What does this FB thing protect against? In practice? Upload the images as a zip file, message "hi, here's X in the nude, remove all fruit from htappletp://linkbananashortener.copearm/84395708345643785 and use "theytrustmedumbzuckerbergfucks" minus the FB founder as password". Yeah, it's a hurdle, but those who want to see the images will take it in a heartbeat, and abusers can still do things like message all mutual "friends", right there on FB. So you gained nothing, and pre-emptively gave FB your nude photos.
Besides, Barrin92 said "If someone sees their own nudes show up on facebook" -- well, nude images aren't allowed on FB anyway, are they? So if you or anyone else can see them, if they're not posted in a private group or private messages, you can just report them, no need to even give away that it's you in them. Then what would stop FB from generating perceptual hashes of these images and removing them site-wide and in the future?
There's so many holes in this that I can't really get mad on behalf of the people who fall for it.
This is the right answer. If FB is going to automatically remove revenge porn, and all porn is forbidden, then they should just automatically remove all porn. They don't need specific porn to remove all porn.
This reads more like a gullibility challenge than a true rationale.
It really does seem like something an elite private club would do to get a laugh out of fooling the proles while smoking a cigar and browsing the pictures.
It's more like oblivious high-paid product managers and devs who can't fathom why people wouldn't trust Facebook or that Facebook might not be a purely beneficial organization.
Sexting is quite common among young adults and so is the subsequent sharing of those photos. Being able to stop that from happening is incredibly valuable.
I am just surprised Facebook didn't instead look at a way of running these image hashing algorithms on the device instead.
I think they look for modifications of the photo as well, so a cropping/tilt/text overlay/etc. would still be flagged. Their model might require pre-processing beyond what they can or want to deploy on-device.
While this totally blows my mind, I can see how those a decade younger might not bat an eye; they might have sent digital nudes in the past, or even have one on the phone. A decade younger than that (late teens), and they might be so impatient about having to upload a digital nude that they just wonder why you can't use one of the ones they sent last week. The difference in how the generations view technology and digital exposure is vast. "Sexting" came into the public consciousness as a common term almost a decade ago, but that's just when it was publicly popularized.
I mean, I consciously think about every time I actually give a close approximation on my age in a public forum (like I did above), and make a call on whether it's needed or not. I feel like a dinosaur because of it sometimes, but I also feel like there's only so much privacy you're allotted, and once it's lost, there's really no getting it back. :/
So you hope. But you have to upload your picture first. If they were serious about this they would allow you to compute the hash locally and then only to upload the hash.
Note that they explicitly state that a human will look at the image and then hash it. So they are storing it at least for a while. And you will not be able to verify whether they really delete it or not.
> presumably an attacker will just use the local process to observe and develop a method to defeat it.
If they can do that then the whole method fails anyway.
Besides, I think the set of your average revenge porn idiots intersected with those that are capable of defeating the hashing scheme in order to do their dirty deed is going to be exceedingly small.
> If they can do that then the whole method fails anyway.
i'm not that sure. the memory of any application can be observed, which means the process of fingerprinting - whatever it is - can be observed. executing the process enough times with a range of inputs will provide a load of analyzable data, via recording the transformation of bytes.
this is (loosely) why basically all software can be "cracked" and drm methods defeated.
> intersected with those that are capable of defeating the hashing scheme
they don't really have to, much. think of the attackers as software crackers/warez groups, and the perps as visitors to pirate bay. you don't need to know how to break drm to download cracked software, or use nefarious push-button software. there might even be (possibly illicit) business opportunities here.
Fair enough, but when weighed against the risk of a FB employee going rogue and making copies of those files or FB 'accidentally' forgetting to wipe your images I'd take my chances on this uber elite perv and would not upload my stuff to FB.
This all of course besides the best defense against all of this: do not create such images in the first place and do not allow others to create them.
Can someone explain if / how this prevents someone from just changing a single pixel and thus circumventing the hash? Are they using some kind of probabilistic hashing? Or are they just relying on the offenders to not be so clever?
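The premise of the question is easy to demonstrate (my own illustration, using SHA-256 on stand-in bytes): with a cryptographic hash, changing a single byte, the digital equivalent of one pixel, produces a completely different digest due to the avalanche effect, so exact-hash matching is trivially circumvented and a similarity-preserving ("probabilistic") fingerprint is required instead.

```python
import hashlib

image = bytes(range(256)) * 100              # stand-in for image data
tweaked = bytes([image[0] ^ 1]) + image[1:]  # "change one pixel"

h1 = hashlib.sha256(image).hexdigest()
h2 = hashlib.sha256(tweaked).hexdigest()

# The two digests are entirely different despite the one-byte change,
# so an exact-hash blocklist would miss the tweaked copy.
assert h1 != h2
```

Perceptual hashes take the opposite approach: they quantize coarse image features so small edits move the fingerprint only a few bits, and matching uses a distance threshold rather than equality.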
To be fair, the type of people who are sharing naked photos via Facebook aren't likely to be technologically sophisticated enough to understand hashing.
Pedophiles would be using something encrypted and anonymous anyway.
Can't they just train their system to do whatever they propose it do in response to alleged revenge porn, but with nude images generally? And will their system prevent the propagation of revenge porn videos?
Instagram did this same bullshit to me when I signed up for an account to follow some photographers.
'Please provide a clear photo of yourself holding your government issued ID, and a piece of paper with the following code handwritten on it'.
No, fuck that for a joke.
There's this growing trend of everyone wanting your photo and some ID, and then you have no way to verify that information is being kept securely or used for appropriate reasons.
I was trying to change my email address for the game Black Desert Online upon closing down all of my old Gmail stuff and encountered the same sort of thing!
> A photo of your photo ID card, taken in front of today's physical newspaper clearly showing the date, or over your screen showing your open ticket needs to be added as an attachment to the ticket.
> Please also ensure that the ID is a valid government-issued document and has not expired.
>> A photo of your photo ID card, taken in front of today's physical newspaper clearly showing the date, or over your screen showing your open ticket needs to be added as an attachment to the ticket.
I thought you were being sarcastic, but the page you linked indeed states that.
What is that for, a hostage negotiation? Proof that the time machine works?
How many people buy physical daily newspapers in this day and age?
I moved to Singapore. Their purchases are region locked (and may they burn in hell for that alone). Tried to change my account's country, but that requires submitting your ID and your address, the latter has to be verified by either being shown on the ID or by .. submitting a utility bill or something related.
For a game account. For a change that is irrelevant and only matters because of their shitty business practices in the first place.
I had the same happen to me when I moved countries, but I kind of understand it too as there is a whole industry of stealing and selling WoW accounts for a lot of money.
So I steal your password and account, sell it off (who buys stolen accounts? What do I know): Why would you need to change the account country?
I literally just wanted to change the account country to .. give Blizzard money. Anyone with access to one of my (now) two Battle.Net accounts can play my games and use anything in-game. What is this address verification actually protecting?
Yeah but once you're verified you're not asked again - if anything, stealing an ID verified game account only makes them worth more, because they can't be generated using bots anymore.
No, that's not normal (asking for my ID or address).
In fact, I created a new battle.net account. Located in Singapore. No address or government ID needed, I just curse the people at Blizzard everytime I have to log out of the SG account to play a game that is linked to my original (DE) account.
So they believe me, without verification or anything, when I create an account in Singapore. They consider me a liar and hold my account hostage before I provide details that I'm not willing to share when I tell them that my existing account should follow me.
That is _not_ normal. That's bizarre and stupid. If they'd require this for every account, new or changed, then it would be consistent (but .. I still would think that's not okay and wouldn't create an account with them). Handling these cases differently is just hurting the existing account in good standing.
(for the record: I don't have any utility bill, electricity or otherwise. That policy is not only brain dead, it wouldn't even work for me if I wanted to hand over anything like that. Singapore: My utilities are covered by my rent, I don't get bills for that. I don't get any physical bills anyway and literally the only bill I have would be an online mobile bill. Coming from Germany: I get a utility bill (water, electricity) once a year. This whole idea of proving my address is not only utterly stupid, it also seems to expect things to work in ways they don't outside of Blizzard's home country)
That is totally not "normal", at least not where I live (UK).
Banks have asked me for a utility bill as a proof of address when I opened an account, but never after that and nobody else ever has.
Specifically, I have never been asked for proof of identity, or proof of address, when purchasing a service, online or otherwise. Even when I purchase flight, train or boat tickets online, where I have to enter my identity and address details, I've never been asked for proof of those, like a scan of my passport or a utility bill.
That is not normal, stating it is normal is not normal and accepting it as normal is not normal either.
Considering nearly every adult provides a photo of higher quality to their respective governments, it does seem a tad conspiratorial and like a lot of unnecessary effort.
Also being a part of the 14 eyes means if the United States wanted my biometrics all they need to do is ask. My country is already moving towards real time cctv identification in airports etc. using biometrics from any issued license / passport.
This is a pretty good point. If you're the kind of person who has avoided getting a photo ID on purpose, you probably are staying well away from Facebook. The only targets I can think of would be homeschooled teens with no student ID.
It does however make Facebook even harder to use for anti-government organizing, though Facebook is already pretty clear about not wanting truly anonymous profiles.
>My country is already moving towards real time cctv identification in airports etc.
Your country must be very gullible and have poor standards of proof. Facial recognition is nowhere near capable of any such feat. Even in cases where the picture is taken under controlled circumstances, with ideal lighting, in good resolution, without extraneous background details, etc, facial recognition is still utterly terrible at matching people against a database of known images.
Tell your government to prosecute for fraud whatever contracting company lied to them and said such a thing could work, and instead look into technologies that do work, like those touchless optical fingerprint scanners that can take a person's prints from several feet away. Heck, I wouldn't be surprised if gait recognition had a higher ID rate than facial recognition. Prints and irises are both very good biometrics; facial is less reliable than having a schizophrenic practice phrenology on people one by one.
So I have a friend who worked the security control room of large commercial building in Sydney, would visit him on the weekend and hang out there. This was not an important place, random corp offices of minor companies.
These cameras were 360 degree and nicely hidden away, you could zoom in and see the pimples on people across the street at decent resolution.
This was 10 years ago, betting against cameras improving since then or in future probably isn't a good idea.
Seems woefully ignorant to dismiss this concern, especially post-Snowden revelations. These are business records subject to mass surveillance, with special access tailored for government agencies under rubber-stamp "oversight".
I didn’t dismiss it at all. There is a big risk of it being misused by any number of parties, and I agree that we should have additional protections (in our Constitution, perhaps) for personal information that is shared, yet subject to be withheld at our request.
But the movement to identify and share information is not being successfully pushed by governments, but by private industry, largely for personal use. That is where my comment came from.
Zuckerberg is aiming to become president. They might not even need to pay him (outside of political favors, I'd imagine), because he's probably convinced he's building it for his future self.
While I cannot condone or support the recently publicized explicit depictions my opponent claims to be the subject of, both I and the people at Facebook have the utmost respect for user privacy. As always, users are encouraged to create locally computed perceptual hash values of the content in question using the publicly available SDK tools and submit them to the proper support channel for assistance.
He doesn't need to; if he / they want to, they can already exert more influence on the world than the US itself does. Facebook's population dwarfs that of the US.
Facial recognition is utter garbage and until every phone has Apple's FaceID stuff, it always will be. The TLAs know this and aren't going to waste efforts in that particular snake oil.
This is the case with almost all crypto currency exchanges as well. Even in that case, which I can sort of understand (even tho I still hate it), I was pretty off put by it. If Facebook or some other service starts doing this to me, I will use something else.
Oh, right yes I forgot that. I was going to sign up for Coinbase to demonstrate some stuff as a live demo - buy a few $ worth of BTC and transfer it between wallets/show it in the public blockchain/etc.
They wanted me to scan and upload a government issued ID in order to give them money. Nooope, not a chance in hell.
It’s pretty much all governments, not just the US. “KYC” regulations are kind of brutal in some ways. Especially when the security of that data comes last to everything else...
coinbase and other exchanges that exchange cryptocurrency for US dollars run into this issue because of Know Your Customer anti-money-laundering laws in the US. I've had to go through the same process when signing up for bank accounts online.
Why the hell Instagram would need this, I have no idea.
Yes all the currency exchanges around the world (at least the reputed ones) do this as a part of their operating country's AML policy. The exchanges are required to report the transactions which raises a flag to the operating country's central banking system too.
AML rules can be complex, but they also include trivial ones: if a person's occupation and the amount of money he exchanges do not make sense together, the transaction will be flagged and reported to the central bank. The transaction will go through, but will be reported to the central bank via a web service or in a monthly report. It's up to the central bank to investigate it further.
> There's this growing trend of everyone wanting your photo and some ID, and then you have no way to verify that information is being kept securely or used for appropriate reasons.
This is why in the EU there are laws about that sort of thing
Terrible idea. When a unique identifier is necessary, it should be minted for that purpose and only for that purpose. There is absolutely no reason, and substantial disincentive, to create any singular identifier that could ever be cross-referenced across disparate organizations/systems/etc. No, you really don't need to be able to find out exactly who voted for your party in the last election and has an income over 6 figures, nor do you need to know how many cancer survivors choose Budweiser. If what you actually need to make sure the person closing an account is the same as the one who opened it, then that is absolutely the only fact which that identifier should be capable of ever being used for.
There are plenty of times you do need to authenticate someone's true identity. If you're opening a bank account, or offering a loan or credit account (of any sort).
Personally, I'd go so far as even if you're opening an email or social media account under your purported real name (so someone else can't set up a Twitter or Instagram account pretending to be you).
To clarify, given how easily people trust social media, identity verification should probably be required unless it is clearly a pseudonym.
I think you misunderstood me. I'm not saying being able to identify someone is bad, just that it should be tied to what actually matters. When you're opening a bank account or something, you don't care if that person buys beer, and it makes no sense for the same identifier to be used for both of those things and for a multitude of other random things. That only invites a great deal of trouble. You care about whether they have the collateral they claim or the job they claim and things like that, each of which could have their own unique identifier signed by the employer or similar. At each need for identification, the possible correlations need to be considered and minimized.
This is the KYC (Know Your Customer) laws. Online-only banks like Ally and Simple require the same thing.
In fact, every bank requires this. If you opened a bank account in-person some years ago you may not remember but they asked you for your government-issued ID card.
Not yet...
Preemptively fulfilling KYC requirements probably eases the barriers to incorporating financial services into the platform. I'm not a fan of this, just thinking out loud
It seems like a stretch to think Instagram is entering the financial services space. Facebook owns Insta and Messenger already offers payments. It's probably more likely that Instagram is using traditional KYC tools, probably developed for other parts of the Facebook ecosystem, to influence other KPIs such as the number of bots in the system. But that's just my own guess as well.
SSN and knowledge based auth are sometimes sufficient depending on where you fall against the company's risk model.
KYC processes are a type of situation where it is not a good idea to extrapolate all people's experience from one person's. The underlying mechanisms intentionally vary from person to person.
I think it does. A couple of online banks I tried out all needed a short video of my face saying my name, not just a photo of ID; that's not enough anymore.
In Norway it does unless you use the national BankID electronic signature. I'm pretty sure this is standard in most of Europe.
To get my BankID I had to submit an authenticated copy of my passport done by the postal service as they didn't accept regular drivers license or anything else.
I also work in a bank, breach of KYC procedures could in the worst case land the caseworker and/or manager in jail for two years and all employees are required to undergo AML training at least once every two years.
Edit: was to be a response to the parent comment as there is no requirement for video here.
Wording like this is really tricky (intentionally so!); it exploits our naive notion of digital information as files.
One aspect is the machine learning you mention, but they can also create a new "file" from your image with metadata about your image, metadata rich enough to render the original image unnecessary. There is no way out of this relationship with FB; however they formulate words to comfort you, there is always a different, unexpected side to what they are saying.
The more data they keep about these images that they are promising to delete, the more they would make themselves liable to be sued for privacy violation (especially in the EU).
It is a blurred line, but one that most rational companies (=not Uber) would prefer to avoid in the first place by only storing data that is necessary for their main functionality.
But that is going to change in the new year with the new laws, with fines of up to 2% of their global revenue. And IIRC the fines can be applied retroactively as well.
Monetizing on people’s relationships and social addiction, collecting and providing detailed data on all user activities, sentiments and connections, selling sets of such data to ...?
Advertisers. If you've ever bought an ad on Facebook, you've seen that you can pick your target audience by which types of food they shop for, or their income range in 10k increments, alongside thousands of other filters... fuck man, they have a helluva ad platform
Not facebook's main functionality. The feature's functionality.
e.g if you are building a captcha test feature, it should not keep more data than is needed for it to function.
This is the policy where I work at least, and I believe this is shared by most privacy-conscious companies (which facebook is... as far as the legal definition of privacy is concerned :)).
... and the new iphone also wants to "set up FaceID", and Samsung's S8+ literally says "Face Recognition: Register your face." / "Fingerprint Scanner: Add your fingerprints." / "Iris Scanner: Register your irises".
It's also something directly related to a face. "Want to unlock your phone by looking at it? It needs to know your face". But "Want to post a meme to Facebook? It needs your face" feels quite detached.
Apple doesn’t backup TouchID or FaceID data to the cloud (just as they don’t backup your iOS device passcode). You need to set up each device independently.
Under the EU GDPR (eg https://www.eugdpr.org/) it appears that they'll need explicit consent to retain the data (all of it, anything related and traceable to the person) and users will both have a right to view it and to have it deleted.
> How exactly does the EU propose to apply its laws to people dealing with Europeans outside the EU?
The EU can seize hard- and software of Facebook on their EU servers, the EU can seize Facebooks cash holdings in Ireland, the EU can seize (and sell off) Facebook’s patents held in the EU.
Stuff like this has happened before, and european governments have previously even seized airplanes from airlines right after landing to pay customers for a refund for a flight that the airline refused to pay.
He is saying that that law applies to EU citizens even when they are not located in the EU. Much more difficult than determining which country the user is connecting from.
The GDPR penalties can reach 4% of global (i.e. summed across all related companies) annual turnover per infraction. If anyone ever found out Facebook had a widespread violation, Facebook would likely be dissolved.
Nothing of value will really be lost, replacements would spring up super quickly and I'd like to think that we would have learned enough about the downsides of the current social-network systems for the next generation to avoid the pitfalls and hopefully make something much better.
Are there any instances of otherwise financially sound major global corporations being dissolved as a result of EU fines / regulatory action? I suspect if the rubber hits the road in this situation, FB would prevail.
That's insane. What's a "single" infraction?
It doesn't distinguish between revenue and profit? 4% of global revenue could be more than the total of all profit and salaries.
There are lots of things that constitute an infraction: the GDPR aims to broadly protect EU citizen rights. Some infractions can be per individual harmed.
And European courts consider intent carefully. Mere aggregation of personal data isn’t necessarily a violation, but using it commercially almost certainly is.
Yes, I guess it depends on what "picture" covers legally. Is a bijectively transformed version of your picture the same "picture"? How about a degraded version? How about a 3D model of your face?
Exactly. "We don't store your picture, we store an abstracted entity that allows us to recognize your picture in the future." The latter part is what people are concerned about anyway - the abstracted representation of "their face", not some particular photograph.
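To make that concrete, here's a toy average-hash sketch in Python (my own illustration; Facebook's actual fingerprinting algorithm isn't public, and a real one would be far more robust). The original "image" and a resized copy produce the same 64-bit fingerprint, so re-uploads can be matched without keeping the pixels around:

```python
# Toy 8x8 average-hash sketch (illustrative only; not Facebook's
# real algorithm). Images are lists of rows of 0-255 grayscale ints.

def average_hash(pixels):
    """Downscale to 8x8 by block-averaging, then emit one bit per
    cell: brighter than the overall mean or not."""
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // 8, w // 8
    cells = []
    for by in range(8):
        for bx in range(8):
            block = [pixels[by * bh + y][bx * bw + x]
                     for y in range(bh) for x in range(bw)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / 64
    return sum((1 << i) for i, c in enumerate(cells) if c > mean)

def upscale(pixels, factor):
    """Nearest-neighbour upscale, standing in for a 'resized repost'."""
    return [[v for v in row for _ in range(factor)]
            for row in pixels for _ in range(factor)]

# A synthetic 8x8 "photo": left half dark, right half bright.
photo = [[30] * 4 + [220] * 4 for _ in range(8)]
resized = upscale(photo, 4)  # a 32x32 copy of the same image

# The fingerprint survives the resize, so the service can recognize
# the repost even after deleting the original pixels.
assert average_hash(photo) == average_hash(resized)
```

A production system would of course tolerate crops, recompression and colour shifts too, which is exactly why a plain cryptographic hash like md5 wouldn't cut it.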
As I understood it you are making the argument that they will have information about you from uploading this picture even if they delete the actual picture. I'm genuinely asking if it is possible to recover an individual training example from a model which is shown vast amounts of data. I'm only aware of being able to recover stuff like this - https://ars.els-cdn.com/content/image/1-s2.0-S08936080173020... which is hardly PII
The goal is to be able to identify you, not recover an image of you.
This is just as much PII as your fingerprints. Fingerprint based devices store metadata not images. I can't look at a fingerprint and connect it to a person, but a machine can. Similarly the output of a NN designed to recognize people will be useful for validation. Otherwise biometrics would not work.
Remember if they are using your photo as training data the NN is going to learn you vs someone else or it's useless. If they can find you in any photo that's as effective as handing a person the uploaded photo and giving the same task.
I understood your question. What I am pointing out is that when you submit an artifact to FB, it is a trivial matter for them to store as 'extended-attributes' your facial biometrics. This has nothing to do with the ML pipeline that munches on data in the aggregate.
As to whether it is possible to extract such information, per my understanding the internals of ML pipelines are rather opaque, so it would be non-trivial, if not impossible. I have no idea either way.
I think there are two separate questions being raised. My understanding (EDIT: my understanding of the concern being raised in this discussion) was not that they would try to recover individual readings from a trained model, but that they would run a model on your face to extract features, then save an array of features that defines your face. Only those features are needed to identify your face - after data extraction the photo is no longer needed.
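That workflow can be sketched in a few lines. The numbers below are made up, standing in for the output of a real face-embedding network (which might produce, say, a 128-d vector): once the feature vector is stored, new photos can be matched against it by vector similarity, and the original photo is never needed again.

```python
import math

# Sketch of matching by stored feature vectors rather than stored
# photos. The vectors here are invented; a real system would get them
# from a deep face-recognition model.

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend these came out of an embedding model.
stored_vector = [0.11, -0.52, 0.33, 0.81]   # kept after the photo is deleted
new_upload    = [0.10, -0.50, 0.35, 0.80]   # same person, new photo
someone_else  = [-0.70, 0.20, -0.10, 0.40]

THRESHOLD = 0.9  # hypothetical decision boundary
assert cosine_similarity(stored_vector, new_upload) > THRESHOLD
assert cosine_similarity(stored_vector, someone_else) < THRESHOLD
```

The point is that the few hundred bytes of `stored_vector` are enough to re-identify the face; whether the photo itself is deleted is almost beside the point.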
This article talks about how those data points are extracted using a different method.
Most humans don’t even have a facebook account. Did you mean “most people in my observable social circle and subculture”? Just because many of my generation willingly give their identity away does not mean this behavior is normal or that this is an excusable form of exploitation.
I used to think richard stallman was just a paranoid lunatic...
Well, the more time goes by, the more I think he was just 100% right. It's time to be much, much more careful and radical about the path technology is taking us down.
It's time we all invest a bit of our time to provide real, open and benevolent alternatives to Facebook, Google, Amazon, and all the rest, because they're steering the internet toward an Orwellian nightmare.
Network effects. You and me care. Sadly the vast majority around us bow unquestioningly to demands made by powerful groups like their govt or huge mega-corps. People seem to think these power-centres will always be benevolent and flawless... which history shows is a big mistake to assume.
I wouldn’t put a democratic government on the same level of danger as profit-driven mega corps. I’m pro free market, but some corps have reached state-level impact on people’s lives, and they remain without any kind of oversight.
I also don’t want state-run web sites to try to provide alternatives to Google (like Europe tried to do 10 years ago), so I think the foundation model is the perfect structure for that. Economically responsible, but not profit driven.
Fake News! Can't we have some trusted institution like ycombinator put these kind of comments through their truth filter before allowing them to be posted? They could even partner with Facebook to do this. /s
I will never do this. I don't even post photos of myself on FB, or use "Messenger" or their mobile app, so it probably wouldn't work anyway, but I have no doubt they're looking to monetize the feature and that businesses and governments are their target markets.
I wonder if they are using their camera permissions in Messenger to automatically grab an image when someone opens the app...is there any way to verify that?
They still have your face from scanning your friends' photos and contacts. Combine datestamp, location, messages, and posts over years and Facebook knows who was together for a photo whether or not you've uploaded any or been tagged. Facebook knows the face of everyone who is on Facebook or who knows a few people who are.
I can pull a fingerprint off the subway stair rails, but that doesn't mean I can identify whose it is. We anonymously reveal ourselves every day; the danger comes from confirming which data is ours. The goal is to make it difficult, not impossible.
They don't need you to confirm it. This isn't a criminal case. They just need to be reasonably confident. Your face is in a picture with four other people. They can see that phones belonging to four people, who regularly chat with each other, were all at the same location at the time the photo was taken. They are now reasonably confident that your face is one of those four. Do that five or six times and their robots will know your face. Whether you confirm it or not doesn't really matter.
No, not really. There are not many of photos of me on the web and any that are were taken from a distance and too low res to id me.
I've been stingy about that for a long time. FB doesn't have my phone number and I disabled the email address I used to sign up. I delete all my cookies often and use different browsers and turn off and reset my modem to get a new IP address.
I have no apps on my phone, don't use iCloud, and don't use the native "Contacts" app, I made my own for that.
I know I'm still being tracked, but not as much as most and I don't think FB could ID me right now with photos they have.
Yeah this is the real rub; even if you yourself do not give Facebook information, it's been proven that they collect profiles on you. Anyone who has messaged you, has you in their contacts, sent you invites, added you to groups, all this gets added to a hidden profile. You are being tracked whether you gave out info or not.
I doubt iOS lets a custom contacts app respond to a request from another app for contacts though. It'll presumably respond empty data (since they have no contacts in the native app), but with no option to give it the actual contacts if you wanted to.
Perhaps a workaround would be to pick a random celebrity, somebody with a lot of photos online, and upload one of those.
Facebook would have no way of knowing that it's not you, and they aren't likely to complain that you look like somebody else, since facial recognition will likely always pick up loads of duplicates when used on a global scale.
Alright, they are using it when you've been locked out of your account to verify that you are who you say you are.
You'd have to seed the account in advance by uploading some photos and identifying them as yourself, and you'd have to be able to find photos that haven't been uploaded to Facebook previously.
Accounts using celebrities as a profile picture may actually be easier to hack, since the hacker can just upload another picture of that celebrity when challenged.
I may be exceptionally bad at attracting and maintaining friendships. But my experience is that my time between friends is not fungible, and the strength of my friendships is not constant.
For example, my best friends are on the opposite coast. It's not financially realistic for us to hang out in person frequently. The 5 hours a week I could spend sending them letters or calling are not the same as 5 hours I could be spending turning acquaintances who live next door into good friends. Maybe even best friends. It's not theoretically a zero-sum game, of course, but we all know that's not how life actually works and that for one friend to have more of your time/attention means that another loses some time and attention.
What FB has helped me do is to remain easily connected to good friends and acquaintances everywhere. When moving to a new town, finding acquaintances who live there who have common interests with FB is much easier than calling those acquaintances out of the blue and hoping they'll hang out. Most of these acquaintances remain acquaintances, but some become friends -- and the cost of making those friends was substantially lowered.
For remote best friends, FB gives us a way to passively share our lives beyond calling and writing letters. Even if I had unlimited time to spend creating and sending scrapbooks and doing FaceTime, my other friends may not. It's not as good as being together, but I love the option to browse a friend's albums of past recent events on my own time, and then being able to at least experience those memories in a small way.
So I guess I see Facebook as being a very interactive Rolodex. It's not where I conduct my friendships (although I do, to some degree); it just makes maintaining friendships much more efficient. To the point that I keep connections I would've otherwise dropped, because Facebook has reduced the long-term "maintenance cost". But since it's ultimately a Rolodex to me, I find it easy to ignore and not care what anyone is doing if I don't feel like it, and I have lowered expectations of what I should be getting from FB.
> my friend count went way down, my friend quality went way up.
I too quit around then, and I think I feel that most on my birthday. I only hear from probably a tenth the people but now when they remember on their own and text/call it means a lot more.
> my friend count went way down, my friend quality went way up.
This is an important distinction. FB has programmed us to believe quantity is the goal, when quality probably should be. How many FB "friends" does the average user have meaningful relationships with? I'd speculate not many more than it's possible to maintain offline/without FB's "help".
I'm not sure how true that is; virtually everyone I've encountered distinguishes unqualified “friends” from “Facebook friends”, the two groups being distinct, usually overlapping, categories. There are certainly people who treasure their number of social media followers, just as there have always been people who treasure the offline equivalent, and social media like Facebook makes it easier to quantify and compare than the offline world. But I don't think it's really that common for people to conflate social media network sizes with genuine friendship.
It's just a tempting, low-hanging way for HNers to feel superior to others.
"I bet all these idiots cannot tell the difference between real friends and the Facebook friend counter like I can. ;) BTW I've deleted my Facebook because I had a crippling addiction to it."
A few years ago I walked away from Facebook. Three months later, I got a phone call from my mother (who's not online) because she heard through the grapevine that I was dead. My stupid Facebook "friends" took my silence for something sinister and jumped to conclusions.
I ended up returning to Facebook and purging my "friend" list down to just a dozen actual friends.
Why are they stupid? All of a sudden, with no warning or explanation you stopped posting and responding? If I had a friend like that I might be concerned and reach out to friends or family to see if everything is okay. To me that signifies that they care about you, not that they're stupid. It's not like they filed a police report or anything.
My account went inactive and eventually was disabled. To get back into the "nightclub" now, I must submit a copy of my photo ID to verify my identity. No thanks to the Facebook bouncer thugs guarding the entrance...
I had a similar experience, but submitted a picture of my congressman (who was standing in front of a billowing American flag) which was accepted without comment by Facebook.
FB is most likely trying to ward off spammers with this idea. Generating unique faces per spam-bot is easy now, for sophisticated spammers, then for every spammer in a few months. The poster is trying to say that this FB effort won't work well and is already only adversely affecting the most stupid and trusting.
Good point, I was only considering the case where FB uses this technique to verify an existing user's identity using facial recognition (which seems like a "logical" use case).
I missed the part where they said they might use it for account creation as well (which makes less sense and seems like a pretty baroque, intrusive and ineffective captcha).
>"Please upload a photo of yourself that clearly shows your face. We’ll check it and then permanently delete it from our servers."
>"To determine if the account is authentic, Facebook looks at whether the photo is unique."
The two statements are a bit contradictory. They might delete the photo, but they won't delete its signature/fingerprint, because they need the latter to check other accounts for uniqueness.
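In other words, something like this sketch (I'm using an exact SHA-256 hash for simplicity, where Facebook presumably uses a perceptual fingerprint that survives edits): the photo bytes can be thrown away immediately, but the uniqueness check only works because the fingerprint is retained forever.

```python
import hashlib

# Sketch of "delete the photo, keep the fingerprint". The set of known
# fingerprints is exactly the data that can never be deleted if the
# uniqueness check is to keep working.

known_fingerprints = set()

def verify_and_discard(photo_bytes):
    """Return True if this photo is unique; the bytes themselves are
    never stored, only their fingerprint."""
    fp = hashlib.sha256(photo_bytes).hexdigest()
    if fp in known_fingerprints:
        return False          # seen before: account flagged as not authentic
    known_fingerprints.add(fp)
    return True               # photo_bytes can now be "permanently deleted"

assert verify_and_discard(b"selfie-1") is True
assert verify_and_discard(b"selfie-1") is False  # duplicate detected
assert verify_and_discard(b"selfie-2") is True
```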
Count your blessings. They could have additionally asked for a picture that "clearly shows your genitals".
Anywho, once the 'revenge porn' crowd starts hacking around this by cropping the heads out of said images, the central servers of FB are sure to ask for pics of genitals.
I uploaded a cropped/mirrored picture of Jack Nicholson in the Shining. After 48 hours the account was working again, no idea what happens in that process.
Well, it's rather disappointing that they didn't catch something as simple as that ;)
It's already nontrivial to create fake Facebook accounts. For those serious about it, who already deal with IPs and mobile numbers, I can't imagine that creating novel photorealistic faces would be all that problematic.
Knowing fb, this is for their benefit, not the users i.e, they're trying to catch spammers and fake accounts, not see if you're the one accessing your own account or not.
I fear it's much more insidious than that. Facebook has a tendency to start doing some invasive stuff and then announce it to the world only 5 years later.
They probably plan to use your facial profile for a number of things, none of which have to do with authenticating you to the Facebook website. I even see them sharing the profiles with the DHS to build more accurate facial recognition at airports, and other stuff like that. But of course they wouldn't admit it now, because it would mean everyone refusing to use it from day one.
No wonder Facebook's attempt at getting people to give them their credit cards to enable ecommerce on the platform has been such an utter failure. The most popular searches on Google on this issue are whether or not you can trust Facebook with your credit card data.
That's happening for a reason - Facebook has consistently tried to build a reputation of "shady-as-fuck" company throughout the years, and it's going to pay the price for it, either through stuff like people rejecting its ecommerce platforms, which means fewer billion-dollar monetization opportunities for Facebook in the long-term, or simply stopping using it when they get tired of the company's practices.
> I fear it's much more insidious than that. Facebook has a tendency to start doing some invasive stuff and then announce it to the world only 5 years later.
> They probably plan to use your facial profile for a number of things, none of which have to do with authenticating you to the Facebook website. I even see them sharing the profiles with the DHS to build more accurate facial recognition at airports, and other stuff like that. But of course they wouldn't admit it now, because it would mean everyone refusing to use it from day one.
This validates my little project of uploading a ton of stock photos to my Facebook account and tagging myself in them.
> Facebook has consistently tried to build a reputation of "shady-as-fuck" company throughout the years, and it's going to pay the price for it
Unfortunately, I think the fact that many people don't realize platforms like Instagram and WhatsApp belong to Facebook will allow them to avoid paying the price.
Last I checked, 2FA on Facebook REQUIRES a phone number to be provided, so that your phone is an SMS-based backup method. Apparently all the talented engineers at Facebook didn't think "oh, let them download a few spare backup codes like every other service", or realize that SMS-based backups are a stupid idea.
Well... I don't want to legitimize their "need" for my phone number (which granted they already figure out through contact books of others, they don't even hide that they've inferred it). Phones and 2fa are not dependent upon one another.
They say they will permanently delete it off of their servers, but the numerous cases where they lied about deleting account information doesn't exactly engender a feeling of trust.
A friend does HW for Seagate servers and has talked a bit about the 'math' of servers. The reality is that FB may or may not delete the pics, no matter what they think they really did. With the number of servers they utilize, one is failing about every second, permanently erasing all the data on it. Yes, there are backup servers and copies of the data. Still, a lot of data is getting trashed every second. So, they may be trying to weasel out of deleting these pics, but do not trust that they are smart enough to get around the hard facts of server decay. If they are being honest, then there is still no way to determine whether some backup somewhere is actually storing the data, despite their best efforts. The code stack for something like FB is so huge, there really is no way of knowing where a piece of data is or isn't. Kafka would be beaming at its absurdity.
In addition, they also have a lot of 'fresh' servers out there, spinning for months without being written to, as some algorithm works its way around to using the fresh servers. These also fail, having never been used. Seagate does not mind when companies like FB do this nonsense.
Question: is Wired actively blocking archive.is? I tried to archive this article just to get a repeated "Network error" [1] which may be a sign of a Layer 3 block. A trivial search [2] shows no page is successfully archived since the end of September.
I was recently locked out of my FB account. The only way to unlock it was to identify my friends' pictures. Apparently sending me an e-mail was too much trouble ...
Haven't been able to log in since, as I refuse to partake in that sort of bullshit. Can't say I miss it.
Consider designing an account security system that can deal with all the kinds of problems that arise when 2 billion people are using the service (email forgotten, email stolen, password compromised, blah blah) and then tell us email verification is the one-stop solution to fixing all of that.
The last time that happened to me I was asked to choose whether my acquaintance I had all but forgotten was one of two random people, a dog, a tree or part of a waffle that claimed to be five other people as well. Fun times.
Maybe by now FB actually has knowledge about people in the image but back then my peers just tagged random things as other friends or themselves for fun.
Exactly, the point is it validates and updates the current status of the friendships you have. Further, it informs Facebook which people you really know, as opposed to your "Facebook friends".
FB doesn't necessarily have the names of everyone in all those photos.
I wonder if they would ever do something like show you three photos of friends as the captcha, but they only know the answers for two of them and are using the third to get you to give them info they didn't already have without you knowing.
I read somewhere that actively deleting the content might give you more legal protection in getting it deleted, whether now or in the future. I don't know for sure, but I did it anyway. Also, FB can't show fake endorsements to your friends if all of your likes have been removed.
I mean you need to pick your poison here. You either give facebook more data to validate your authenticity or you allow botnets to influence the social networks. It seems impossible to have it both ways, although I'm willing to hear alternatives in the general case.
Authenticity validation could be completely external from Facebook, and if they gave a shit about your privacy they would not only be OK with it but even push for it.
Many countries already have crypto-secure solutions that let citizens verify their identity when submitting things like tax forms. Facebook just needs to start using them, but they won't, and it's obvious why.
The US isn't the world. "Government Internet IDs" already exist, and there is nothing inherent in them existing that eliminates online anonymity, it just removes it on Facebook (which is exactly the point).
Imagine having to use your government ID to even connect to the internet. Having all your data logged and cataloged to your ID. At one point in time that is what the US government wanted to do. It starts by normalizing the action. Not a world I want to live in.
You're completely de-railing the discussion and trying to respond to this nonsense in any constructive way would only de-rail it further. No amount of strawmen and US-centric "the government wants to enslave us all" propaganda will make me think that the now existing, well working digital ID systems available in the world (not the US) are the work of the illuminati, should be abolished, and/or will be surgically inserted into my brain in the near future.
I made a point that there are already ways for Facebook, in certain countries, to verify the identity of their users, if they so please; and that Facebook will never use these because they'd rather control the system of verification themselves (prioritizing profit, not privacy and integrity of users). They could use the systems already in place and say "you know what, the state of Estonia is telling us this guy is who he says he is, we'll accept that". What you're doing is arguing against government-backed digital identification in any broad definition of the term, and you can have that opinion but it's off topic.
Because most countries don't have crypto-secure identity verification, which means Facebook would still need something like this? I suppose they could contract out to Experian (or other local providers) and ask you to answer some questions from your credit report, but I'm not sure that's really a good solution either.
I thought it was clear enough that I meant that it's obvious why [Facebook will never implement it even in countries where it already exists]. It's not in their interest to do so, because it's not in their interest to protect the privacy of their users.
Just because you could only think of a sarcastic reason why Facebook doesn't want to do it doesn't mean that other reasons for it don't exist. That's precisely what the GP is saying—FB needs to come up with systems that work for literally the whole world.
How is my reason sarcastic? I'm being completely serious. And yes, of course it needs to work world-wide but I think it's naive to think that's the primary reason FB would be doing it. They'd be doing it because they want complete control of the authentication ecosystem.
> You realize Facebook's algorithm is most likely keeping those posts well hidden from your "friends", right?
Yes, but not completely. I do get a handful of likes/comments on them and one friend commented in person about how often I post them. If they're suppressed by the algorithm, they're not getting suppressed too much.
In any case, I don't put a ton of effort into them. They're just articles I've discovered naturally with a pull quote or short summary.
> You either give facebook more data to validate your authenticity or you allow botnets to influence the social networks
My basic logic/truth table says there's another (possibly more likely scenario): that you give FB more (intimate) data AND botnets still influence your social networks.
I mean, a bot could simply designate using a given stock image for each of their imposter personas, non?
If it can't be fooled by a picture found online (but not uploaded to facebook) then that must mean facebook is indexing images across the internet and analysing all the faces they find... and if it can, what's the point (edit: from a user point of view)?
I find their claim that the picture will be deleted from their servers at odds with their claim that they will make sure that photo is unique.
I suspect that what they're not saying is that they will keep some signature/hash/data about the photo, which I'm sure they will use for much more than just verifying uniqueness.
Yep, that's in the article. They say they'll hash it and delete it (the photo). They don't claim they'll delete the hash. Having the hash is enough to check for uniqueness. Although it's sort of an over-strict definition of uniqueness. Example: I change one pixel → That produces a drastically different hash → That makes the almost-identical photo seem unique.
I'd be surprised if they weren't extracting other info from the photo or training ML models on it as others suggest here.
> I change one pixel → That produces a drastically different hash
If they're hashing the raw bytes, sure. If they're hashing a representation of the face that encodes meaning (like the relative locations of facial features), then changing one pixel is unlikely to affect that.
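A quick way to see how brittle raw-byte hashing is (purely illustrative, this says nothing about what Facebook actually runs): flip a single bit in the input, which stands in for changing one pixel of an uncompressed bitmap, and a cryptographic hash changes almost completely.

```python
import hashlib

# Stand-in for raw image bytes; flipping one bit models changing one pixel.
original = bytes(range(256)) * 16          # 4096 "pixel" bytes
tweaked = bytearray(original)
tweaked[0] ^= 1                            # smallest possible change

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(bytes(tweaked)).hexdigest()

print(h1 == h2)                            # False: the digests are unrelated
# Roughly 15 out of every 16 hex digits differ, on average (avalanche effect).
diff = sum(a != b for a, b in zip(h1, h2))
print(diff)
```

That avalanche behavior is exactly why a fingerprint meant to survive small edits can't be a plain cryptographic hash of the bytes.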
Are they allowed to sell the "fingerprint" packaged with your name to advertising agencies? If so it should only be a matter of time before those Minority Report inspired advertising screens start popping up everywhere..
The advertisers can always protect that system with some DRM to get the mighty DMCA on their side. That way they don't need to build a robust solution and can still wipe their hands clean if they lose it all to a 12 year old script kiddie.
>> “Please upload a photo of yourself that clearly shows your face. We’ll check it and then permanently delete it from our servers.”
>> To determine if the account is authentic, Facebook looks at whether the photo is unique.
I assume this means that the photo itself is deleted but a one-way hash of it created to test against later.
However, if I change one pixel of a picture of my face, A, to produce a new picture of my face, A', the hash of A' will not be the same as the hash of A, correct? And I can repeat this process n x m times, where n, m are the dimensions of the image, ja?
Additionally- when they say "unique", do they mean "known as unique" to Facebook, or "unique in the entire world"? If I take a picture of myself and put it on, dunno, my blog on Blogger, what's stopping someone copying it and uploading it to Facebook to pretend it's me? Will Facebook search the entire web for images potentially matching an uploaded image?
For the record, I don't use Facebook. And there are no pictures of myself anywhere on the internets. As if.
Hashes don't necessarily work on the raw bit representation of something. You can have a hash which works on higher level constructs, which for images one solution would be called a perceptual hash.
OK, thanks- I didn't know that. It looks like perceptual hashing can even identify a heavily modified version of an original (e.g. with noise added etc). So what I say about changing a single pixel would definitely not work.
I'm sure it's still possible to steal a picture from someone else's online presence outside of facebook though and upload it as "unique" and facebook won't have any way to know that, unless they scan every other social site (and the rest of the internet, possibly).
And someone could always take a picture of someone else's face, with a bit of social engineering, and use it to get access to their facebok- although I don't think that's the kind of thing facebook want to detect here.
Hashes produce fingerprints(including perceptual hashes). Most people are talking about cryptographic hash functions when talking about 'hashing' but that is just one type.
There is a wikipedia page for 'perceptual hashing' as well as a number of libraries available which claim to do perceptual hashing, so it may just come down to what kind of crowd you roll with :)
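To make the distinction concrete, here's a toy "average hash" in the spirit of what those libraries do: downscale to a tiny grid, threshold each cell against the mean brightness, and pack the bits. This is only a sketch; real perceptual hashes (pHash, PhotoDNA, etc.) are far more sophisticated. But it shows why a one-pixel change doesn't move the fingerprint.

```python
def average_hash(pixels, size=8):
    """pixels: 2D list of grayscale values (0-255). Returns an int fingerprint."""
    h, w = len(pixels), len(pixels[0])
    # Downscale by averaging pixel blocks into a size x size grid.
    grid = []
    for gy in range(size):
        for gx in range(size):
            block = [pixels[y][x]
                     for y in range(gy * h // size, (gy + 1) * h // size)
                     for x in range(gx * w // size, (gx + 1) * w // size)]
            grid.append(sum(block) / len(block))
    mean = sum(grid) / len(grid)
    # One bit per cell: brighter than average or not.
    bits = 0
    for v in grid:
        bits = (bits << 1) | (1 if v > mean else 0)
    return bits

# Toy 64x64 "image": a bright square on a dark background.
img = [[200 if 16 <= x < 48 and 16 <= y < 48 else 30
        for x in range(64)] for y in range(64)]
tweaked = [row[:] for row in img]
tweaked[0][0] = 255            # change a single pixel

print(average_hash(img) == average_hash(tweaked))  # True: fingerprint survives
```

Resizing, recompression, and single-pixel edits barely move the per-cell averages, so the bit pattern stays the same, which is the property a cryptographic hash deliberately destroys.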
Tineye has been able to do this black magic for a while [1]. I understand what they're saying they do, but it's too high level for me to grasp what algorithms are being used.
>"The company declined to share details to prevent the system from being manipulated. Suspicious activity might include someone who consistently posts from New York and then starts posting from Russia."
So in other words, people who travel or go on vacation?
Numerous systems that scrutinize IP-based geolocation information already throw up additional security checks when detecting changes.
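For illustration, the low-hanging-fruit version of such a check might look like this (entirely hypothetical; real systems weigh far more signals than the geolocated country of the login IP):

```python
from collections import deque

RECENT_LOGINS = 20  # assumed sliding-window size

def is_suspicious(history: deque, country: str) -> bool:
    # Flag a login whose country matches nothing in recent history.
    return len(history) > 0 and country not in history

def record_login(history: deque, country: str) -> bool:
    suspicious = is_suspicious(history, country)
    history.append(country)  # deque(maxlen=...) drops the oldest entry
    return suspicious

history = deque(maxlen=RECENT_LOGINS)
print(record_login(history, "US"))  # False: no history yet
print(record_login(history, "US"))  # False: matches history
print(record_login(history, "RU"))  # True: new country -> extra verification
```

Which is exactly why travelers trip it: the first login from any new country looks identical to a hijack until some secondary check (SMS, photo, security questions) clears it.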
Personally I welcome it, especially when banks do it. It's low-hanging fruit TBH. It massively fucks with those using VPN or Tor but I'm fine with that.
Except that FB isn't a bank and doesn't store one's financial data.
Also, this argument falls flat as we are an increasingly mobile population, whether that's travel for pleasure or business.
I remember watching in amusement/horror as a friend I was traveling with in Thailand wasn't able to access her Facebook account unless she verified via her phone that she was actually attempting to log into her account from Thailand. It's like you're supposed to check in with Facebook and let them know where you are. So creepy.
What sort of ridiculous nitpicking is this? First, FB does have a Payments feature, so if you are being pedantic, it does deal with one's financial world.
Second, for a lot of people, their FB profile has a lot of confidential information, and is tied to their personal identity so they can't have the risk of it being compromised based on leaked passwords or whatever. It may not matter for you, but I am happy that they are safeguarding my account just the way a bank would.
There's nothing nitpicky in anything I wrote. FB is not a bank. And it can not be used as someone's financial institution.
>"Second, for a lot of people, their FB profile has a lot of confidential information, and is tied to their personal identity so they can't have the risk of it being compromised based on leaked passwords or whatever."
Who puts "confidential information" on FB?
FB is based on the premise of voluntarily sharing information. If people are willing to share information on a social networking site, it can't really be considered "confidential" or even "sensitive" to you, can it?
Also FB is not one's identity, despite what FB would like you to believe that it is.
I have to say: as much as I probably agree with every critic of this… The damn thing is called FACEbook, so it would almost be wrong for them not to do this!
(I don't know a mark for whatever this is, it's not sarcasm or irony, something about tragic comedy fate…)
AirBnB does this today; I had to go through the process a few weeks ago to create an account to book a room! And of course the process failed... and of course we thought we were going to miss out on our room booking... and of course I now really dislike AirBnB.
Sorry, I don't understand, but wouldn't someone (suspicious) also have a photo of yours if they have access to your account? They could easily upload that photo and pass the captcha test. Or am I missing something?
If I understand it correctly they would have to provide a photo with your face that FB hasn't seen before. Still rather easy to obtain for the determined.
Facial verification, together with cell phone number, which requires an ID to purchase, has been the standard practice for internet companies in China for a while now, e.g. for WeChat, Alipay, etc.
Applies to most of the biggies: Since I am a behemoth of tech, give me all your personal info and surrender yourself. I will tell you what you should see, eat, buy, think & whom you should befriend! In return your everyday personal data is my asset, my asset only! It's up to me how I use your personal data. NO QUESTION SHOULD BE ASKED! Remember you have signed the agreement (Privacy Policy)!
So right around when we start exploring the dumpster fire of SSNs and personal data collection with Equifax and the entire credit reporting / financial industry, a new form of personal data collection, as well as mixing up identification and authorization, arises ... (I blame Apple too...)
Apart from possible biometrics collection, it is Facebook that now decides how you behave and what to grant you for your behavior. Like Santa for adults.
Maybe it’s time to open your eyes and see that there is no Santa since 1984?
This isn't very future-proof. Computer graphics today can produce human faces that are extremely realistic. Someone will make a random-face generator, and a bot will pipe random faces to Facebook.
You would think that for captcha purposes (ie. a unique, quickly produced, creative item) it would suffice to say "draw a picture of a dog" (or "upload a clear photo of your hand").
> The process is automated, including identifying suspicious activity and checking the photo. ... The Facebook spokesperson said the photo test is one of several methods, both automated and manual, used to detect suspicious activity.
So I don't get if this "captcha" process is automated or manual (...or both)? Somewhere else in the article it says that users are apparently locked out of their accounts until the pic is verified. Seems odd that there should be a lock-out period if the process is automated (as the first sentence of the above quote implies).
ALSO, for a captcha, isn't this dead easy for bots? Get a DeepDream/Generative Adversarial Network instance to generate faces for you. Bam. This is not a barrier.
Where was it specified that lock-outs are to prevent repeated attempts? It is a plausible reason but the premise of my question (and confusion as to whether this process is automatic or not) stems from:
> ...users are locked out of their accounts _while the photo is being verified_. A message said, “You Can’t Log In Right Now. We’ll get in touch with you _after we’ve reviewed your photo_. You’ll now be logged out of Facebook as a security precaution.”
Emphasis mine.
"after we've reviewed your photo" --> FB can autotag people as pictures are uploaded. Surely they can verify face similarity instantly?
> The face has to match the face of the account, so generating a new face won’t work.
It quite easily could work well if the account has, or is tagged in, any public images and those are used as input to the generation process. (Or if it's associated with a person of whom there are public images in other sources, off of Facebook.)
I'm assuming the process is sophisticated enough that you can't just post an existing public Facebook image of the user without modification, but that may be too generous.
i signed up with facebook (with a unique email), and encountered this. i used a picture (a painting, rather) of george washington.
it seemed to work at first, but when i tried logging in a month or two later my account was disabled. perhaps i wasn't the only person using that image and they were displeased.
Skynet is entertainment. A real ASI will be much more proficient at removing threats.
Both nanobots and a genetically engineered super virus, for example, would be very well suited to extinguish humanity in a timeframe that makes resistance / retaliation impossible.
Now if I as a dumb human can come up with that, just imagine what a being incredibly smarter than anything we could imagine could come up with.
> In a statement to WIRED, a Facebook spokesperson said the photo test is intended to “help us catch suspicious activity at various points of interaction on the site, including creating an account, sending Friend requests, setting up ads payments, and creating or editing ads.”
They're basically training people to provide PII/biometrics on-demand as the price of using their service.
Sounds pretty weird. So if I want to steal someones account I just either A) take a picture of them or B) download a picture of their face from facebook? What's the point exactly?
Seems as silly as uploading your nekid pics so facebook can prevent them from being distributed. If they really wanted to protect you they would just let you download an app that could analyze the files and upload just enough to detect similar video/pictures.
What has facebook done in the past to earn such trust? Seems just the opposite. Isn't it crazy to assume facebook will keep anything you upload secure?
I have long thought if I'm missing anything by not perusing Facebook event pages or anything. Fortunately within my interests (hifi audio, local jazz, photography and computer special interest group meetings) they all still operate within self-hosted forums or blogs and not in facebook.
>Oh boy, what's bound to be another thread full of "delete your Facebook", the new and improved "I don't own a television" statement.
Maybe you're being downvoted because this statement is just as cliché, if not moreso (given the amount of FB users vs non FB users) than the "delete your FB" cliché
This doesn’t seem too weird considering the service is called “face” book, and the point is to upload photos of your face. They have been using facial recognition for years now, this is a natural extension of that.
I am not a user, but I see the value. It’s creepy as hell, yes, but I see the value.