Thursday, November 21, 2013

Session 4 Discussion: Social

Q: How should we view privacy? From today's session, on one hand you have privacy preservation, and on the other you have logging, and Ben is saying that nobody is telling the truth.
A: [Ben] I'm agnostic. My work doesn't say anything about privacy, and since the truth isn't out there, it's almost irrelevant.
A: [Anmol] It's difficult to enforce privacy, so our approach is to provide transparency as a "different" approach to privacy preservation.
Q: [to Anmol] Is your Chrome extension going to be available?
A: [Anmol] Yes, maybe next week.
A: [Ben] We can all take proactive approaches to preserving privacy. For example, if I were a FourSquare user, I'd add a script to obfuscate my own data.
A: [Anmol] Most tools for anonymizing things today are point solutions. There is a SIGCOMM paper on unifying these tools across the "user profile" stack; that type of work might help.
A: [Karen] There is some work on payment schemes where users (those being targeted) can monetize the ads themselves.
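
To make Ben's suggestion concrete, a self-obfuscation script might look roughly like the sketch below. This is purely illustrative: the noise scale and function name are assumptions, and the step that actually posts the check-in is omitted since it is not tied to any real FourSquare API.

    import random

    def obfuscate_location(lat, lon, radius_deg=0.01):
        """Return the location perturbed by uniform noise within ~radius_deg (~1 km)."""
        return (lat + random.uniform(-radius_deg, radius_deg),
                lon + random.uniform(-radius_deg, radius_deg))

    # Blur a check-in to roughly neighborhood granularity before it ever leaves the client.
    noisy_lat, noisy_lon = obfuscate_location(40.4443, -79.9436)
    print(noisy_lat, noisy_lon)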
Q: An observation that the pull model might actually help simplify privacy: you can just ask for, say, any 1000 samples of something, without attributing those samples.
A: [Harsha] Good thought, but latency might be a concern (see his talk).
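
For illustration, the pull model raised in the question could be sketched as below: a store that accepts contributions but never records who contributed, so any client can pull, say, 1000 samples with no attribution. The class and method names here are hypothetical and are not taken from Harsha's system.

    import random

    class UnattributedStore:
        def __init__(self):
            self._samples = []  # values only; contributor identities are never stored

        def contribute(self, user_id, value):
            # Deliberately drop user_id so samples cannot be traced back to anyone.
            self._samples.append(value)

        def pull(self, n=1000):
            # Return up to n samples in random order, with no attribution.
            return random.sample(self._samples, min(n, len(self._samples)))

    store = UnattributedStore()
    for uid, reading in [("alice", 72), ("bob", 68), ("carol", 75)]:
        store.contribute(uid, reading)
    print(store.pull(2))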
Q: As computer scientists, we already understand some of these things, but the average user might not be as sophisticated. We might want to be careful about giving these users another tool with which they may mistakenly reveal something private.
A: [Harsha] Overall, we are targeting users who are currently sharing nothing, and the pull model may help with this. But the proof will be in the pudding (upon deployment).
A: [Ben] Where should responsibility lie: users or providers? For the former, you're assuming sophistication, and for the latter, you're assuming the right incentives, etc. It's unclear if one answer will work.
