Stop The Presses!
Hertz puts cameras in its rental cars, says it has no plans to use them, Fusion, March 13, 2015
“This week I got an angry email from a friend who had just rented a car from Hertz: ‘Did you know Hertz is putting cameras in rental cars!? This is bullsh*t. I wonder if it says they can tape me in my Hertz contract.’ . . . ‘Hertz added the camera as a feature of the NeverLost 6 in the event it was decided, in the future, to activate live agent connectivity to customers by video. In that plan the customer would have needed to turn on the camera by pushing a button (while stationary),’ [Hertz spokesperson Evelin Imperatrice] explained. ‘The camera feature has not been launched, cannot be operated and we have no current plans to do so.’” Hertz also pleads that it lacks the capability even to use these cameras, explaining that only “one in 8 Hertz cars has a camera inside” and that they “do not have adequate bandwidth capabilities to the car to support streaming video at this time.”
If Hertz were to officially enable the feature, the FTC would be likely to get involved quickly: “Not notifying customers that they might be on candid camera is generally frowned upon legally.” The FTC has already had to crack down on both a rent-to-own company which “failed to warn customers that it had put spyware on their laptops” and caught some, um, personal activity, as well as force GM to warn consumers when GM installed “nanny cam”s in its vehicles, since it’s “legally problematic to spy on people in your car without their knowing about it.”
Via @Fusion @TheRealFuture @KashHill
Note: Yeah, even if right now I’m only facing 1 in 8 odds, and Hertz claims they’re not spying on me, the Evil Hackers sure might be and I’m therefore never renting from Hertz again…
Back To Basics
Montana and Washington State Propose Amendments to Data Breach Legislation, Hunton Privacy Blog, March 13, 2015
HB 1078 available here. “On March 4, 2015, the House of Representatives of Washington passed a bill (HB 1078), which would amend the state’s breach notification law to require notification to the state Attorney General in the event of a breach and impose a 45-day timing requirement for notification provided to affected residents and the state regulator. The bill also mandates content requirements for notices to affected residents, including (1) the name and contact information of the reporting business; (2) a list of the types of personal information subject to the breach; and (3) the toll-free telephone numbers and address of the consumer reporting agencies. In addition, while Washington’s breach notification law currently applies only to “computerized” data, the amended law would cover hard-copy data as well.”
Note: The blog piece is short and goes into some more detail regarding the bill – it’s worth at least a once-over. It’s nice that the bill applies to that old-fangled tree-killing stuff that some people use. We (I) so often associate “data” breaches with computers that we forget rooms full of hard-copy backups and/or originals yet to be digitized still exist.
Outsmart the Evil Hackers
Yahoo’s plan to get Mail users to encrypt their e-mail: Make it simple, The Washington Post, March 15, 2015
Google’s announcement available here. “End-to-end encryption, a feature which locks up message contents so that only the sender and receiver can read them, can be a much more cumbersome process for e-mail [than the already available SSL encryption for Web mail users — meaning data can be seen by the service, as well as the senders and recipients of messages]. [End-to-end encryption] often involv[es] specialized software and looking up encryption keys. . . . But in the wake of reports from Edward Snowden . . . [many] tech giants, [including Google and Yahoo], have pursued technological solutions to shore up customers[’] trust, including an expansion of end-to-end encryption.”
Via @TheWashingtonPost @TheSwitch @KansasAlps
Note: They are SO winning the game today. I’m excited! Happy Monday!!
How to make it harder for hackers to assemble your personal information, Slate, March 16, 2015
“It is nearly impossible to participate in modern society without entrusting your most sensitive personal information to countless Internet-based systems. . . . So the question . . . is: How can you keep your personal information secure while continuing to participate in a society powered by the extensive sharing of personal information? . . . To address the challenges posed by the always-on sharing economy, we need to shift the way we think about personal security. . . . Pieces of personal information should by themselves not be allowed to unlock anything. Instead, they should act like puzzle pieces lying on a table: visible to everyone, but very difficult to fit together without additional information.” Jordan McCarthy’s key recommendations:
- “Dual-factor authentication”
- “Establish a credit freeze”
- “Make local copies of all important account statements—and check them every month”
- “Consider how your accounts are linked, and use bogus information for security questions” [Note: for instance, perhaps choosing your father’s pet name for your mother would be more secure than simply her maiden name]
- “Keep important secrets … secret” [Note: you would think that this would be a no-brainer]
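The “dual-factor authentication” codes most services generate are time-based one-time passwords (TOTP, RFC 6238): a shared secret plus the current time, run through an HMAC, yields a short code that changes every 30 seconds. As a minimal sketch (nothing here is specific to any one service), the algorithm fits in a few lines of standard-library Python:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Generate an RFC 6238 time-based one-time password (SHA-1, 30 s steps)."""
    key = base64.b32decode(secret_b32)
    # Number of 30-second intervals since the Unix epoch
    counter = int((t if t is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, t=59, digits=8))  # → 94287082
```

The point of the second factor is that even a hacker who has assembled all of your “puzzle pieces” of personal information still cannot log in without the device holding that shared secret.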
Via @Slate @FutureTense
Note: This is a great guide offering some simple tips for protecting yourself online.
Hey Twitter, Killing Anonymity’s a Dumb Way to Fight Trolls, Wired, March 13, 2015
“Tor users started reporting last week that they are being prompted more frequently than ever for a phone number confirmation when creating a new Twitter account—or in some cases when using a long-standing account. This development is disastrous for the free speech the platform generally stands for, and will likely not curb the abuse for which it has come under fire. If this change was targeted at that harassment—addressing the leaked acknowledgment from CEO Dick Costolo that “We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years”—it’s a dangerous example of the Politician’s Syllogism: we must do something; this is something; therefore, we must do this.. . . Unfortunately, that undermines the anonymity of the people who need it most, without necessarily providing protection for targets of harassment.. . . Asking these people to provide a phone number puts them at risk: in many places they will be forced to tie any phone number to a real-life identity.. . . Cracking down on anonymity tools may seem like something to do, but Twitter—and the other online platforms we count on—need to do better than just doing something.”
Via @Wired @Xor
Note: This is an extremely complex issue, to which I do not yet have a solution. But I thought this article was worth your attention.
Search and Destroy
Facebook Clarifies Rules on What It Bans and Why, NYTimes, March 16, 2015
Current Community Standards available here. The article says that “On Monday, the company will clarify its community standards to give its users more guidance about what types of posts are not allowed on the service.”
“Facebook walks a delicate line when it tries to ban violent or offensive content without suppressing the free sharing of information that it says it wants to encourage. Its audience is vast, with a huge variance in age, cultural values and laws across the globe. Yet despite its published guidelines, the reasoning behind Facebook’s decisions to block or allow content are often opaque and inconsistent.” For instance:
1) “The company flip-flopped repeatedly on whether to allow beheading videos on the service before recently deciding to ban them.”
2) “Facebook has always banned pornography and most other nudity, but it is now diving into the nuances” and
3) “The company is for the first time explicitly banning content promoting sexual violence or exploitation, including so-called revenge porn, which it defines as intimate images “shared in revenge or without permission from the people in the images.””
But, most importantly, “One thing that has not changed [is that] Facebook has no plans to automatically scan for and remove potentially offensive content”, leaving Facebook to continue to “rely on users to report violations of the standards”, with take-downs taking “typically 48 hours on matters of safety”.
Via @NYTimes @VinduGoel
Note: Though we obviously want Facebook to continue to be a place where people are free to voice their opinions, users need to feel safe doing so. Facebook is a large enough company at this point that it should not be able to hide behind the limited size of its review team – it could easily hire more reviewers. I imagine the company is so concerned with complying with take-down requests from governments (the second half of the article) that it simply does not have the manpower to make user take-down requests a priority. But it’s not as if the code does not already exist. As I mentioned in my blog piece regarding The 4th Annual Privacy Law Forum: Silicon Valley and in a live tweet during the conference, Danielle Citron shared that there is Code used during child molestation cases to automatically delete content from the Internet. I tracked this article down to explain how it works:
(Old but related to the above story)
Google tipped off police over emailed child abuse images, The Guardian, Aug. 4, 2014
“Images are hashed, a process that creates a unique identifier (known as a hash) while rendering it impossible to recreate the initial image, and the hash is compared to a database of known child abuse images. The technology used by Google to hash the image is unique, and was developed specifically to solve this problem. The hashes are then compared with a database of known child sexual abuse images, and if they match, the image is passed on to the NCMEC, or its British counterpart the Internet Watch Foundation. At that point the first human – a trained specialist at one of the two organisations – sees the image, and decides whether or not to alert the authorities.”
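The Guardian does not publish Google’s actual algorithm, which is proprietary and built to survive re-encoding and resizing. But the overall pipeline the article describes (fingerprint the file one way, then check the fingerprint against a database of known hashes) can be sketched with an ordinary cryptographic hash standing in for Google’s perceptual one; all names and data below are hypothetical:

```python
import hashlib

def file_hash(data):
    """One-way fingerprint of an image's bytes. The image cannot be
    reconstructed from the hash; SHA-256 here is only a stand-in for
    Google's proprietary, re-encoding-resistant hash."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of hashes of known abuse images. Only hashes are
# stored -- the scanning service never holds or views the images themselves.
known_bad_hashes = {file_hash(b"<bytes of a known image>")}

def scan_attachment(data):
    """Return True if the attachment matches the database, i.e. it should
    be escalated to a trained human reviewer (NCMEC / IWF)."""
    return file_hash(data) in known_bad_hashes

print(scan_attachment(b"<bytes of a known image>"))     # → True
print(scan_attachment(b"some ordinary holiday photo"))  # → False
```

One caveat the stand-in hides: a cryptographic hash changes completely if a single pixel changes, which is why real systems of this kind (Microsoft’s PhotoDNA is the best-known example) use perceptual hashes that tolerate minor edits.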
Via @GuardianTech @AlexHern
Note: At least Google and AOL use this technology, and the article implies that Microsoft and other ISPs may as well. The ACLU, understandably, is concerned about the existence of the technology due to its potential for abuse. Remember that the Coders tend to create solutions for one problem without thinking through all of the potential ramifications; if you had to wait for us to think of every single possible outcome from using the programs we write, you would never, ever get new software.
Note: As for applying the technology in an effort to enforce Community Standards on Facebook, the Code would need to be revised so that it did not rely upon an already-existing database. Further, unfortunately, the Code does still require human review. But at least it would cut out the first level of enforcement by removing Facebook’s current reliance on users for flagging content.