Privacy

The US Senate Dives Into Privacy Issues

August 4, 2010
| Blogs & Op-eds

ITIF Senior Research Fellow Richard Bennett explains the changing nature of personal privacy in the Internet era and why Congress should take such changes into account when attempting to regulate the Internet.

Data Privacy Principles for Spurring Innovation

June 10, 2010
| Reports

As data on individuals or their actions increasingly is collected and stored electronically, it is important for policymakers to consider the effect this has on privacy. The Department of Commerce's notice of inquiry provides a welcome opportunity to explore the best ways of protecting individual privacy while avoiding constraints on business innovation and unintended negative impacts on consumers as a whole. Privacy is important, but it must be balanced against competing goals including usability, cost and future innovation. While many technologies can be misused, they should not be banned simply because they come with some risk. Privacy fundamentalists often overstate privacy concerns as a rationale for opposing certain innovations: we have seen this in everything from RFID to biometrics to electronic health records. Moreover, restrictive privacy regulations for the private sector would likely result in less innovation, fewer free services for the average user, and higher costs for consumers. Instead, policymakers should embrace principles that support consumer privacy, but not at the expense of productivity and innovation.

ITIF Statement on Facebook's New Privacy Controls

Statement by Daniel Castro, senior analyst for the Information Technology and Innovation Foundation, on Facebook's new privacy features:

"These new features give consumers more choice and more control over their information--a win for both Facebook and its users. Facebook's latest changes show that companies are responding appropriately to their customers' concerns about privacy.

ITIF Comments on Draft Privacy Legislation

May 25, 2010
| Testimony and Filings

Comments submitted by ITIF Senior Analyst Daniel Castro to House Energy and Commerce Subcommittee on Communications, Technology and the Internet Chairman Rick Boucher and Ranking Member Cliff Stearns on the discussion draft of privacy legislation.

Google’s WiFi Controversy Deserves a Slap on the Wrist, Not a Punch in the Nose

May 24, 2010
| Blogs & Op-eds

On May 14, 2010, Google announced that while developing its location-based services, it had been inadvertently collecting samples of payload data from wireless network traffic since 2007. This means that Google had captured data packets sent over unencrypted wireless networks—data packets that could include sensitive Internet traffic such as email and web browsing activity. The disclosure from Google followed an internal review prompted by the German data protection authority’s request for an audit of Google’s wireless network data collection. The company has apologized for its mistake and has halted its wireless network data collection indefinitely.

Not surprisingly, this mistake has prompted outrage and criticism among privacy advocates and consumer protection agencies in the United States and Europe. While no company should illegally collect information about citizens without their permission, this situation is not as straightforward as some would like to make it seem. Certainly the legality of Google’s actions should be explored, as with any alleged crime, but this incident should not be used to impose punitive sanctions on Google unless it is found that the company caused consumer harm or did not act in good faith. To do so would send a chilling warning to other companies that they face severe risks if they engage in fast-paced innovation of information-based services.

The initial facts show that Google has acted responsibly. For example, Google publicly disclosed the unintentional data collection and asked a third party to review the incident and ensure the data is properly destroyed. More importantly, no user harm has been identified. Of course, the magnitude of the incident remains unclear. Some questions remain unanswered or unverified, including how much information was collected, how the data was used during this time (Google says it was not used in any Google products), and whether any copies of the data were made. These facts should determine how government agencies proceed.

Unauthorized recording of electronic data should not be condoned or encouraged. However, while we may wish that companies never make mistakes, this is simply an unreasonable expectation given the complexity of many applications today. This does not mean that companies should be given a free pass to break laws or be absolved of negligent behavior, but it does mean that issues such as harm and intent should weigh heavily when mistakes are made. The goal of government should be to strike a balance that protects consumers while still encouraging socially responsible and economically beneficial innovation.

Unfortunately, as ITIF has argued previously, the current debate on electronic privacy has been driven largely by privacy fundamentalists who object to virtually all instances of for-profit companies providing unregulated services in the information economy. Yet, overall, consumers have benefited widely from the unbridled creativity of the private sector with innovative products and services. For example, Google collected this wireless network traffic as part of the development of its location-based services. Google’s location-based service allows users to pinpoint the location of their computer or mobile device on various Google products and services. Google also has created a Geolocation application programming interface (API) that developers can use to integrate Google’s location-based services in non-Google products. Consumers have benefited from this service through many useful location-aware applications, from location-based social networking, to online maps that show users their location and nearby points of interest, to “location-based security” that locks or unlocks certain features on a mobile device based on where it is (e.g., unlocked at home).

To determine a device’s location, Google uses a variety of signals and data sets, including GPS, cell towers, and wireless access points. Different location indicators are used depending on the situation and the device; for example, some devices may not use GPS if they do not have a GPS receiver or cannot receive a signal. By “fingerprinting” wireless networks, Google is able to use this data to determine the geographic location of a mobile device. Other companies, such as Skyhook Wireless, have created similar products that map wireless networks to specific geographic locations.
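To make the mechanics concrete, here is a minimal sketch in Python of how a Wi-Fi fingerprinting lookup of this general kind might work; the database contents, weighting scheme, and function names are all hypothetical, and the sketch illustrates the basic idea rather than Google’s or Skyhook’s actual method.

    # Illustrative sketch (not any provider's real implementation): estimate a
    # device's position from the Wi-Fi access points it can currently see.
    # A fingerprint database maps each access point's MAC address (BSSID) to
    # a surveyed location; the device reports the BSSIDs it observes and how
    # strongly, and the service averages the matching coordinates.

    # Hypothetical fingerprint database: BSSID -> (latitude, longitude)
    AP_DATABASE = {
        "00:11:22:33:44:55": (38.8951, -77.0364),
        "66:77:88:99:aa:bb": (38.8954, -77.0360),
    }

    def estimate_location(observed_aps):
        """observed_aps: list of (bssid, signal_strength_dbm) pairs."""
        weighted_lat = weighted_lon = total_weight = 0.0
        for bssid, dbm in observed_aps:
            if bssid not in AP_DATABASE:
                continue
            lat, lon = AP_DATABASE[bssid]
            # A stronger signal (less negative dBm) suggests the access point
            # is closer, so weight it more; 100 + dbm is a crude linear proxy.
            weight = max(100 + dbm, 1)
            weighted_lat += lat * weight
            weighted_lon += lon * weight
            total_weight += weight
        if total_weight == 0:
            return None  # no known access points in view
        return (weighted_lat / total_weight, weighted_lon / total_weight)

    print(estimate_location([("00:11:22:33:44:55", -40), ("66:77:88:99:aa:bb", -70)]))

Real systems refine this with triangulation, GPS, and cell-tower data, but the core point stands: only the access points’ identifiers and surveyed locations are needed, never the contents of anyone’s traffic.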

Google collected the wireless data using the same vehicles it uses to collect imagery for its Street View service, although the two programs are otherwise unrelated. Specifically, Google used wireless receivers in its vehicles to record the beacon frames broadcast by wireless networks as its vehicles traveled through different areas. These data packets include the Service Set Identifier (SSID), which identifies the network name, and the Media Access Control (MAC) address, which is a unique identifier for each network device. Google also recorded data on signal strength, broadcast channel, and the wireless networking standard used (e.g., 802.11g, 802.11n, etc.). Google does not share its data set about wireless networks with third parties but rather provides an API to interface with this data. This means that Google does not make public some potentially sensitive information, such as a list of all open wireless networks in a neighborhood.
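For readers curious what “recording beacon frames” looks like in practice, the following is a minimal sketch using the open-source Scapy packet library; it assumes a wireless card in monitor mode, the interface name is a placeholder, and it illustrates passive capture in general rather than reconstructing Google’s software. Logging only beacon frames yields exactly the metadata described above; capturing all traffic on a channel would sweep in payload data as well, which is the mistake Google made.

    # Minimal sketch of logging Wi-Fi beacon frames with Scapy (requires a
    # wireless card in monitor mode; "wlan0mon" is a placeholder name).
    # Beacon frames are broadcast in the clear, which is why a passing
    # receiver can record them without joining any network.
    from scapy.all import sniff, Dot11, Dot11Beacon

    def log_beacon(pkt):
        if not pkt.haslayer(Dot11Beacon):
            return
        stats = pkt[Dot11Beacon].network_stats()  # parses SSID, channel, etc.
        ssid = stats.get("ssid")                  # network name
        bssid = pkt[Dot11].addr2                  # access point MAC address
        channel = stats.get("channel")            # broadcast channel
        # Signal strength, if the capture includes a RadioTap header:
        signal = getattr(pkt, "dBm_AntSignal", None)
        print(f"SSID={ssid!r} BSSID={bssid} channel={channel} signal={signal} dBm")

    sniff(iface="wlan0mon", prn=log_beacon, store=False)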

Google’s accidental data collection also raises the question of whether users should have an expectation of privacy when transmitting unencrypted information wirelessly. Arguably, if users transmit data without encryption, they should not expect the information to remain private, since the data packets can be visible to others. This should be especially true today, now that encryption is standard and easy to use on consumer-grade wireless routers and web browsers. However, existing law generally protects encrypted and unencrypted data transmissions equally. This differs from oral communications, where legal protections apply only to “oral communication uttered by a person exhibiting an expectation that such communication is not subject to interception under circumstances justifying such expectation.”

A useful analogy: are wireless transmissions more like a mailbox or a window? If they are more like a mailbox, then wireless transmissions should be protected. A mailbox may not be locked, but that does not give others permission to look inside. However, if wireless transmissions are more like a window, then—unless individuals take action to cover the windows—we do not have a high expectation of privacy. (Of course, even with windows we have Peeping Tom laws, which vary by jurisdiction, that protect people from intrusive actions by others. However, in many cases, these laws require the victim to demonstrate harm.)

Objections to this legal treatment have been raised for many years. For example, in 1986, when Congress updated the Electronic Communications Privacy Act (ECPA), a New York Times op-ed highlighted the legislation’s failure to distinguish between wired and wireless transmissions: “To disregard the medium is to ignore the essence of the privacy issue. Some media, such as wire, are inherently private. That is, they are hard to get at except by physical intrusion into a residence or from a telephone pole. Other media, notably radio signals, are inherently accessible to the public.”1 Given the ease with which electronic data transmissions can be encrypted, perhaps this incident should serve as a red flag that privacy laws should be clarified to give more legal protection to private data when it is encrypted (i.e., to prevent unauthorized decryption of encrypted data) and less when it is not. Such an update would align legal rights with technical realities.

Again, we are not excusing Google’s mistake or seeking to reduce the privacy of individuals. However, this incident should not be seen as just another opportunity to criticize Google for its use of information and demand more regulation of electronic data. Instead, it is an opportunity to highlight the need for the private sector to implement better internal controls, for consumers to protect their sensitive information, and for nations to ensure that their laws protect both consumers and the spirit of innovation.

Endnotes:

1. Robert Jesse, “How Not to Protect Communications,” New York Times, September 13, 1986, p. 27.

Castro Discusses Facebook on NPR's All Things Considered

May 23, 2010

ITIF Senior Analyst Daniel Castro speaks with NPR's Guy Raz on All Things Considered about recent changes on Facebook and their privacy implications.

Facebook Is Not the Enemy

May 18, 2010
| Blogs & Op-eds

The criticism of Facebook has reached an all-time high this past week as privacy advocates have fanned the flames of discontent among Facebook users, some of whom are confused and upset by recent changes in the service and new features for sharing data. This criticism centers on two new features Facebook debuted at its F8 Developer Conference in April--instant personalization and social plugins. The first feature, instant personalization, allows certain partner sites to use data from a Facebook user's profile to customize their online experience. For example, if a Facebook user visits Pandora, a customizable Internet radio website, instant personalization will allow Pandora to create a custom radio station for the user based on the likes and dislikes in their Facebook profile. The second new feature, social plugins, allows developers to place a Facebook widget on their website so that visitors can "Like" a page or post comments. These interests can then be shown on a Facebook user's news feed, and users can see their friends' activity. It is important to note, though, that websites like the Washington Post that use Facebook's social plugins do not see any of the user's personal information--but Facebook users benefit by receiving recommendations about web pages that their friends like or recommend. Users can opt not to use either of these new features.

Much of the frustration among users seems to stem from a feeling that they must continually monitor their privacy settings as Facebook introduces new features, rather than having a "set it and forget it" privacy option. However, some users paradoxically complain both that the controls are too numerous and too confusing and that they do not have enough control over their personal information. Some users are also unhappy with recent changes that make certain personal information public if Facebook users choose to share it. Certainly Facebook could have done a better job of explaining its recent changes to users and helping them update their privacy settings, but many companies struggle with this challenge and Facebook seems to be getting better at it over time.

Other users are simply confused about what Facebook is doing. Contrary to some misconceptions out there, Facebook is not selling consumer data; it is selling targeted advertising. Targeted advertising works by matching ads to users based on the information in their profiles. So, for example, a wedding photographer in Dallas can pay Facebook to serve an ad to everyone in Dallas who switches their relationship from "single" to "engaged." But Facebook does this without ever revealing any personal information to the advertiser. This benefits everyone--the photographer gets more clients, the users get relevant ads, and Facebook is better able to fund its free services.
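A toy example may help show why targeted advertising is not the same as selling data. In the sketch below (all field names and records are invented), the advertiser supplies only targeting criteria; the platform matches those criteria against its own user records and serves the ad, and the advertiser never sees the underlying profiles.

    # Illustrative sketch: the platform, not the advertiser, holds the data.
    USERS = [
        {"id": 1, "city": "Dallas", "relationship": "engaged"},
        {"id": 2, "city": "Dallas", "relationship": "single"},
        {"id": 3, "city": "Austin", "relationship": "engaged"},
    ]

    def match_audience(users, criteria):
        """Return the IDs of users matching an advertiser's criteria; the
        advertiser receives ad impressions, never the profiles themselves."""
        return [
            u["id"]
            for u in users
            if all(u.get(field) == value for field, value in criteria.items())
        ]

    # The Dallas wedding photographer targets newly engaged users in Dallas.
    audience = match_audience(USERS, {"city": "Dallas", "relationship": "engaged"})
    print(audience)  # -> [1]; the platform serves the ad to this user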

Moreover, some of the complaints about Facebook are just downright absurd. For example, the New York Times published an infographic criticizing Facebook's 5,830-word privacy policy while ignoring the fact that the Times' own privacy policy runs 4,857 words. And opponents of Facebook's policies seem to miss the irony of using Facebook as their primary tool to organize a protest, since doing so only highlights how useful Facebook can be for users. For example, Diaspora--the anti-Facebook start-up--is raising money using Kickstarter, a grassroots fundraising tool that uses Facebook to help spread the word. And many blogs and news websites that host articles critical of Facebook have already implemented the new Facebook features at the heart of the controversy, including social bookmarking tools and targeted advertising.

Let's face it--when you have 400 million users, you are not going to be able to make everyone happy. But privacy fundamentalists--those individuals who value personal privacy above all other values--don't want to set privacy rules just for themselves; they want to set them for everyone else. For example, Danah Boyd, a fellow at Harvard's Berkman Center for Internet and Society, claims that Facebook is a utility and should be regulated like one. Others, like the developers of Diaspora, are trying to develop an open-source social networking tool where data is decentralized and in the hands of private citizens rather than being controlled by corporations. And still others want government to pass more regulations on how companies can use consumer data. All of these individuals share at least one thing in common--they want a world where for-profit companies are not providing largely unregulated services in the information economy. While they may see this as a noble goal, the end result for the average user would be less innovation and fewer free services. Facebook, and many other useful social networking tools, would not exist today if they had to rely on donors and grants instead of investors and venture capital, and if their innovations were DOA because of strict privacy regulations.

If privacy fundamentalists only cared about their own privacy, they could simply opt not to use Facebook. But these people instead see the world as one in which everyone else needs to be saved from having Facebook and its army of evil programmers sell off their personal data to the highest bidder (something Facebook is not doing). Consider a recent statement by Chris Conley at the ACLU, who said, "People are not necessarily thinking about how long this information will stick around, or how it could be used and exploited by marketers." It is this type of paternalistic view of Internet users that is at the heart of arguments in favor of government regulation to protect consumers from themselves.

However, as we've written previously, Facebook is neither a right nor a necessity. Certainly others agree with this sentiment. As Betty White recently joked on SNL, "Facebook... sounds like a huge waste of time. When I was young we didn't have Facebook, we had a phone book, but you wouldn't waste an afternoon with it." It may be fun, useful, and part of the daily routine for 50 percent of its 400 million users, but its popularity should not be grounds for government intervention.

Government intervention is not needed because there is already a simple solution--if you don't like Facebook's policies or services, then don't use it. Or use it, but use Facebook's privacy options to not share certain information. Facebook offers consumers a tradeoff--consumers can use its free service, and in return, Facebook can sell targeted advertising based on user data (which, again, is fundamentally different than selling consumers' data). Consumers face trade-offs all the time. I personally don't like paying $10 for popcorn at the movie theater, so I usually skip the popcorn line. Sure, I would be happier with the popcorn, but I'm not willing to make the trade-off. The trade-off Facebook offers is no different, and consumers who do not want to make the trade can simply not use the service. This same idea was echoed last week by Facebook executive Elliot Schrage, who (perhaps more tactfully) stated, "We are not forcing anyone to use it."

However, for all the media attention and criticism Facebook has received over the past few weeks, most people seem content to remain with Facebook. Some people will leave Facebook; indeed, some already have. For example, Leo Laporte, the host of This Week in Technology, generated much media attention by deleting his Facebook account during last Wednesday's podcast. But this does not seem to be catching on. The "Quit Facebook Day" website has attracted fewer than 3,000 pledges. And while some individuals (mostly privacy fundamentalists) will choose to delete their accounts (if they were using Facebook to begin with), for most users the benefits of Facebook still seem to outweigh the costs of using it. Why? Because it is a great tool for making and keeping connections, and most users get a lot of value out of using it.

This is certainly not the first time that Facebook has faced criticism about its service. In 2006, Facebook users protested the introduction of the "News Feed," which showed status updates from Facebook friends; yet a few years later, the News Feed is a standard feature embraced by users. Even one of the leaders of the 2006 protest has now come to Facebook's defense, arguing that "Facebook...is the wrong target for our anger. It has done more to bring people together than any technology of the last five years, and the good it has brought far outweighs the bad. We made the decision to turn our personal information over to a private company, and for the most part Facebook made good use of it."

Social media is encouraging people to be more open about their lives. Some social networking tools, like Twitter, even make data public by default. But individuals still have control over what information they share and with whom they share it. Moreover, no user is forced to use a social networking tool. Facebook should not be criticized for trying to monetize its business model, which, yes, involves serving users targeted ads based on their profiles and personal data, and attracting as many users as possible by offering innovative new features. But it is not selling user data to advertisers, and it does not plan to.

Moreover, Facebook is not the enemy. With or without Facebook, people will continue to share information and use personal data for work and for play, and users will have to learn to be responsible for their own actions and aware of what they do online. Some users may not like some of Facebook's recent changes, but it is not the bogeyman privacy fundamentalists have made it out to be, and it should not be regulated like one. By and large, the changes Facebook has made have been designed to create a more useful and interesting Internet experience for its users. Before rushing to regulate social networks, policymakers should remember not only that applications like Facebook provide users with real value in ways that do not violate their privacy, but also that users have many ways of controlling their privacy on these networks.

Privacy fundamentalists will continue to insist that the government implement data privacy regulations, partly on the basis that consumers need to be protected from their own choices and that the kind of mass customization that new Facebook applications enable is not needed. But this kind of paternalism is not needed. People have learned in the offline world how to protect their privacy (e.g., they close the drapes at night before undressing), and they are learning it in the online world by not sharing sensitive information and by controlling their privacy settings (and yes, exhibitionists both offline and online can share what you and I might consider too much information). Moreover, stringent new privacy regulations that effectively neuter all useful information sharing on the Internet would not just hurt Facebook (a U.S. company, it should be noted, that sells its services around the globe, creating jobs and export revenue for the U.S. economy); they would also stall future innovation and possibly eliminate many of the useful applications that people use today.

One Step Forward, Five Steps Back: An Analysis of the Draft Privacy Legislation

May 5, 2010
| Reports

Draft legislation on consumer privacy offers many opportunities for improvement.

FULL TEXT

On May 4, 2010, U.S. Representatives Rick Boucher (D-VA) and Cliff Stearns (R-FL) released a discussion draft of legislation governing data privacy.[1] The legislation would create specific data usage and handling requirements for nongovernmental organizations that collect, use or disclose consumer data. Organizations not following these requirements would be subject to penalties from enforcement actions brought by the Federal Trade Commission (FTC) or by state attorneys general and state consumer protection agencies. The legislation does not create a private right of action.

As consumer data increasingly is collected and stored electronically, it is important for Congress to consider the effect this has on privacy. The discussion draft provides a welcome opportunity to explore the best ways of protecting individual privacy while avoiding constraints on business innovation and unintended negative impacts on consumers. However, much of the concern over data privacy is speculative, and consumers have experienced few, if any, harms under current privacy laws. Before Congress enacts new laws, it should first demonstrate that better enforcement of existing privacy regulations is insufficient to protect consumers. Enactment of this legislation as drafted would add yet another layer of complexity to the existing patchwork of federal laws regulating consumer privacy, including the Gramm-Leach-Bliley Act, the Fair Credit Reporting Act (FCRA), the Health Insurance Portability and Accountability Act (HIPAA) and the Fair Debt Collection Practices Act (FDCPA). Moreover, it represents yet another push for more government control over the private sector in the name of protecting consumers. Too often such legislation ends up imposing new costs on consumers and limiting innovation and the development of new online services.

This is not to say that a federal framework for consumer privacy would not be useful. However, policymakers should recognize that consumer privacy should not come at the expense of beneficial uses of individual data. For example, some organizations, such as LegiStorm, which provides salary information on Congressional staffers, and OpenSecrets.org, which tracks money in politics, use personal data to provide online tools to foster transparency and public accountability. Other organizations use consumer data for other beneficial purposes, such as providing a service or delivering targeted advertising.

In its current form, the draft legislation presents many problems, including: 1) raising costs for consumers while creating few benefits; 2) establishing affirmative consent (“opt in”) requirements for the collection, use and disclosure of certain types of information; 3) creating certain restrictions on behaviorally targeted advertising; 4) granting the FTC the authority to establish a security standard to protect consumer information; and 5) failing to update privacy laws regarding government use of digital data. However, one positive element of this draft legislation is that it includes a preemption clause so that the proposed federal law would supersede any state regulations. To be effective, a federal framework for consumer data privacy should establish a single, nationwide standard for consumer privacy, thereby reducing regulatory complexity for the private sector.

Bad: Raises costs for consumers while creating few benefits

The draft legislation includes certain provisions that create unnecessary costs for the private sector, costs that will ultimately be borne by consumers.

First, in an effort to apply the rules not just to the Internet, the draft legislation mandates that, in certain instances, organizations provide offline notification of their privacy policy to consumers. The legislation states, “If the covered entity collects covered information by any means that does not utilize the Internet, the privacy notice required by this section shall be made available to an individual in writing before the covered entity collects any covered information from that individual.” The potential impact of this one sentence could be substantial, as it would likely require many organizations to provide paper-based copies of their privacy policy to individuals. For example, this requirement would appear to limit the ability of organizations to collect registration forms or surveys at in-person events such as conferences or sporting events without first providing copies of the organization’s privacy policy. Not only would this requirement cost a significant sum of money to implement nationwide, it would also be a waste of paper.

Second, the draft legislation mandates that all organizations covered under the legislation (i.e., virtually all online businesses) must have a privacy notice on their website conforming to specific requirements. The legislation defines 15 specific items that each privacy policy must contain, including, for example, “a hyperlink to or a listing of the [FTC’s] online consumer complaint form.” While privacy policies have been an industry best practice for many years, this legislation would impose a cost on organizations large and small, as they would need to review their privacy policies to ensure they conform to the legislation.

Bad: Establishes affirmative consent (“opt in”) requirements for the collection, use and disclosure of certain types of information

Currently, organizations operate under a notice and choice regime, whereby consumers can review the privacy policies, if any, offered by an organization, and then decide whether to use the services offered by that organization. For example, if a new mobile application or online service does not provide a privacy notice on its website, consumers can decide that this does not meet their standards and not use the application or service. While many privacy advocates would like to see a more granular system in which consumers could opt out of specific types of data collection and use, the current privacy regime is effectively an opt-out system, since consumers can decide whether or not to use a service based on the data usage and handling practices of an organization.

The draft legislation ends the current regime by establishing affirmative consent (“opt in”) requirements for certain situations, including: collecting sensitive information and location-based information; sharing information with third parties; and modifying an organization’s privacy policy. Others have shown how “opt-in is a rhetorical straw-man that cannot really be implemented by regulatory policies without creating a number of unintended side effects, many of which are suboptimal for individual privacy.”[2] In addition, opt-in requirements create an administrative burden on organizations, as they must ensure that every user takes a proactive step before the organization can offer that customer a specific service. Policymakers should endeavor to understand the costs of opt-in before enacting this requirement.

For example, the draft legislation unnecessarily restricts the collection of certain types of information related to an individual’s location or deemed “sensitive.” In addition, the restriction on sharing information with third parties would limit the ability of organizations to integrate their services with other providers. For example, organizations would find it more difficult to partner with outside entities to create a combined service. Mash-ups—remixing data across multiple external service providers—are one of the hallmarks of Web 2.0. Organizations might not be allowed to use services provided by another entity that require consumer information, such as an online mapping service, without affirmative consent. Similarly, the requirement that covered entities obtain affirmative consent from users before making any material changes in their privacy policies would restrict the ability of service providers to rapidly develop and deploy new services, such as the changes recently introduced by Facebook.[3] These types of restrictions would effectively create speed bumps to innovation.

Finally, by requiring organizations to obtain affirmative consent for every material change in their privacy policy, this legislation would create an incentive for organizations to establish unrestrictive privacy policies so that future development would not be needlessly constrained by their own policies.

Bad: Creates certain restrictions on behaviorally targeted advertising

While there is support in Congress for behaviorally targeted advertising, the draft legislation includes provisions that would restrict this beneficial type of online advertising, which provides consumers with more relevant ads.

First, the restriction on the collection and disclosure of certain types of information categorized as “sensitive” means there is an entire class of targeted advertising that cannot be used. The draft legislation defines sensitive information as data that relates to an individual’s medical information, race or ethnicity, religious beliefs, sexual orientation, finances, and precise physical location. Collection of this information would require organizations to first obtain affirmative consent. In particular, requiring affirmative consent to use data related to medical information, sexual orientation, race or ethnicity, and religious beliefs would rule out many potentially beneficial forms of advertising. For example, these restrictions could prevent marketers from effectively creating targeted ad campaigns for services like online Christian bookstores, Brazilian music stores, or gay dating websites.

Second, the draft legislation requires organizations to follow specific guidelines regarding the use of data in profiles used to provide services such as targeted advertising. For example, the legislation requires organizations to provide “a readily accessible opt-out mechanism whereby, the opt-out choice of the individual is preserved and protected from incidental or accidental deletion.” This requirement goes against current industry practice. The Network Advertising Initiative (NAI), the online advertising industry organization that has been developing most of the standards for third-party ad networks, currently uses cookies (small data files stored on a user’s computer) to allow consumers to opt out of participating online advertising networks’ behavioral advertising programs.[4] However, cookies do not fit the technology requirement as stated in the legislation, since they can be accidentally deleted by a user, and so these third-party advertising networks would not be in compliance with the new legislation.
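To see why a cookie-based opt-out conflicts with the bill’s durability requirement, consider a minimal sketch of the mechanism, assuming a hypothetical cookie name and header format rather than any network’s actual implementation:

    # Sketch of an NAI-style opt-out cookie: the ad network sets an "opt out"
    # cookie in the user's browser and skips behavioral targeting whenever
    # that cookie is present on later requests.
    from http.cookies import SimpleCookie

    def build_opt_out_header():
        cookie = SimpleCookie()
        cookie["opt_out"] = "true"
        cookie["opt_out"]["path"] = "/"
        cookie["opt_out"]["expires"] = "Fri, 01 Jan 2016 00:00:00 GMT"
        return cookie.output()  # e.g. "Set-Cookie: opt_out=true; ...; Path=/"

    def should_target(request_cookies):
        """Serve behavioral ads only if no opt-out cookie is present."""
        return request_cookies.get("opt_out") != "true"

    print(build_opt_out_header())
    print(should_target({}))                   # True: targeted by default
    print(should_target({"opt_out": "true"}))  # False: user has opted out

The catch is visible in the data model itself: the opt-out lives in the user’s browser, so clearing cookies erases it, which is exactly the “incidental or accidental deletion” the bill forbids.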

The legislation also requires websites to place a “symbol or seal” near every targeted ad that links to information about their advertising partner and information about any data associated with that user profile. Requiring targeted ads to have a special mark identifying them as such would unfairly disadvantage targeted ads against non-targeted ads. Given that targeted ads generate more than two times the revenue of non-targeted ads, this would have a negative impact on revenues for online publishers and service providers and would harm the Internet ecosystem, particularly the so-called “long tail” of small websites supported by ad revenues.[5] In addition, policymakers concerned with the decline of print media should note that greater revenue from targeted online advertising will likely be necessary for journalism to survive in the Internet age.

The legislation also states that users should be able to “review and modify” any preference profile created by an online ad network or other service provider. This requirement would force websites to build front-end systems to allow consumers to interact with data saved in their profiles. Currently, if consumers want to “opt out” of targeted advertising, they avoid websites that use this form of advertising or use various technical controls, such as web browser plug-ins, that block ads. This requirement would impose an unnecessary cost on service providers (and ultimately consumers) and would generate little to no real benefit for consumers. Users who choose to opt out of targeted advertising but still access a website’s content or services are free riders, getting all of the benefits of a free service without bearing any of the costs. It does not make sense to require service providers to build a system that makes it easier for users to free ride by opting out of targeted advertising. Unfortunately, this type of requirement reflects the prevailing message of privacy fundamentalists that privacy trumps all other values. However, policymakers should recognize that privacy, as with any other value, must be balanced against competing interests and can, as it would here, come at a real financial cost.

Bad: Grants the FTC the authority to establish a security standard to protect consumer information

The draft legislation grants the FTC authority to “establish, implement, and maintain appropriate administrative, technical, and physical safeguards” that it deems necessary. Such broad authority over every nongovernmental organization maintaining consumer information would give the FTC far-reaching control over the information security practices of the private sector. Using this authority, the FTC could effectively set the standard for the security practices of private-sector systems and networks. While the federal government does have a role in fostering good information security practices, the private sector is in a better position than the federal government to manage risk for its own systems and networks.

Bad: Fails to update privacy laws regarding government use of digital data

While the draft legislation attempts to enhance privacy for consumers, no mention is made of government use of consumer data. The legislation exempts government agencies from the same privacy standards that it would require of the private sector. Improper use of consumer data by government is arguably the greater threat, and one that prevents more widespread use of technologies like cloud computing. As ITIF and others have argued previously, Congress should act to reform laws such as the Electronic Communications Privacy Act (ECPA) to ensure that citizens have a right to privacy for their electronic data whether it is stored at home on a PC or remotely in the cloud.[6]

Good: Includes a preemption clause that supersedes state requirements

While legislators should look carefully at the concerns outlined here, the discussion draft does have one positive element: it includes a preemption clause. As the legislation states, “This Act supersedes any provision of a statute, regulation, or rule of a State or political subdivision of a State that includes requirements for the collection, use, or disclosure of covered information.” If Congress does move forward with privacy legislation, it should ensure that any new regulations preempt state laws; otherwise, online service providers will find themselves facing competing, and possibly contradictory, data use and handling requirements for consumers.

[1] “Staff discussion draft,” 111th Congress, 2nd Session, May 3, 2010, http://www.boucher.house.gov/images/stories/Privacy_Draft_5-10.pdf.

[2] Nicklas Lundblad and Betsy Masiello, “Opt-in Dystopias,” SCRIPTed 7, no. 155 (2010), http://www.law.ed.ac.uk/ahrc/script-ed/vol7-1/lundblad.asp.

[3] Daniel Castro, The Right to Privacy is Not a Right to Facebook (Washington, DC: The Information Technology & Innovation Foundation, April 30, 2010), http://www.itif.org/publications/facebook-not-right.

[4] “Opt Out of Behavioral Advertising,” Network Advertising Initiative (2010), http://www.networkadvertising.org/managing/opt_out.asp.

[5] “Study finds behaviorally-targeted ads more than twice as valuable, twice as effective as non-targeted online ads,” Network Advertising Initiative, press release, March 24, 2010, http://www.networkadvertising.org/pdfs/NAI_Beales_Release.pdf.

[6] “ITIF Calls for Updates to Privacy Laws,” Information Technology and Innovation Foundation, press release, March 30, 2010, http://itif.org/pressrelease/itif-calls-updates-privacy-laws.

The Right to Privacy is Not a Right to Facebook

April 30, 2010
| Reports

Don't like Facebook's privacy policy? Then don't use it. But don't ask government to run Facebook.

FULL TEXT

On April 27, four senators—Charles Schumer (D-NY), Michael Bennet (D-CO), Mark Begich (D-AK) and Al Franken (D-MN)—sent a letter to Facebook expressing concerns about Facebook’s current privacy policy. Specifically, the authors of the letter criticize Facebook’s decision to make certain data from a user’s profile public and to allow third-party partners to use and store this data. This criticism centers on two new features Facebook debuted at its F8 Developer Conference earlier this month—instant personalization and social plugins. The first feature, instant personalization, allows certain partner sites to use data from a Facebook user’s profile to customize their online experience. For example, if a Facebook user visits Pandora, an Internet radio website, instant personalization will allow Pandora to create a custom radio station for the user based on the likes and dislikes in their Facebook profile. The second new feature, social plugins, allows developers to place a Facebook widget on their website so that visitors can “Like” a page or post comments. These interests can then be shown on a Facebook user’s news feed, and users can see their friends’ activity. Users can opt not to use either of these new features.

For those who have been following the debate on online privacy, this letter should come as no surprise—countless advocacy groups have criticized companies like Facebook and Google for what they see as the erosion of user privacy online. However, contrary to what critics may say, the latest offerings from companies like Facebook and Google do not herald the end of privacy as we know it on the Internet. Instead, they reflect the natural evolution of online applications as they increasingly make use of user data to offer more personalized products and services and find ways to monetize otherwise free services. Yet, unfortunately, privacy fundamentalists (i.e., those individuals and organizations who place the protection of privacy above all else, refusing to see it as one value competing against others) continue to generate headlines by objecting to these companies’ efforts, arguing that they “violate user expectations” and “diminish user privacy.”

There are two different questions central to this debate: first, should Facebook be able to use private information to deliver products and services to its customers; and second, should any company be able to do this?

From a policy perspective, the first question is less interesting. The answer is one that will likely be settled by legal action (or the absence of it). Privacy policies exist for a reason: they tell users of a website what an organization can and cannot do with their personal data. If an organization deviates from its policy—if it uses private data for purposes that are in direct violation of its stated policy—then it can and should be held liable. Whether Facebook violated its stated privacy policy and whether it engaged in unfair and deceptive business practices is something that the FTC and other nations’ consumer protection agencies will have to decide.

The second question—should organizations be able to use private data for new types of products and services—is the more interesting one. Privacy fundamentalists routinely argue that consumers have an expectation of privacy regardless of what the privacy policy states, and that when organizations use personal data, for example, to recommend music or supply targeted advertising, they have violated this expectation of privacy. They argue that privacy policies are too difficult for consumers to decipher, or that consumers do not read them, and so government regulation is needed. It is this misguided notion, that consumer preference (or rather the preference of privacy fundamentalists) trumps business prerogative, that is central to the arguments made by privacy fundamentalists when calling for government to intrude on the business decisions of the private sector.

Yet even if you accept the premise that consumers once had an expectation of privacy, the last few years of debate over online privacy should make it clear to even the most casual user that this is no longer true. Many Internet companies clearly intend to continue to find innovative ways to use personal data to deliver products and services to their customers. While Facebook CEO Mark Zuckerberg may or may not “believe in privacy,” it is clear that Facebook thinks that companies should respond to changing social norms on privacy and that the overall trend is toward more sharing and openness of personal data. So going forward, no Facebook user (or privacy fundamentalist) can continue to use the service without admitting that the benefits of using the website outweigh any reservations the user has about sharing his or her personal data. As the saying goes, “Fool me once, shame on you. Fool me twice, shame on me.”

Certainly some users may still object to this tradeoff. But if you don’t like it, don’t use it. Facebook is neither a right nor a necessity. Moreover, it is a free tool that individuals can use in exchange for viewing online advertising. In fact, one high-profile Facebook user, the German Consumer Protection Minister Ilse Aigner, has already threatened to close down her Facebook profile in protest of Facebook’s new privacy policies. Users who feel this way about Facebook’s changes should vote with their mouse and click their way to greener pastures. Companies respond to market forces and consumer demands, and if enough users object to Facebook’s privacy policy, these individuals should be able to find a start-up willing to provide a privacy-rich social networking experience.

Even Facebook responds to public opinion and consumer pressure. In December, Facebook modified its privacy settings so that certain information, including friends list, gender, city, and profile photo, would be public. In response to complaints from some users, Facebook modified its interface to give users more control over the privacy of different types of information. Nor was this the first time that Facebook revised its policies in response to consumer behavior. In 2006, Facebook altered its policy regarding its “news feed” feature that updates users about their friends’ activities.

This is not to say that online privacy is not a topic worthy of government oversight and legislative action. As ITIF has argued, existing protections for individuals from laws such as the Electronic Communications Privacy Act (ECPA) are woefully outdated and in need of reform. Citizens should have a right to privacy for their electronic data and safeguards should be the same regardless of whether data is stored at home on a PC or remotely in the cloud.

So the next time Facebook changes its privacy policy, let’s not act like it is a national emergency. Companies do things that some members of the public do not like all the time. When Coca-Cola introduced New Coke, we did not need the U.S. Senate to step in to right this wrong, and neither do consumers need government to police every feature or policy tweak that websites make.

ITIF Calls for Updates to Privacy Laws

The Information Technology and Innovation Foundation today called for updates to federal laws governing the privacy of electronic data and communications, bringing them into the 21st century to reflect changes in technology over the last decade.