Re-Writing History: The right to be forgotten

Credit to: http://www.indexoncensorship.org/wp-content/uploads/2014/07/shutterstock_RTBF_195176492.jpg

Scientific research suggests that the act of forgetting memories fosters a healthy state of mind. Forgetting may be more difficult to achieve in a world where internet companies collect and store a broad range of information about their users’ lives and daily activities. Is it fair for individuals to ask everyone else to forget information that they do not want remembered? On May 13, 2014, the Court of Justice of the European Union ordered Google to delete search results linking to a 1998 auction notice of a Spanish man’s repossessed home. Since the ruling went into effect, Google has received over 225,000 requests for the removal of links. This controversial ruling, labeled the “Right to be Forgotten,” puts into sharp focus the competing interests of global Internet companies and individual Internet users. The ruling also raises a debate between the personal appeal of purging the Internet of undesirable information and the danger of creating a system that allows for censorship and the re-writing of history.

The ruling by the Court of Justice has three major holdings. First, the European Union’s 1995 Data Protection Directive applies to search engines because they are controllers of personal data. Second, even though Google Spain’s data-processing servers are located in the United States, the Court of Justice can apply European Union rules to Google Spain because it is located in a European Union Member State and sells advertising space within that jurisdiction. Third, and most importantly, individuals have the right, under certain circumstances, to request that search engines remove links containing “inadequate, irrelevant or no longer relevant” personal information about them.

The European Union is the most aggressive jurisdiction when it comes to protecting personal privacy rights, and the “Right to be Forgotten” rule maintains Europe’s position as the champion of personal privacy. Other countries with more balanced privacy regulations are considering whether Internet forgetfulness could benefit their citizens. A Japanese man brought a case in a Tokyo court after Google did not comply with a request to remove information relating to him from search results. The Hong Kong Court of Appeals will hear a petition from Google on the “Right to be Forgotten” in early 2015. Privacy organizations in Asia are strongly advocating for the “Right to be Forgotten” to apply in Asian countries. Critics warn that establishing such a rule could undermine corporate and political transparency in a region with a history of powerful people who manipulate information flows.

In the United States, the debate over the “Right to be Forgotten” rule has vocal advocates on both sides. Critics say that the rule is vague, prone to abuse, and amounts to censorship in violation of the First Amendment. On the other hand, eighty-eight percent (88%) of American citizens in a recent survey said that they would support a “Right to be Forgotten” rule. Yet when opposing experts debated the question before an American audience at an Intelligence Squared event, fifty-two percent (52%) of the crowd voted against a “Right to be Forgotten” law. As other countries ponder the merits of the rule, the European Union is pushing for it to apply worldwide, not just on the European versions of websites. A worldwide imposition of European privacy standards could result in the rest of the world losing the “Right to Remember.”

The ability of information technologies to collect and store endless amounts of individuals’ personal information raises legitimate concerns regarding surveillance and personal privacy. The “Right to be Forgotten” carries a powerful emotional appeal for many people who wish to leave their past behind. Yet even though forgetfulness may have its benefits, our memories of the past have a great deal to do with what we can learn in the future. When individuals request that Google “forget” information undesirable to them, they re-write the collective story we share as a society. The processes the brain uses to facilitate information recall demonstrate the appropriate way to handle past information. Forgetting is not as easy as flipping a switch; ask anyone who has tried to forget an embarrassing moment from their youth. Instead, forgetting has more to do with the brain’s ability to accumulate enormous amounts of fresh information that crowds out old memories. In a world where every moment is stored forever, the brain teaches us that forgetting may be easier with more information, not less.

Matthew Aeschbacher is a 4LE law student at the University of Denver Sturm College of Law and a staff editor for the Denver Journal of International Law & Policy.

Critical Analysis: Determining the Boundaries of the Internet

Cloud Computing and Internet Surveillance

Since the rise of the internet, lawmakers and courts have struggled to create legal rules for a computer network that disregards geographical boundaries. Issues concerning internet governance have only grown more complex with the recent trend toward cloud computing and revelations of internet surveillance by government agencies. U.S. companies host massive amounts of data from customers around the world, with much of that information stored overseas. These same U.S. companies have come under fire for giving U.S. government agencies access to customer data. Many countries responded to these revelations by enacting legislation designed to protect the privacy of their citizens’ data. Now we are left with a segmented, country-by-country approach to governing an internet that has no borders. The lack of a unified international framework for data protection has made it impossible for global internet companies to comply with all of the contradictory demands of their various stakeholders.

Image Source: wonderfulengineering.com

Microsoft Refuses to Give Foreign Hosted Data to U.S. Authorities

A court decision determining the circumstances under which U.S. law enforcement agencies may obtain digital information stored outside the U.S. has become the most recent example of the difficulty in reconciling the notion of sovereignty with a globally distributed network. During the summer of 2014, a United States court ordered Microsoft to produce the content of email data stored on servers in Dublin, Ireland. Microsoft complied with the warrant to the extent of producing the metadata of the email stored on U.S. servers but has refused to turn over the foreign-hosted content. Microsoft claims that U.S. courts do not have the power to issue warrants for extraterritorial search and seizure. In the court’s view, extraterritoriality does not apply to warrants issued pursuant to the Stored Communications Act (SCA) because the information is within the control of Microsoft.

Stored Communications Act (SCA)

Part of the purpose of the SCA was to address the difficulty in applying Fourth Amendment protections to information communicated and stored electronically. The court reasons that a section 2703(a) SCA warrant operates as a hybrid between a subpoena and a warrant. With a subpoena, the test for compulsory production of information is whether the information is in the possession, custody, or control of the subpoena recipient. Extraterritoriality does not apply because, like a subpoena, an SCA warrant does not involve government agents entering the premises of the ISP to search its servers and seize information. One problem with allowing the SCA warrant hybrid to defy jurisdictional boundaries is that it creates a situation where Microsoft cannot comply with both the order and the laws of the host country simultaneously.

The Business of International Internet Companies

Microsoft, with the support of several other tech giants (including AT&T, Apple, Cisco, and Verizon, among others), claims that this court order could set a precedent that might encourage Europeans to avoid using Microsoft products out of a fear that expansive U.S. discovery rules could expose all of their information. To maintain its European customers and avoid possible liability abroad, Microsoft has a very strong incentive to push back against this order. Microsoft has argued that compliance with this order could decimate the U.S. cloud computing industry, which would cost both jobs and massive tax revenue. To protect its growing business in countries outside the U.S., Microsoft is urging the U.S. government to abide by its mutual legal assistance treaties, or MLATs. This approach would allow for more cooperation between the requesting and host countries, ensuring that the local laws of the host country are not disregarded in the process of acquiring the requested information.

Image Source: techpolicydaily.com

The Cloud Computing Industry Fights Back

While this case has played out in the court system, members of the United States Congress have been working to find an appropriate solution to the issues presented by U.S.-based companies hosting data abroad. On September 18, 2014, a bipartisan group of senators introduced the Law Enforcement Access to Data Stored Abroad Act, or LEADS Act. The LEADS Act would implement the warrant-for-content rule, meaning that the account of a U.S. citizen held overseas would be accessible to law enforcement only with a judicial warrant. The goal of the bill is to balance the needs of U.S. law enforcement with consumer privacy rights. Microsoft is supportive of the new bill as a way to continue the conversation over the control of data, but was adamant that it would not be the conversation’s conclusion.

Matthew Aeschbacher is a 4LE law student at the University of Denver Sturm College of Law and a staff editor for the Denver Journal of International Law & Policy.



One Size Won’t Fit All: Multinational Corporations’ Compliance with Privacy Regulations (Part 2 of 3)

Part 2: Privacy Approaches Applied

This is the second post in a three-part series examining privacy issues confronting multinational corporations in a global economy. The first post explored privacy generally by analyzing the concept as it is understood and applied in the European Union, in China, and in the United States. This post assesses the experiences of Google and McDonald’s in attempting to comply with the three privacy regimes described in the first post while operating at a global level. The third post will provide recommendations on privacy strategies companies can implement to mitigate some of the issues identified in the second post. These posts do not attempt to provide an exhaustive list of privacy issues multinational corporations encounter, but they are intended to show the importance of privacy concerns and to highlight the need to confront compliance issues in a proactive manner.



“[I]n times of globalized business operations, a company’s business strategy in one market might affect the standard against which the company is measured in other markets and jurisdictions.”[1]

As the first post in this series discussed, privacy regimes vary according to geography, societal values, and historical contexts. Companies operating in multiple jurisdictions have to function within these varied privacy regimes, and it is not always a simple task. As the following case illustrations demonstrate, compliance with one privacy scheme raises the possibility of violating the privacy regulations of another jurisdiction. The first case illustration depicts Google’s troubles in Italy following its activities in China. The second explores McDonald’s struggles in complying with mandatory whistleblowing requirements in the U.S. that were in violation of E.U. privacy laws.

Unexpected Consequences: Google in Italy and China

The Setting

Google execs were convicted for sharing of information related to a video (Bloomberg)

On February 24, 2010, three Google executives were found guilty of violating the privacy of a child. The controversy started in 2006, when a video was uploaded to a site owned by Google featuring a group of teenagers insulting and assaulting an autistic boy, specifically calling the boy a “mongoloid.”[2] After it was uploaded, the video became popular enough to be ranked “the funniest video on Google Italia” and “rated 29th of the most downloaded videos on Google Italia.”[3] Although Google removed the video within hours of being notified that it infringed on the victim’s privacy, the damage was already done.

During the trial, the Google executives were charged with, among other things, violating the victim’s privacy rights, though they were found guilty only of the privacy charge. At the heart of the ruling was Google’s AdWords program, which placed advertising on the side of the screen when users watched videos on the Google-operated site. The court found that the video contained personal information based on the use of the word “mongoloid.” According to Directive 95/46/EC of the European Parliament and of the Council, discussed in Part 1, personal information is prohibited from being shared without the subject’s unambiguous consent. Because Google permitted the content of the video to be shared and derived a profit from sharing such information in the form of revenue generated from the AdWords program, the court determined that Google had violated the victim’s right to privacy.

The Google executives unsuccessfully argued that they fell under an exemption from personal liability found in Directive 95/46/EC of the European Parliament and of the Council. Paragraph 47 of Directive 95/46/EC excuses liability for those who merely serve as a vehicle to transmit personal data, as opposed to those providers who actually control the transmission of personal data.[4] The court dismissed this argument, finding that Google has increasingly taken a more active approach in the services it provides.[5] The court relied primarily on the fact that Google’s revenue from its AdWords program is proportionate to the popularity of a given video. Because the video was popular and because Google had the potential of deriving greater profits based on that popularity, the court reasoned that Google obtained profit, through its AdWords program, at the expense of a violation of the victim’s privacy rights.[6] Google’s active approach to providing services, rather than simply its role as a passive vehicle for the transmission of data, is evidenced by its activities in China, on which the prosecution rested its case.[7]

Google as a Content Provider

When Google launched its services in China in 2005, the company modified its search algorithm to exclude controversial topics, such as information relating to Tiananmen Square or the Falun Gong movement. The main draw of the Chinese internet market is its colossal size; the population of internet users in China was estimated at 384 million in 2010, more than the entire population of the United States at the time. In order to tap into such a massive market, Google had to comply with China’s internet censorship protocol, known colloquially as “the Great Firewall.” The Great Firewall is but a part of the Chinese government’s attempts to censor information domestically and abroad, and tens of thousands of Chinese workers are employed to ensure that sensitive information is restricted from general access. In order to adhere to China’s censorship regime, search engines in China, like Google, are prevented from linking to sensitive information. In 2010, Google moved its Chinese operations to Hong Kong, which allowed Google to stop its self-censorship, though the content accessed through Google’s services was still filtered in mainland China. The move to Hong Kong was seen as a partial retreat from Google’s practice of filtering the content it provided. Thereafter, Google actively sought to promote freedom of information on the internet by informing the Chinese population that they would likely experience short breaks in their connection when searching for prohibited content, although this practice was quietly abandoned in January 2013.

Although there was general disagreement with Google’s censorship policy in China, resulting in claims that Google’s modifications in China contradicted its core value of “don’t be evil,”[8] the decision to restrict user access to the content Google provided also had another, more insidious component: it pushed Google’s activities from a “mere conduit of information” toward becoming a “full-fledged media company.”[9] Google has a long-standing tradition of insisting that it “is not a media company, that its [sic] organizes and manages content, but stays away from producing it.” This mantra is being tested, however, as Google expands into offering more services and products. “[I]t may be time to retire the trope,” says a Forbes article, indicating that any argument over Google’s media company status is now moot. Google’s image as a passive conduit for unfiltered media was questioned when it attempted to buy a social-networking site, launched a magazine, and operated a recipe-sharing site. However, it was Google’s censorship activities in China that raised serious questions for the Italian court about Google’s passive role in the provision of internet content.

The Court Decision and the Aftermath

David Thorne, the American ambassador to Italy at the time of the 2010 case against the Google executives, stated in response to the Italian court’s decision that he disagreed with the idea that “Internet service providers are responsible prior to posting for the content uploaded by users . . .” During the case, Google argued in its defense that its and other search engines’ activities would be significantly impacted if an internet company could be liable for the content uploaded by third parties. The winning argument for the prosecutors took a contrary view: if Google was able to filter the content it provided in China, it could do the same in Italy to “protect human dignity.” Alfredo Robledo, a prosecutor in the case against Google, stated that the case was not about the freedom of the internet, but rather human dignity; “[t]he rights of a business enterprise cannot take precedence over the dignity of the individual.”

The Italian court’s decision finding the Google executives guilty was overturned in December 2012. The initial guilty verdict had raised concerns about internet freedom in Italy. Under E.U. law, internet service companies that merely serve as a conduit for information are exempt from liability for the content uploaded by third parties.[10] Under the lower court’s decision, this exemption from liability would be significantly narrowed to those few internet service companies who do absolutely nothing more than provide access to information. The appeals court rejected the narrow reading of the hosting exemption and instead adopted a position imposing liability only for companies that “host user-generated content” and fail to act once illegal content had been uploaded to the provider’s site. In the Google case, this meant that the executives would only be liable if they failed to remove the video despite having received notice that it violated the victim’s privacy rights. Because Google removed the offensive video within hours of receiving notice of a violation of the victim’s privacy, the appeals court reasoned that Google was not liable. The reasoning of the appeals court was upheld by Italy’s highest court in December 2013.

Clash of Regulatory Schemes: McDonald’s in France

The Setting

Le McDonald’s (Alamy)

In January 2005, McDonald’s France, the French division of McDonald’s global operations, sought an opinion from France’s privacy regulatory body, the Commission Nationale De L’informatique et des Libertés (“CNIL”), regarding the creation of a system of “professional integrity.”[11] The professional integrity plan would have permitted McDonald’s France employees to report any misconduct anonymously. Any reported misconduct, including questionable accounting practices and internal controls over accounting or auditing methods, would have been processed in the U.S. and reported to the general counsel of McDonald’s France. McDonald’s France requested the opinion for its professional integrity plan at the behest of its U.S. parent corporation in an attempt to comply with provisions of the Sarbanes-Oxley Act (“SOX”). Although McDonald’s France requested the opinion before it had actually implemented its proposed professional integrity plan, the CNIL refused to authorize any such “whistleblower” hotline. The CNIL’s decision to reject McDonald’s France’s proposal made it impossible for its U.S. parent corporation to comply with its obligations under SOX.


To truly understand the obstacles McDonald’s France was facing, it is important to explore SOX in more depth. Following the Enron and WorldCom scandals, Congress enacted SOX in order to improve the accuracy and reliability of corporate disclosures. Among the many provisions SOX introduced, of particular importance to McDonald’s was the requirement that companies create and apply procedures for the confidential, anonymous reporting of questionable accounting or auditing controls.[12] Further, SOX mandates that employees reporting on such practices must be protected from retaliation for their disclosure activities.[13] That these requirements apply to U.S. companies is apparent, but it is far less certain whether they apply extraterritorially as well.[14] Because of this uncertainty, many multinational corporations, such as McDonald’s, determined that it would be prudent to act as if SOX applied to all of their operations, including subsidiary operations in foreign jurisdictions.[15] Therefore, McDonald’s France’s professional integrity plan, calling for anonymous reporting of confidential information regarding misconduct, is best understood in the context of an American parent corporation, McDonald’s in the U.S., attempting to comply with the SOX requirements in every geographic region of its operations.

French Agency’s Determinations

The CNIL review of McDonald’s France’s proposed professional integrity plan found that the plan involved the collection of personal information and that McDonald’s France was a “controller” of personal data. According to Article 2(d) of the E.U. Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data and French law implementing the Directive, controllers of personal data are permitted to “collect and process personal data in order to satisfy legal obligations to which they are subject.”[16] Because McDonald’s France employees participated in the professional integrity plan, the CNIL determined that McDonald’s France was a controller of personal data and that the CNIL had authority to make findings on whether the professional integrity plan complied with the law.[17]

The CNIL ultimately concluded that McDonald’s France’s professional integrity plan involved violations of the law. Of primary concern to the CNIL was that individuals alleged to have participated in misconduct, as disclosed by whistleblowers, would be unable to “hear or reply to the accusations made against them.” The CNIL determined that the policy behind French data protection laws, and E.U. law by extension, is to ensure that citizens know who possesses their personal information, are informed about who has access to that information, and can take remedial measures to correct any false information. Because anonymous and confidential reporting of personal information would not allow for the requisite transparency in regard to personal information, the CNIL determined that the professional integrity plan “could lead to an organized system of professional denunciation.”

The CNIL also determined that McDonald’s France’s system was disproportionate to the objectives it sought to accomplish. Noting that “other legal means exist to guaranty [sic] compliance with legal provisions and company rules,” the CNIL found that the risk of professional denunciation and the “stigmatization of employees” was greater than the need for the professional integrity plan’s reporting system.[18] Although the CNIL was aware of the obligations imposed by the SOX provisions when it denied McDonald’s France’s application for permission to implement the professional integrity plan, the decision did nothing to ameliorate McDonald’s conundrum of seeking to comply with SOX and French privacy laws.

Aftermath of Determination

After the McDonald’s France ruling, the CNIL attempted to provide some guidance on how to comply with both the SOX whistleblowing requirements and French privacy laws. In November 2005, the CNIL indicated that whistleblowing procedures may be implemented, but only as long as they are voluntary and supplement other means of communication within a corporation. Further, the November 2005 guideline document stated that “a whistleblowing system may only be considered as legitimate if it is necessary to comply with a legal obligation.” Because the November 2005 guidance document was limited and left important issues unresolved, the CNIL released a whistleblowing directive in December 2005. The directive explains that whistleblowing procedures are permissible so long as they strictly comply with the directive’s requirements. Among the many items addressed in the December directive, one important requirement is that whistleblowers are obligated to identify themselves, and that this identification remains confidential.[19] The directive also allows for two instances where a whistleblower may remain anonymous: when precautions are properly taken in processing the information and when the company does not promote anonymous whistleblowing.[20] Although the December 2005 directive obviates some of the confusion surrounding compliance with French laws while still adhering to the SOX requirements, McDonald’s France must still ensure that any SOX-compliant whistleblower procedure it adopts is similarly compliant with French privacy regulations.


Both Google’s and McDonald’s experiences illustrate the complications that arise when operating in a global marketplace. In Google’s case, its actions in China had a direct impact on the liability it faced in Italy for privacy issues entirely unrelated to its operations in China. In McDonald’s case, its attempts to comply with U.S. regulations resulted in a direct conflict with the privacy regulations of France. Although both of these conflicts have been ameliorated to a certain extent (Google’s executives were relieved from liability by Italy’s highest court, and McDonald’s is better able to comply with French privacy regulations due in large part to clarifications of the law), these examples illustrate the complexities inherent in operating in multiple jurisdictions with many varied, sometimes even competing, privacy regulations. This dilemma, encountered by every multinational corporation, must be addressed, and the final installment in this series will offer potential methods for addressing privacy issues in an effective manner.


Greg Henning is a 3L at the University of Denver Sturm College of Law and a General Editor for the View From Above.

[1] David Scheffer & Caroline Kaeb, The Five Levels of CSR Compliance: The Resiliency of Corporate Liability Under the Alien Tort Statute and the Case for a Counterattack Strategy in Compliance, 29 Berkeley J. Int’l L. 334, 394 (2011).

[2] See Raul Mendez, Google Case in Italy, Int’l Data Privacy L., Feb. 25, 2011, http://idpl.oxfordjournals.org/content/early/2011/02/25/idpl.ipr003.full#xref-fn-1-1.

[3] Id.

[4] See Council Directive 95/46/EC, ¶ 47, 1995 O.J. (L 281) 31, 36.

[5] See Mendez, supra note 2.

[6] See id.

[7] See Scheffer & Kaeb, supra note 1.

[8] Google has limited its activities in China but still complies with Chinese authorities in restricting content. See Mic Wright, Google Shows China the White Flag of Surrender, The Telegraph (Jan. 7, 2013),  http://blogs.telegraph.co.uk/technology/micwright/100008624/google-shows-china-the-white-flag-of-surrender/

[9] Scheffer & Kaeb, supra note 1.

[10] See Council Directive 95/46/EC, ¶ 47, 1995 O.J. (L281) 32, 36.

[11] Marisa Anne Pagnattaro & Ellen R. Peirce, Between a Rock and a Hard Place: The Conflict Between U.S. Corporate Codes of Conduct and European Privacy and Work Laws, 28 Berkeley J. Emp. & Lab. L. 375, 411 (2007).

[12] See 15 U.S.C. § 78j-1(m)(4)(B) (2010).

[13] See 18 U.S.C. § 1514A (2010).

[14] See Donald C. Dowling, Jr, Sarbanes-Oxley Whistleblower Hotlines Across Europe: Directions Through the Maze, 42 Int’l Law. 1, 7 (2008) (“But our SOX hotline question here is international: Whether SOX’s mandate of “confidential, anonymous” employee reporting “procedures” extends as well to “employees” of SOX-regulated companies (and their subsidiaries) who work and live abroad.”).

[15] See id. (“But contrary to the widespread assumption of countless U.S.-based multinationals examining this issue, a viable argument exists that the Section 301 “complaint procedure” mandate is confined to “employee” populations working on U.S. soil.”).

[16] See Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data art. 2(d), Jan. 28, 1981, E.T.S. 108.

[17] See Pagnattaro & Peirce, supra note 11, at 412.

[18] See id. at 413 (“In other words, the harm that could be caused by a slanderous accusation–to which the employee may not be able to adequately respond–was too great a burden and outweighed the justifications for the hotlines.”).

[19] See id. at 421.

[20] Id.

One Size Won’t Fit All: Multinational Corporations’ Compliance with Privacy Regulations (Part 1 of 3)

Part 1: What Does “Privacy” Mean?

This is the first post in a three part series examining the issues multinational corporations face in complying with privacy regulations in the U.S. and abroad. This post will explore privacy generally by analyzing privacy as the concept is understood and applied in the European Union, in China, and in the United States. The second post will review two case studies to introduce specific issues multinational corporations have run into in attempting to comply with the three privacy regimes described in the first post. The third post will provide recommendations on privacy strategies companies can implement to mitigate some of the issues identified in the second post. These posts do not attempt to provide an exhaustive list of privacy issues multinational corporations encounter, but they are intended to show the importance of privacy concerns and to highlight the need to confront compliance issues in a proactive manner.



“Privacy is a value so complex, so entangled in competing and contradictory dimensions, so engorged with various and distinct meanings, that I sometimes despair whether it can be usefully addressed at all.” – Robert C. Post

The amount of personal data available via the internet is astounding, and that data is valuable. Stores are eager to employ “predictive analytics” in order to understand “not just consumers’ shopping habits but also their personal habits, so as to more efficiently market to them.” The more information a store can obtain about an individual, the easier it is to send that individual advertisements geared specifically to his or her needs. For instance, it is now possible, based on consumer purchasing habits, to track an individual’s pattern of purchases and predict when that individual is experiencing a major life change. Once an individual’s purchasing patterns change, the company can respond with advertising targeted to the changed circumstances. Another example is how GPS information in your car has the potential to be shared with businesses to provide targeted advertisements for nearby restaurants.


Louis Brandeis, circa 1890, was one of the first scholars to attempt to define the principle of privacy

Although businesses are eager to use information on consumer habits, many people view this kind of information gathering and dissemination as an invasion of privacy. Unsurprisingly, legislators in the U.S. have sought to introduce laws curtailing the ability to collect consumer information without the consumer’s permission. But laws aimed at protecting consumer information must first answer a fundamental question: what exactly is “privacy”? Although it is beyond the scope of these posts to provide an exhaustive list of the ways in which scholars have defined “privacy,” it is important to understand the context in which debates over privacy occur in order to better understand the conflicts multinational corporations face in complying with differing privacy regimes.

Definitions of “Privacy”

One of the earliest and most influential attempts to define privacy in the U.S. was The Right to Privacy, authored by Samuel Warren and Louis Brandeis. Published in 1890, The Right to Privacy attempted to discern whether the law recognized a “principle which can properly be invoked to protect the privacy of the individual . . .”[1] The article broadly defined privacy to include those things which “concern the private life, habits, acts, and relations of an individual,” those things which do not concern an individual’s fitness for a public office, and those things which do not concern an individual’s acts performed in a public place.[2] Privacy was defined in terms of a right, the “right to be left alone.”[3]

The definition of privacy has greatly expanded since The Right to Privacy was first published. One scholar has recently claimed that “[c]urrently, privacy is a sweeping concept, encompassing (among other things) freedom of thought, control over one’s body, solitude in one’s home, control over information about oneself, freedom from surveillance, protection of one’s reputation, and protection from searches and interrogations.”[4] Other interests identified as falling under the privacy umbrella include the protection of consumer data, credit reporting, workplace privacy, discovery in civil litigation, the dissemination of personal images, and the shielding of criminal offenders from public exposure.[5]

Privacy is so broad because “[c]onceptualizing privacy not only involves defining privacy but articulating the value of privacy. The value of privacy concerns its importance – how privacy is to be weighed relative to other interests and values.”[6] The balancing of competing interests contemplated by the term “privacy” will depend on the cultural and historical context in which those interests are examined.[7] For example, a right to privacy for most Americans would include the right to choose the names of their children without any interference. In contrast, French and German courts may permissibly determine that a name given to a newborn is contrary to the child’s best interests.[8] Similarly, Americans cleave tightly to the notion that a “broadly defined freedom of the press assures the maintenance of [America’s] political system and an open society.”[9] In China, by contrast, the notion of an independent press is absent; the majority of “print media, broadcast media, and book publishers were affiliated with the [Chinese Communist Party] or a government agency.”[10] Whether privacy means ensuring parents’ ability to name their own children or the right to an independent press, how privacy is defined is largely dependent on cultural influences.

Same Principle, Different Approaches: Privacy in the E.U., China, and the U.S.

The European Union

Privacy laws in Europe have been shaped by the continent’s social and political history. According to James Whitman, a professor of comparative and foreign law at Yale University, the European privacy regime is a direct product of the hierarchical structure of society endemic to Europe’s past.[11] Whitman argues that Europe’s privacy laws are a “form of protection of a right to respect and personal dignity,” focusing on the “rights to one’s image, name, and reputation . . . [and] the right to informational self-determination–the right to control the sorts of information disclosed about oneself.”[12]

The E.U.’s basic regime for protecting privacy rights is found in the European Convention for the Protection of Human Rights and Fundamental Freedoms (“E.U. Convention”) of 1953. Article 8 of the E.U. Convention provides that “[e]veryone has the right to respect for his private and family life, his home and his correspondence.” The Article further states that:

There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.

European privacy rights were expanded by the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data (“E.U. Data Convention”). Mindful “that it is desirable to extend the safeguards for everyone’s rights and fundamental freedoms, and in particular the right to the respect for privacy,” the E.U. Data Convention sought to ensure that every individual was afforded “respect for his rights and fundamental freedoms, and in particular his right to privacy, with regard to automatic processing of personal data relating to him.”


E.U. Commissioner Viviane Reding, circa 2012, defends a bill meant to improve data protection (Reuters)

Privacy in the E.U. is further protected as a result of the adoption of Directive 95/46/EC of the European Parliament and of the Council (“E.U. Directive”). The E.U. Directive creates a legal floor for the minimum amount of privacy protection member states must afford to their citizens,[13] and it specifically limits processing of personal data.[14] Significantly, the E.U. Directive allows member states to craft laws penalizing parties for non-compliance with its provisions[15] and laws ensuring that processing personal information is only permissible after the subject “unambiguously” gives his or her consent.[16]

One final piece of E.U. privacy legislation relevant to this discussion is the Charter of Fundamental Rights of the E.U. (“E.U. Charter”). The E.U. Charter expressly protects personal data by stating that every person has the right to protect their personal data, to access the data that has been collected about them, and to be afforded the opportunity to rectify any incorrect information.[17] The E.U. Charter further states that any personal data “must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law.”

These four pieces of legislation form the basis of privacy rights in the E.U. They affirm an individual’s right to privacy, which in turn provides a right to “respect and dignity” concerning what personal information is disclosed, the method whereby that information is disclosed, and the ability to control personal information. Multinational corporations operating in the E.U. must be cognizant of the E.U.’s omnibus approach to privacy, which incorporates laws “in which the government has defined requirements throughout the economy including public-sector, private-sector and health-sector.”

The United States

Just as the development of privacy law in Europe was governed by Europe’s historical social context, so too has America’s privacy law been shaped by its unique social history. Conceived in the context of overthrowing the monarchical control Britain held over its colonies, it is no surprise that privacy in the U.S. is rooted in a deep mistrust of the government.[18] The primary privacy concern of Americans might therefore be generalized as protection of the sanctity of the private home against government interference.[19] Because this privacy concern is narrowly defined, U.S. approaches to privacy have focused on specific remedial efforts rather than comprehensive action.[20]

In contrast to the omnibus approach of the E.U. toward privacy protection, the U.S. has adopted a sectoral approach to privacy regulation. The sectoral approach places significance on industry self-regulation while relying on case law and highly specific legislation to protect particular aspects of privacy law.[21] For example, U.S. Supreme Court cases have recognized a right to privacy regarding family planning[22] and intimacy[23] as “penumbras” emanating from the Bill of Rights, despite the lack of an enumerated right of privacy.[24] Industry self-regulation must give way, however, when Congress perceives a failure on the part of industry to adequately protect privacy. Although there are many examples of interest-specific protections, such as the Health Insurance Portability and Accountability Act, one example of specific legislation with particular importance to these posts is the Sarbanes-Oxley Act (“SOX”).


President Bush signs the Sarbanes-Oxley Act in 2002 (Ketan Rathod)

Although SOX amended many federal statutes, of primary concern here is the Whistleblower Protection for Employees of Publicly Traded Companies provision.[25] The whistleblower provision gives employees a cause of action against employer retaliation for the employee’s disclosure of the employer’s illegal conduct.[26] Further, SOX amended the Securities Exchange Act of 1934 to require procedures for receiving whistleblower complaints and for ensuring that whistleblowers are able to communicate in a confidential, anonymous manner.[27]


China

China’s privacy policy, like those of the U.S. and the E.U., is the product of its past, but, like the E.U. and unlike the U.S., China has focused on omnibus regulations rather than a sectoral approach. To many, China appears to be an authoritarian state that closely monitors its citizens, effectively depriving them of any meaningful expectation of privacy. In fact, China has more than 200 laws or regulations referencing privacy in some manner,[28] but those privacy protections are viewed as “more aspirational than descriptive.”[29]

The Chinese Constitution provides citizens with privacy protections, guaranteeing the people “personal dignity,” the inviolability of the residence, the privacy of correspondence, and the ability to criticize the government. In the case of correspondence, the Constitution permits the suspension of private communication “to meet the needs of State security.” China’s General Civil Code also provides certain privacy protections, including the “right of portrait,” which prohibits the use of a person’s image without the owner’s permission. Despite the promise of these privacy rights, however, they are frequently violated.[30] As a condition of foreign companies operating in China, the Chinese government requires compliance with its monitoring activities.[31]


The interests protected under the term “privacy” vary between jurisdictions because of their unique historical and social contexts. The E.U.’s omnibus approach to privacy protection traces its inception to the need to protect human dignity, which is furthered only if people have access to and control over their personal information. In contrast, the sectoral approach adopted in the U.S. is the offspring of a mistrust of government intervention: the government should not be permitted to intrude into citizens’ homes or into how companies operate, so long as companies are acting fairly. China, like the E.U., has adopted an omnibus privacy regulatory scheme, but the protections enumerated in its laws frequently conflict with the government’s censorship regime. Although derived from cultural and ideological differences, the differing interests protected by the various privacy regimes have practical consequences for companies operating in multiple jurisdictions. The next post in this three-part series will use two case examples to illustrate the issues companies face in operating in the global economy.


Greg Henning is a 3L at the University of Denver Sturm College of Law and a General Editor for the View From Above.

[1] Samuel D. Warren & Louis D. Brandeis, The Right to Privacy, 4 Harv. L. Rev. 193, 197 (1890).

[2] See id. at 216.

[3] See id. at 195 (internal citations omitted).

[4] Daniel J. Solove, Conceptualizing Privacy, 90 Cal. L. Rev. 1087, 1088 (2002).

[5] See James Q. Whitman, The Two Western cultures of Privacy: Dignity Versus Liberty, 113 Yale L.J. 1151, 1156 (2004) (referring to the types of interests European privacy laws seek to protect) (internal citations omitted).

[6] Daniel J. Solove, Understanding Privacy 42 (2008).

[7] See Helen Nissenbaum, Privacy as Contextual Integrity, 79 Wash. L. Rev. 119, 156 (2004) (“[N]orms of privacy in fact vary considerably from place to place, culture to culture, period to period . . ..”).

[8] See Whitman, supra note 5, at 1216.

[9] Time, Inc. v. Hill, 385 U.S. 374, 389 (1967).

[10] Country Reports on Human Rights Practices for 2012: China (Includes Tibet, Hong Kong, and Macau), U.S. Dept. of State (last visited Feb. 17, 2014), http://www.state.gov/j/drl/rls/hrrpt/humanrightsreport/index.htm?year=2012&dlid=204193.

[11] See Whitman, supra note 5, at 1165.

[12] Id. at 1161.

[13] See Council Directive 95/46/EC, art. 13 1995 O.J. (L 281) 31, 42.

[14] See id. arts. 6-9.

[15] Id. art. 23.

[16] Id. art. 7.

[17] Charter of Fundamental Rights of the European Union, art. 8, 2000 O.J. (C 364), 1, 10.

[18] See Whitman, supra note 5, at 1211.

[19] See id. at 1161-62.

[20] See Ryan Moshell, 373

[21] See Anna E. Shimanek, Do You Want Milk With Those Cookies?: Complying with the Safe Harbor Privacy Principles, 26 J. Corp. L. 455, 465-66 (2001).

[22] Griswold v. Connecticut, 381 U.S. 479 (1965).

[23] Lawrence v. Texas, 539 U.S. 558 (2003).

[24] See Griswold, 381 U.S. at 484.

[25] 18 U.S.C. § 1514A (2010).

[26] See id.

[27] See 15 U.S.C. § 78j-1 (2010).

[28] See Ann Bartow, Privacy Laws and Privacy Levers: Online Surveillance Versus Economic Development in The People’s Republic of China, 74 Ohio St. L.J. 853, 855 (2013).

[29] Id. at 856.

[30] See Country Reports on Human Rights Practices for 2012: China (Includes Tibet, Hong Kong, and Macau), U.S. Dept. of State (last visited Feb. 17, 2014), http://www.state.gov/j/drl/rls/hrrpt/humanrightsreport/index.htm?year=2012&dlid=204193.

[31] See David Scheffer & Caroline Kaeb, The Five Levels of CSR Compliance: The Resiliency of Corporate Liability Under the Alien Tort Statute and the Case for a Counterattack Strategy in Compliance, 29 Berkeley J. Int’l L. 334, 389-90 (2011).

Posted in DJILP Online, DJILP Staff, Featured Articles, Greg Henning


Critical Analysis: The ECJ Saves the Internet


ECJ decides that if the work is already freely available, providing access to the freely available work is not infringement.

Few areas of international intellectual property law are more controversial than the internet. Hosted on the internet are millions of pieces of intellectual property – articles, blogs, software, logos, and trade names, just to name a few. Balancing protection for authors against the aim of widespread information access has proven difficult, to say the least. Last Thursday, February 13, 2014, the European Court of Justice (“ECJ”) decided that clickable hyperlinks to a protected but freely available work on another website do not infringe copyright law.

Nils Svensson and Others v. Retriever Sverige began in the Swedish courts. Retriever Sverige is a Swedish company that operates a website providing hyperlinks to articles published on other websites. The plaintiffs were Swedish journalists who had press articles published on the website of Göteborgs-Posten, a Swedish newspaper. Retriever Sverige had linked to these articles without authorization from the authors. The Swedish Court of Appeals requested a preliminary ruling from the European Court of Justice on the interpretation of EU copyright law.

The ECJ first decided that a hyperlink is an “act of communication of a work to the public” as required by Art. 3(1) of Directive 2001/29/EC. However, the ECJ went on to say that to require the copyright holder’s authorization, the communication must be “directed at a new public.” A new public is one “that was not taken into account by the copyright holders when they authorized the initial communication to the public.” Simply adding a clickable link to the original article, which was provided freely, does not qualify as a communication to a “new” public. Therefore, if users could have accessed the works directly on the original site, the hyperlink does not infringe the journalists’ copyrights. The ECJ did clarify that if a hyperlink points not to a freely available website but to one that is protected, restricted, or only available to subscribers, that link would be directed at a new public and would therefore violate the copyright holders’ rights.

As the European Union is often instrumental in the proliferation of new copyright regulations, anyone who provides hyperlinks to websites (such as The View From Above) is currently breathing a sigh of relief. In reality, though, the ECJ’s opinion is fairly common sense and consistent with other copyright regulations. If the work is already freely available, providing access to the freely available work is not infringement. If you would usually have to pay for it, providing access is infringing. Any other decision would be inconsistent with established law – meaning it would create confusion and probably not be followed by other states – as well as impractical – meaning it would probably not be enforced.

While many quirks about the internet and intellectual property are still being worked through, this is one for which many site operators should be quite grateful. With luck, the hyperlink question will remain settled as the ECJ has resolved it; if not, Svensson serves as good precedent for other courts.

Samantha Peaslee is a 2L and the Managing Editor on the Denver Journal of International Law & Policy.

Posted in DJILP Online, DJILP Staff, Featured Articles, Samantha Peaslee


Critical Analysis: Data Breaches Signify Need for Unified Data Protection Laws

If you are reading this blog post then you have access to the internet, a network that you are currently sharing with 2.4 billion other people, some of whom may not have your best interests at heart. Many people use this network for daily activities, ranging from shopping to social networking. As internet users interact with the web they leave behind data that, if acquired by people with malicious intent, can leave them vulnerable to identity theft, credit card fraud, and embarrassment. While internet users can and should take precautions to avoid scams, interacting with the internet necessarily requires leaving personal information in the hands of others. This fact of the internet presents many challenging legal issues regarding the responsibilities of the parties that acquire personal data.

Privacy protections on the internet may need to be addressed on a global scale to establish cross-border data access rules. Image Source: shutterstock

Late last year, Target – a large American retail store with recently expanded operations in e-commerce – was hacked, compromising the credit card information and personal data of millions of customers. Within a month of Target’s hacking disclosure, Neiman Marcus announced that hackers exposed the customer payment card data collected by their systems. While data breaches seem to be occurring more frequently than ever, these particular incidents caught the attention of enough influential people to make this issue a political priority in the United States.

In early February, the U.S. Congress met twice to discuss whether the federal government needs to take action concerning the increasing prevalence of major data breaches. One of the main issues discussed during the hearings was the lack of a unified policy regarding companies’ responsibility to disclose data breaches to their customers. Currently, laws requiring disclosure exist in forty-six U.S. states, but differences in the law of each state leave companies with a complex and unclear view of how to handle data breaches. Staying true to its recent form, Congress has yet to take any legislative action with regard to the issues discussed during the hearings.

To avoid taking a US-centric view of the problems posed by internet information governance, I should note that many countries besides the US are acting quickly to legislate around issues concerning data breaches. In Russia, data collection is regulated under the Personal Data Law, adopted on July 27, 2006. This body of law requires e-commerce companies to obtain written consent before they can collect certain private personal information and also requires these companies to take appropriate technical measures to protect their customers’ data. Europe identified the advantages of a unified data protection scheme as early as 1981, with the Council of Europe’s Data Protection Convention, and the European Union carried that approach forward with its 1995 Data Protection Directive. In 2012 the European Union announced its intent to remain at the forefront of data protection when it proposed a major reform, currently pending, to the data protection legislation in place.

If the increasing frequency of data breaches is any indication, the time for a more comprehensive and global legal framework for data protection is approaching rapidly. At the World Economic Forum in early 2014, Brad Smith, Microsoft’s chief legal officer, called for an international convention to establish cross-border data-access rules. Many challenges to an international legal framework for data protection remain, including the many separate legal issues with varied stakeholders, the technical complexity and continuous innovation of the internet, and the difficulty of international agreement. Despite these challenges, the internet is a global system which at some point will require international legal solutions.

Matthew Aeschbacher is a 3LE law student at the University of Denver Sturm College of Law and a staff editor for the Denver Journal of International Law & Policy.

Posted in DJILP Online, DJILP Staff, Featured Articles, Matthew Aeschbacher


Critical Analysis: The Internet: The Land of the Free?

“The Great Firewall of China” is well-recognized around the world as referring to China’s closed-internet policy.  Edward Snowden’s leaks advertised to the world that privacy online in America is more of a myth than an actuality.  But perhaps all of this is just leading to the next stage of internet freedom – not actual freedom, but more transparency in how the internet is being monitored.

A woman sits in a cybercafé in Beijing. Photo: Dan Chung/The Guardian


Beijing News recently released a report that more than two million Chinese people are employed by the government and private companies to monitor web activity.  These “internet opinion analysts” are hired to search through opinions related to particular keywords, gather the opinions, and then compile reports on them.  However, a previous study indicates that these internet opinion analysts do more than just report on opinions.  This study of one specific site, Sina Weibo, discovered that the monitors will also delete posts that include particular keywords or that are posted by frequently censored users.  Some of the most commonly censored topics during this thirty-day study included “support Syrian rebels,” “judicial independence,” “one-child policy abuse,” and “human rights.”
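The two filtering behaviors the study describes – removing posts that contain particular keywords and removing posts from frequently censored users – can be sketched in a few lines of Python. This is purely an illustration of the mechanism, not a description of Sina Weibo’s actual systems; the keyword list, user names, and post structure below are invented for the example.

```python
# Hypothetical sketch of keyword- and user-based post filtering,
# as described in the Sina Weibo study. All data here is invented.

CENSORED_KEYWORDS = {"judicial independence", "human rights"}
FLAGGED_USERS = {"frequently_censored_user"}

def should_delete(post: dict) -> bool:
    """Flag a post for deletion if its author is on the flagged list
    or its text contains any censored keyword (case-insensitive)."""
    if post["author"] in FLAGGED_USERS:
        return True
    text = post["text"].lower()
    return any(keyword in text for keyword in CENSORED_KEYWORDS)

posts = [
    {"author": "user_a", "text": "Thoughts on judicial independence today"},
    {"author": "user_b", "text": "Photos from my vacation"},
    {"author": "frequently_censored_user", "text": "Hello world"},
]

# Only posts that trip neither rule survive the filter.
surviving = [p for p in posts if not should_delete(p)]
```

In this toy example, the first post is removed for a keyword match, the third for its author, and only the vacation post survives – which mirrors the study’s observation that both content and identity drive deletions.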

Censorship in China is based mainly on government laws. The Sina Weibo study, for example, observed that “[i]f Sina Weibo had insufficient controls, the government may take action against the company.  If their controls were too rigid, users might abandon them for one of their competitors.” China is not the only state that uses laws, regulations, and technology to regulate and monitor internet use by its citizens.  Iran uses filtering and slow connections to attempt to censor internet use.  India actually has laws against monitoring, but the government has apparently violated its own rule by monitoring the activities of almost 160 million Indian internet users.  And of course, the United States’ NSA monitors the internet activity of millions of Americans.

Perhaps, instead of using national laws to inhibit freedom on the internet through censoring or monitoring, as has apparently become the trend over the last three years, it is time to promote privacy.  While the UN’s International Telecommunication Union (ITU) recently attempted to negotiate a new treaty for states to sign, the treaty focused more on the rights of governments in telecommunications than on individual privacy rights.  If the UN is not helping to promote an international standard, it may be best for a state or group of states to design a Model Law for states to adopt to promote internet privacy.  If a Model Law existed and was shown to be effective for some states, other states that hold onto monitoring and censoring as necessary for security would see a viable – and more politically palatable – alternative.

Until then, China might at least be making strides in being more frank about how it is monitoring its citizens.  Although a long way from a lack of censorship, this could be a very important step towards more internet privacy – hopefully one that other states will be willing to adopt.

Samantha Peaslee is a 2L at Sturm College of Law and Managing Editor for the Denver Journal of International Law and Policy.

Posted in DJILP Online, Featured Articles, Samantha Peaslee
