16.3 Issues and Trends

Learning Objectives

  • Define information superhighway as it relates to the Internet.
  • Identify ways to evaluate the credibility of online sources.
  • Define net neutrality.
  • Describe some of the effects of the Internet and social media on traditional media.

By 1994, the promise of the “information superhighway” had become so potent that it was given its own summit on the University of California Los Angeles campus. The country was quickly realizing that the spread of the web could be harnessed for educational purposes; more than just the diversion of computer hobbyists, this new vision of the web would be a constant learning resource that anyone could use.

The Korean-American video art pioneer Nam June Paik takes credit for the term information superhighway, which he used in a 1974 study for the Rockefeller Foundation, long before the existence of Usenet. In 2001, he said, “If you create a highway, then people are going to invent cars. That’s dialectics. If you create electronic highways, something has to happen.”[1] Paik’s prediction proved to be startlingly prescient.

Al Gore’s use of the term in the House of Representatives (and later as vice president) had a slightly different meaning and context. To Gore, the promise of the Interstate Highway System during the Eisenhower era was that the government would work to allow communication across natural barriers, and that citizens could then utilize these channels to conduct business and communicate with one another. Gore saw the government as playing an essential role in maintaining the pathways of electronic communication. Allowing business interests to get involved would compromise what he saw as a necessarily neutral purpose; a freeway doesn’t judge or demand tolls—it is a public service—and neither should the Internet. During his 2000 presidential campaign, Gore was wrongly ridiculed for supposedly saying that he “invented the Internet,” but in reality his work in the House of Representatives played a crucial part in developing the infrastructure required for Internet access.

Figure 1: Former Vice President Al Gore. Although Al Gore did not invent the Internet, he did popularize the term information superhighway in an effort to build support for Internet infrastructure and neutrality.

However, a certain amount of money was necessary to get connected to the web. In this respect, AOL was like the Model T of the Internet—it put access to the information superhighway within reach of the average person. But despite the affordability of AOL and the services that succeeded it, certain demographics continued to go without access to the Internet, a problem known as the “digital divide,” which you will learn more about in this section.

From speed of transportation, to credibility of information (don’t trust the stranger at the roadside diner), to security of information (keep the car doors locked), to net neutrality (toll-free roads), to the possibility of piracy, the metaphor of the information superhighway has proved to be remarkably apt. All of these issues have played out in different ways, both positive and negative, and they continue to develop to this day.

Information Access Like Never Before

In December 2002, a survey by the Pew Internet & American Life Project found that 84 percent of Americans believed that they could find information on health care, government, news, or shopping on the Internet.[2] This belief in a decade-old system of interconnected web pages would be remarkable in itself, but considering that 37 percent of respondents were not even connected to the Internet, it becomes even more striking. In other words, of the Americans without Internet connections, 64 percent still believed that the Internet could be a source of information about these crucial topics. In addition, of those who expected to find such information, at least 70 percent succeeded; news and shopping were the most successful topics, and government was the least. This survey shows that most Americans believed that the Internet was indeed an effective source of information. Again, the role of the Internet in education was heralded as a new future, and technology was seen as leveling the playing field for all students.

Nowhere was this more apparent than in the Bush administration’s 2004 report, “Toward a New Golden Age in American Education: How the Internet, the Law and Today’s Students Are Revolutionizing Expectations.” By this time, the term digital divide was already widely used, and the goal of “bridging” it encompassed everything from putting computers in classrooms to giving personal computers to some high-need students to use at home.

The report stated that an “explosive growth” in sectors such as e-learning and virtual schools allowed each student “individual online instruction.”[3] More than just being able to find information online, people expected the Internet to provide virtually unlimited access to educational opportunities. To make this expectation a reality, one of the main investments that the report called for was increased broadband Internet access. As Nam June Paik predicted, stringing fiber optics around the world would allow for seamless video communication, a development that the Department of Education saw as integral to its vision of educating through technology. The report called for broadband access “24 hours a day, seven days a week, 365 days a year,” saying that it could “help teachers and students realize the full potential of this technology.”[4]

Rural Areas and Access to Information

One of the founding principles of many public library systems is to allow for free and open access to information. Historically, one of the major roadblocks to achieving this goal has been a simple one: location. Those living in rural areas or those with limited access to transportation simply could not get to a library. But with the spread of the Internet, the hope was that a global library would be created—an essential prospect for rural areas.

One of the most remarkable educational success stories in the Department of Education’s study is that of the Chugach School District in Alaska. In 1994, this district was the lowest performing in the state: over 50 percent staff turnover, the lowest standardized test scores, and only one student in 26 years graduating from college.[5] The school board instituted drastic measures, amounting to a complete overhaul of the system. They abolished grade levels, focusing instead on achievement, and by 2001 had increased Internet usage from 5 percent to 93 percent.

The Department of Education study emphasizes these numbers, and with good reason: The district’s standardized test percentile scores rose from the 20s to the 70s over a period of four years, in both math and language arts. Yet these advances were not exclusive to low-performing rural students. In Florida, the Florida Virtual School system allowed rural school districts to offer advanced-placement coursework. Students excelling in rural areas could now study topics that were previously limited to districts that could fill (and fund) an entire classroom. Just as the Interstate Highway System commercially connected the most remote rural communities to large cities, the Internet has brought rural areas even further into the global world, especially in regard to the sharing of information and knowledge.

The Cloud: Instant Updates, Instant Access

As technology has improved, it has become possible to provide software to users as a service that resides entirely online, rather than on a person’s personal computer. Since people can now be connected to the Internet constantly, they can use online programs to do all of their computing. It is no longer absolutely necessary to have, for example, a program like Microsoft Word to compose documents; this can be done through an online service like Google Docs or Zoho Writer.

“Cloud computing” is the process of outsourcing common computing tasks to a remote server. The actual work is not done by the computer attached to the user’s monitor, but by other (maybe many other) computers in the “cloud.” As a result, the computer itself does not actually need that much processing power; instead of calculating “1 + 1 = 2,” the user’s computer asks the cloud, “What does 1 + 1 equal?” and receives the answer. Meanwhile, the system resources that a computer would normally devote to completing these tasks are freed up to be used for other things. An additional advantage of cloud computing is that data can be stored in the cloud and retrieved from any computer, making a user’s files more conveniently portable and less vulnerable to hardware failures like a hard drive crash. Of course, it can require quite a bit of bandwidth to send these messages back and forth to a remote server in the cloud, and in the absence of a reliable, always-on Internet connection, the usefulness of these services can be somewhat limited.
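The division of labor is easy to sketch in code. The snippet below is a minimal illustration only: the endpoint, request format, and field names are invented for this chapter, but the pattern (send the question over the network, receive the answer) is the same one cloud services follow.

```python
import requests  # third-party HTTP library (pip install requests)

# Hypothetical endpoint invented for illustration; a real cloud service
# would publish its own API and require authentication.
CLOUD_URL = "https://example.com/compute"

def add_in_the_cloud(a, b):
    """Ask a remote server for a + b instead of computing it locally."""
    response = requests.post(CLOUD_URL, json={"operation": "add", "operands": [a, b]})
    response.raise_for_status()        # complain if the cloud is unreachable
    return response.json()["result"]   # the answer comes back over the network

print(add_in_the_cloud(1, 1))  # the local machine never performs the addition itself
```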

The concept of the cloud takes into account all the applications that are hosted on external machines and viewed on a user’s computer. Google Docs, which provides word processors, spreadsheets, and other tools, and Microsoft’s Hotmail, which provides e-mail access, both constitute aspects of the “cloud.” These services are becoming even more popular with the onset of mobile applications and netbooks, which are small laptops with relatively little processing power and storage space that rely on cloud computing. A netbook does not need the processing power required to run Microsoft Word; as long as it has a web browser, it can run the Google Docs word processor and leave (almost) all of the processing to the cloud. Because of this evolution of the Internet, computers can be built less like stand-alone machines and more like interfaces for interacting with the larger system in the cloud.

One result of cloud computing has been the rise in web applications for mobile devices, such as the iPhone, BlackBerry, and devices that use Google’s Android operating system. 3G networks, which are cell phone networks capable of high-speed data transfer, can augment the computing power of phones just by giving the phones the ability to send data somewhere else to be processed. For example, a Google Maps application does not actually calculate the shortest route between two places (taking into account how highways are quicker than side roads, and numerous other computational difficulties) but rather asks Google to do the calculation and send over the result. 3G networks have made this possible in large part because the speed of data transfer has now surpassed the speed of cell phones’ calculation abilities. As cellular transmission technology continues to improve with the rollout of next-generation 4G networks, connectivity speeds will further increase and allow for ever richer multimedia services.

Credibility Issues: (Dis)information Superhighway?

The Internet has undoubtedly been a boon for researchers and writers everywhere. Online services range from up-to-date news and media to vast archives of past writing and scholarship. However, since the Internet is open to any user, anyone with a few dollars can set up a credible-sounding website and begin to disseminate false information.

This is not necessarily a problem with the Internet specifically; any traditional medium can—knowingly or unknowingly—publish unreliable or outright false information. But the explosion of available sources on the Internet has caused a bit of a dilemma for information seekers. The difference is that much of the information on the Internet is not the work of professional authors, but of amateurs who have questionable expertise. On the Internet, anyone can self-publish, so the vetting that usually occurs in a traditional medium—for example, by a magazine’s editorial department—rarely happens online.

That said, if an author who is recognizable from elsewhere writes something online, it may point to more reliable information.[6] In addition, looking for a trusted name on the website could lead to more assurance of reliability. For example, the site krugmanonline.com, the official site of Princeton economist Paul Krugman, does not have any authorial data. Even statements like “Nobel Prize Winner and Op-Ed Columnist for the New York Times” do not actually say anything about the author of the website. Much of the content is aggregated from the web as well. However, the bottom-left corner of the page has the mark “© 2009 W. W. Norton & Company, Inc.” (Krugman’s publisher). Therefore, a visitor might decide to pick and choose which information to trust. The author is clearly concerned with selling Krugman’s books, so the glowing reviews may need to be verified elsewhere; on the other hand, the author biography is probably fairly accurate, since the publishing company has direct access to Krugman, and Krugman himself probably looked it over to make sure it was valid. Taking the authorship of a site into account is a necessary step when judging information; more than just hunting down untrue statements, it can give insight into subtle bias that may arise and point to further research that needs to be done.

Just Trust Me: Bias on the Web

One noticeable thing on Paul Krugman’s site is that all of his book reviews are positive. Although these are probably real reviews, they may not be representative of his critical reception at large. Mainstream journalistic sources usually attempt to achieve some sort of balance in their reporting; given reasonable access, they will interview people with opposing viewpoints and reserve judgment for the editorial page. Corporate sources, like those behind Krugman’s site, will instead tilt the information toward their product.

Often, the web is viewed as a source of entertainment, even in its informational capacity. Because of this, sites that rely on advertising may choose to publish something more inflammatory that will be linked to and forwarded more for its entertainment value than for its informational qualities.

On the other hand, a website might attempt to present itself as a credible source of information about a particular product or topic, with the end goal of selling something. A website that gives advice on how to protect against bedbugs that includes a direct link to its product may not be the best source of information on the topic. While so much on the web is free, it is worthwhile looking into how websites actually maintain their services. If a website is giving something away for free, the information might be biased, because it must be getting its money from somewhere. The online archive of Consumer Reports requires a subscription to access it. Ostensibly, this subscription revenue allows the service to exist as an impartial judge, serving the users rather than the advertisers.

Occasionally, corporations may set up “credible” fronts to disseminate information. Because sources may look reliable, it is always important to investigate further. Global warming is a contentious topic, and websites about the issue often reflect the bias of their owners. For example, the Cato Institute publishes columns skeptical of global warming in many newspapers, including well-respected ones such as the Washington Times. Patrick Basham, an adjunct scholar at the Cato Institute, published the article “Live Earth’s Inconvenient Truths” in the Washington Times on July 11, 2007. Basham writes, “Using normal scientific standards, there is no proof we are causing the Earth to warm, let alone that such warming will cause an environmental catastrophe.”[7]

However, the website ExxposeExxon.com states that the Cato Institute received $125,000 from the oil giant ExxonMobil, possibly tainting its data with bias.[8] In addition, ExxposeExxon.com is run as a side project of the international environmental nonprofit Greenpeace, which may have its own reasons for producing this particular report. The document available on Greenpeace’s site (a scanned version of Exxon’s printout) states that in 2006, the corporation gave $20,000 to the Cato Institute (the other $105,000 was given over the previous decade).[9]

This back-and-forth highlights the difficulty of finding credible information online, especially when money is at stake. In addition, it shows how conflicting sources may go to great lengths—sorting through a company’s corporate financial reports—in order to expose what they see as falsehoods. What is the upside to all of this required fact-checking and cross-examination? Before the Internet, this probably would have required multiple telephone calls and plenty of time waiting on hold. While the Internet has made false information more widely available, it has also made checking that information incredibly easy.

Wikipedia: The Internet’s Precocious Problem Child

Nowhere has this cross-examination and cross-listing of sources been more widespread than with Wikipedia. Information free and available to all? That sounds like a dream come true—a dream that Wikipedia founder Jimmy Wales was ready to pursue. Since the site began in 2001, the Wikimedia Foundation (which hosts all of the Wikipedia pages) has become the sixth-most-visited site on the web, barely behind eBay in terms of its unique page views.

Organizations had long been trying to develop factual content for the web, but Wikipedia went for something else: verifiability. The guidelines for editing Wikipedia state: “What counts is whether readers can verify that material added to Wikipedia has already been published by a reliable source, not whether editors think it is true.”[10] The benchmark for inclusion on Wikipedia includes outside citations for any content “likely to be challenged” and for “all quotations.”

While this may seem like it’s a step ahead of many other sources on the Internet, there is a catch: Anyone can edit Wikipedia. This has a positive and negative side—though anyone can vandalize the site, anyone can also fix it. In addition, calling a particularly contentious page to attention can result in one of the site’s administrators placing a warning at the top of the page stating that the information is not necessarily verified. Other warnings include notices on articles about living persons, which are given special attention, and articles that may violate Wikipedia’s neutrality policy. This neutrality policy is a way to mitigate the extreme views that may be posted on a page with open access, allowing the community to decide what constitutes a “significant” view that should be represented.[11]

As long as users do not take the facts on Wikipedia at face value and make sure to follow up on the relevant sources linked in the articles they read, the site is an extremely useful reference tool that gives users quick access to a wide range of subjects. However, articles on esoteric subjects can be especially prone to vandalism or poorly researched information. Since every reader is a potential editor, a lack of readers can lead to a poorly edited page because errors, whether deliberate or not, go uncorrected. In short, the lack of authorial credit can lead to problems with judging bias and relevance of information, so the same precautions must be taken with Wikipedia as with any other online source, primarily in checking references. The advantage of Wikipedia is its openness and freedom—if you find a problem, you can either fix it (with your own verifiable sources) or flag it on the message boards. Culturally, there has been a shift from valuing a few reliable sources to valuing a multiplicity of competing sources. However, weighing these sources against one another has become easier than ever before.

Security of Information on the Internet

As the Internet has grown in scope and the amount of personal information online has proliferated, securing this information has become a major issue. The Internet now houses everything from online banking systems to highly personal e-mail messages, and even though security is constantly improving, this information is not invulnerable.

An example of this vulnerability is the Climategate scandal in late 2009. A collection of private e-mail messages was hacked from a server at the University of East Anglia, where much of the Intergovernmental Panel on Climate Change research takes place. These e-mails showed internal debates among the scientists regarding which pieces of data should be released and which were not relevant (or helpful) to their case.[12] In these e-mails, the scientists sometimes talked about colleagues—especially those skeptical of climate change—in a derisive way. Of course, these e-mails were never meant to become public.

This scandal demonstrates how easy it can be to lose control of private information on the Internet. In previous decades, hard copies of these letters would have to be found, and the theft could probably be traced back to a specific culprit. With the Internet, it is much more difficult to tell who is doing the snooping, especially if it is done on a public network. The same protocols that allow for open access and communication also allow for possible exploitation. Like the Interstate Highway System, the Internet is impartial to its users. In other words: If you’re going to ride, lock your doors.

Hacking E-mail: From LOVE-LETTER-FOR-YOU to Google in China

Another explosive scandal involving e-mail account hacking occurred in late 2009, when Google’s Gmail service was hacked by IP addresses originating in China. Gmail was one of the primary services used by human rights activists because its servers are located in the United States and it offers extra encryption. To understand the magnitude of this, it is important to understand the history of e-mail hacking and the importance of physical server location and local laws.

In 2000, a student in the Philippines unleashed a computer virus that spread through a message with the subject line “I Love You.” The e-mail had a file attached, called LOVE-LETTER-FOR-YOU.TXT.vbs. The suffix “.txt” is generally used for text files and was meant, in this case, as a distraction; the file’s real suffix was “.vbs,” which marks the file as a script. When opened, the script e-mailed itself to everyone in the user’s address book before sending any available passwords to an e-mail address in the Philippines. One of the key aspects of this case, however, was a matter of simple jurisdiction: The student was not prosecuted, due to the lack of computer crime laws in the Philippines.[13]
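The trick relied on the fact that operating systems treat only the final extension as meaningful. The short Python check below is a sketch written for this chapter, not code from the virus; it simply shows that the apparently harmless “.TXT” in the middle of the name is cosmetic, while the trailing “.vbs” is what the system actually uses to decide how to run the file.

```python
from pathlib import Path

# A minimal look at the "double extension" trick; the filename is the one
# reported for the attachment, and nothing here reproduces the virus itself.
name = "LOVE-LETTER-FOR-YOU.TXT.vbs"
print(Path(name).suffixes)  # ['.TXT', '.vbs'] -- both visible extensions
print(Path(name).suffix)    # '.vbs' -- only the last one determines how the file is handled
```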

The encryption that Gmail uses resulted in only two of the accounts being successfully hacked, and hackers were only able to see e-mail subject lines and timestamps—no message content was available.[14] Since the chaos that ensued after the “I Love You” virus, e-mail users and service providers have become more vigilant in their defensive techniques. However, the increased reliance on e-mail for daily communication makes it an attractive target for hackers. The development of cloud computing will likely lead to entirely new problems with Internet security; just as a highway brings two communities together, it can also cause these communities to share problems.

Can’t Wait: Denial of Service

Although many people increasingly rely on the Internet for communication and access to information, this reliance has come with a hefty price. Most critically, a simple exploit can cause massive roadblocks to Internet traffic, leading to disruptions in commerce, communication, and, as the military continues to rely on the Internet, national security.

Distributed denial-of-service (DDoS) attacks work like cloud computing, but in reverse. Instead of a single computer going out to retrieve data from many different sources, DDoS is a coordinated effort by many different computers to bring down (or overwhelm) a specific website. Essentially, any web server can only handle a certain amount of information at once. While the largest and most stable web servers can talk to a huge number of computers simultaneously, even these can be overwhelmed.
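The arithmetic behind such an attack is simple. The figures in the sketch below are invented for illustration, not measurements of any real server or botnet: once enough compromised machines send requests at the same time, their combined traffic exceeds what the server can answer, and legitimate visitors are crowded out.

```python
# Illustrative numbers only -- chosen for this example, not drawn from a real attack.
server_capacity = 50_000        # requests per second the server can handle
zombie_machines = 20_000        # compromised computers in the botnet
requests_per_machine = 5        # requests per second sent by each machine

incoming_traffic = zombie_machines * requests_per_machine   # 100,000 requests per second
print(incoming_traffic > server_capacity)                   # True: the server is overwhelmed
```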

During a DDoS attack on government servers belonging to both the United States and South Korea in July 2009, many U.S. government sites were rendered unavailable to users in Asia for a short time.[15] Although this did not have a major effect on U.S. cyber-security, the ease with which these servers could be exploited was troubling. In this case, the DDoS attacks were perpetrated by an e-mail virus known as MyDoom, which essentially turned users’ computers into server-attacking “zombies.” This exploit—coupling an e-mail scam with a larger attack—is difficult to trace, partly because the culprit is likely not one of the original attackers, but rather the victim of a virus used to turn vulnerable computers into an automated hacker army. Since the attack, President Barack Obama has committed to creating a new post for a head of cyber-security in the government.

Net Neutrality

Most Internet users in the United States connect through a commercial Internet service provider (ISP). The major players—Comcast, Verizon, Time Warner Cable, AT&T, and others—are portals to the larger Internet, serving as a way for anyone with a cable line or phone line to receive broadband Internet access through a dedicated data line.

Ideally, ISPs treat all content impartially; any two websites will load at the same speed if they have adequate server capabilities. Service providers are not entirely happy with this arrangement. ISPs have proposed a new service model that would allow corporations to pay for a “higher tier” of service. For example, this would allow Time Warner to deliver its Hulu service (which it co-owns with NBC) faster than all other video services, leading to partnerships between Internet content providers and Internet service providers. The service providers also often foot the bill for expanding high-speed Internet access, and they see this new two-tiered service as a way to cash in on some of that investment (and, presumably, to reinvest the funds received).

The main fear—and the reason the FCC introduced net neutrality rules—is that such a service would hamper the ability of an Internet startup to grow its business. Defenders of net neutrality contend that small businesses (those without the ability to forge partnerships with the service providers) would be forced onto a “second-tier” Internet service, and their content would naturally suffer, decreasing inventiveness and competition among Internet content providers.

Net Neutrality Legislation: The FCC and AT&T

One of the key roadblocks to Internet legislation is the difficulty of describing the Internet and the Internet’s place among communication bills of the past. First of all, it is important to realize that legislation relating to the impartiality of service providers is not unheard-of. Before the 1960s, AT&T was allowed to restrict its customers to using only its own telephones on its networks. In the 1960s, the FCC launched a series of “Computer Inquiries,” stating, in effect, that any customer could use any device on the network, as long as it did not actually harm the network. This led to inventions such as the fax machine, which would not have been possible under AT&T’s previous agreement.

A key point today is that these proto–net neutrality rules protected innovation even when they “threatened to be a substitute for regulated services.”[16] This is directly relevant to a controversy involving Apple’s iPhone that culminated in October 2009 when AT&T agreed to allow VoIP (voice over Internet protocol) on its 3G data networks. VoIP services, like the program Skype, allow a user to place a telephone call from an Internet data line to a traditional telephone line. In the case of the iPhone, AT&T did not actually block the transmission of data—it just had Apple block the app from its App Store. Since AT&T runs the phone service as well as the data lines, and since many users have plans with unlimited data connections, AT&T could see its phone profits cut drastically if all its users suddenly switched to using Skype to place all their telephone calls.

Misleading Metaphors: It’s Not a Big Truck

Senator Ted Stevens, the former head of the committee in charge of regulating the Internet, said on the floor of the Senate that the Internet is “not a big truck…it’s a series of tubes.”[17] According to this metaphor, an e-mail can get “stuck in the tubes” for days behind someone else’s material, leading to poorer service for the customer. In reality, service providers sell data-usage plans that simply cap the rate at which someone can send data over the Internet (measured in bits per second, where a bit is the smallest unit of data). If a service is rated at 1.5 megabits per second (1.5 Mbps, or 1.5 million bits per second), it may only reach that speed once in a while—no one can “clog the tubes” without paying massive amounts of money for the service. Theoretically, the company will then invest this service fee in building more robust “tubes.”
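To see what such a rating means in practice, a quick back-of-the-envelope calculation helps (the 3-megabyte file size below is an arbitrary example): even at its full advertised speed, a 1.5 Mbps line needs roughly sixteen seconds to move a single e-mail attachment of that size.

```python
rate_bits_per_second = 1.5e6       # a 1.5 Mbps connection at its advertised peak
file_size_bytes = 3_000_000        # a hypothetical 3 MB attachment
file_size_bits = file_size_bytes * 8

transfer_time = file_size_bits / rate_bits_per_second
print(round(transfer_time), "seconds")  # 16 seconds -- and real speeds are usually lower
```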

Net neutrality is difficult to legislate in part because it can be confusing: It relies on understanding how the Internet works and how communications are regulated. Stevens’s metaphor is misleading because it assumes that Internet capacity is not already regulated in some natural way. To use the superhighway analogy, Stevens is suggesting that the highways are congested, and his solution is to allow companies to dedicate express lanes for high-paying customers (it should be noted that the revenue would go to the service providers, even though the government has chipped in quite a bit for information superhighway construction). The danger of this is that it would be very difficult for a small business or personal site to afford express-lane access. Worse yet, the pro–net neutrality organization Save the Internet says that a lack of legislation would allow companies to “discriminate in favor of their own search engines” and “leave the rest of us on a winding dirt road.”[18] For areas that only have access to one Internet service provider, this would amount to a lack of access to all the available content.

Digital Technology and Electronic Media

Terms and Conditions

The film Terms and Conditions May Apply details the ways our private information, such as our emails and texts, can easily be related to our public information on social networks. The filmmakers note that the knowledge and hardware needed to snoop on people are bought and sold all over the world and are often unregulated. Are we becoming more open because of the ways social media function? Is there anything wrong with that? Are we surrendering our privacy in ways that cannot be undone?

One of the major cultural challenges of the network society will be to deal with people in power who would like to use our information against us as a means of control. It has already happened in some of the countries where the Arab Spring revolutions took place, such as Egypt.[19]

You never know what you might need to protest in the future, but we’re beginning to see tools deployed to pre-empt protests and other acts of dissent.[20] What this means for our efforts to define digital culture is that digital culture can free us as individuals, but it can also imprison us.

We can use the internet and smartphones to help us to get questions answered and to draw attention to ourselves in good ways. We can coordinate with others for fundraisers and to have parties. Digital communication networks are amazingly sophisticated tools that can help us connect as individuals to form groups to celebrate all sorts of interests, political and otherwise.

On the other hand, if individuals believe they have no privacy, digital networks could become virtual wastelands where innovative collaboration is hindered and where corporate commercial speech and government surveillance dominate.

Capitalism depends on risk-taking, and if you kill risk-taking online, you have hindered the entrepreneurialism that the network society offers. We scholars will study for decades to come how individual behavior changes and how relationships morph in a digital culture that discourages behavior we want to keep private while simultaneously encouraging levels of sharing that border on exhibitionism. How can we maintain privacy and gain attention, which is so often the currency of the open Internet? This is an interesting dilemma that arises in an individualistic digital culture.

Post-nationalism

Most simply, “post-nationalism” in digital culture means that one’s country appears to matter less as an influence on behavior and values online than it does in the tangible world, perhaps because we can be free of our national identities when engaging in digital networks with people from around the globe.

This does not mean that we should expect to see an end to nationalism in the tangible world. Quite the opposite seems to be true: As post-nationalism appears in digital spaces, nationalism is on the rise in global politics.[21] It might seem odd that people drop their nationalism online but demand it in physical spaces, but if you look at the way culture is expressed online, it is clear that for many people their nationality has little to do with their online identities.

For example, your country may be important to you, but it may not be one of the ways you define yourself in social media environments. You can love America without talking about it all of the time on Facebook or Twitter. Remember as well that national boundaries may be felt more readily in the daily lives of Africans, Asians, Europeans and others living in nations that are geographically smaller, more tightly packed and culturally distinct. In digital spaces, these cultural differences can evaporate.

Although war and immigration are highly influential on the current cultural climate in the physical world, the perception of evaporating culture in networked spaces may help drive the sense that physical world cultures are being threatened.

Recent political developments, however, make it somewhat more difficult to think of digital culture as post-nationalistic given the rise of online nationalism — particularly white nationalism in Europe and the United States. White nationalism is a brand of nationalism related to white supremacy, but it is an identity connected to the nation-state nonetheless. Online nationalism may reflect not what nation-states ultimately become in the 21st century, but what nationalists wish they were. Even so, there is evidence that some factions will use digital spaces to promote a return to nationalism.

Does this mean that post-nationalism in digital culture is a false notion conceived in the early 2000s that has no bearing on culture today? Perhaps, but it is more likely that we are seeing a backlash against the rise of a global post-nationalist space online.

Globalization

Digital culture reflects a globalized or globalizing world.[22] Behaviors, interests, and relationships cross international boundaries. The economic structure of digital networks, including the mass media system, is global. For example, multinational conglomerate corporations tend to dominate the media industry, not just in the United States but around the world.[23] Books, academic articles and simple infographics show that most mass media companies fall under the ownership of large corporate firms. It is not accurate to say this represents all media or that “the media” are being controlled, but it is accurate to say a significant level of influence can be attributed to a handful of media corporations in most developed parts of the world.

Mass media consumers should be aware of the environment in which media products are produced, but this is not to say that the globalization of mass media is always a negative thing. When it comes to culture, globalization has its supporters. For example, K-Pop music originates in Korea, but its fanbase is spread worldwide, and it can reach a global audience only because of the global nature of digital networks. It works only because computer servers connected by wires all over the globe make this bit of culture, like many others, available to the entire world.

There exists a global point of view, in both the physical world and in digital culture, that is open to all kinds of cultural production as long as it is interesting, funny, or showcases great talent. There are videos that go viral globally, although it is not always clear why. (If we had the formula, we’d include it here.) All we can say at this time is that you can reach the world with any online message and, for whatever reason, some things are globally likable and “shareable.”

Key Takeaways

  • On one hand, the information superhighway has opened up rural areas to global connections and made communication and trade much easier. One downside, however, is that illicit and unwanted information can move just as quickly as positive information—it is up to the recipient to decide what to trust.
  • The lack of authorial attribution on many online forums can make it difficult to find credible information. However, Wikipedia’s concept of “verifiability,” or citing verified sources, has provided a good check to what can become a matter of mere my-word-against-yours. It is important to gauge possible bias in a source and to judge whether the author has an economic interest in the information.
  • Net neutrality is a general category of laws that seek to make it illegal for service providers to discriminate among types of Internet content. One downside of content discrimination is that a service provider could potentially make competitors’ sites load much more slowly than their own.

Exercises

  1. Find a website about a single product, musician, or author. Does the site have a stated or strongly implied author? Look for a copyright notice and a date, usually at the bottom of the page. How might that author’s point of view bias the information on the site? How can one determine the author’s credibility?
  2. The text discusses the concept of net neutrality and how it could impact small businesses and individual users. What are your thoughts on net neutrality? Do you think it’s essential for maintaining a “free and open” Internet?
  3. Cloud computing has revolutionized the way we store and access data. What are some advantages and disadvantages of relying on cloud computing for personal and professional use? Create a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis for cloud computing. Share your findings in a presentation or written report, and discuss how the rise of cloud computing might shape the future of digital technology.
  4. One of the repeated promises of the Internet is that it is truly democratic and that anyone can have a voice. Has this played out in a viable way, or was that a naive assumption that never really came to fruition?
  5. How has the concept of verifiability changed the way that “truth” is regarded on the Internet—even in the culture at large? Has the speed and volume with which new information becomes available on the Internet made verifiable information more difficult to come by?


  1. The Biz Media, “Video and the Information Superhighway: An Artist’s Perspective,” The Biz Media, May 3, 2010, https://web.archive.org/web/20110228233209/http://blog.thebizmedia.com/video-and-the-information-superhighway/.
  2. Jesdanun, Anick. “High Expectations for the Internet,” December 30, 2002, https://www.crn.com/news/channel-programs/18822182/high-expectations-for-the-internet.
  3. U.S. Department of Education, Toward a New Golden Age in American Education: How the Internet, the Law and Today’s Students Are Revolutionizing Expectations, National Education Technology Plan, 2004, https://web.archive.org/web/20110403131422/http://www2.ed.gov/about/offices/list/os/technology/plan/2004/site/theplan/edlite-intro.html.
  4. Ibid.
  5. Ibid.
  6. Kirk, Elizabeth E. “Evaluating Information Found on the Internet,” Sheridan Libraries, Johns Hopkins University, 1996, https://web.archive.org/web/20110410183820/http://www.library.jhu.edu/researchhelp/general/evaluating/.
  7. Basham, Patrick. “Live Earth’s Inconvenient Truths,” Cato Institute, July 11, 2007, http://www.cato.org/pub_display.php?pub_id=8497.
  8. Exxpose Exxon, “Global Warming Deniers and ExxonMobil,” 2006, https://web.archive.org/web/20110907004507/http://www.exxposeexxon.com/facts/gwdeniers.html.
  9. Greenpeace, ExxonMobil 2006 Contributions and Community Investments, October 5, 2007, https://web.archive.org/web/20100213104323/http://research.greenpeaceusa.org/?a=view&d=4381.
  10. Wikipedia, s.v. “Wikipedia:Verifiability,” http://en.wikipedia.org/wiki/Wikipedia:Verifiability.
  11. Wikipedia, s.v. “Wikipedia:Neutral point of view,” http://en.wikipedia.org/wiki/Wikipedia:Neutral_point_of_view.
  12. Revkin, Andrew C. “Hacked E-Mail Is New Fodder for Climate Dispute,” New York Times, November 20, 2009, http://www.nytimes.com/2009/11/21/science/earth/21climate.html.
  13. Zetter, Kim. “Nov. 10, 1983: Computer ‘Virus’ Is Born,” Wired, November 10, 2009, http://www.wired.com/thisdayintech/2009/11/1110fred-cohen-first-computer-virus/.
  14. Zetter, Kim. “Google to Stop Censoring Search Results in China After Hack Attack,” Wired, January 12, 2010, https://web.archive.org/web/20140329071855/http://www.wired.com/threatlevel/2010/01/google-censorship-china/.
  15. Gorman, Siobhan and Evan Ramstad, “Cyber Blitz Hits U.S., Korea,” Wall Street Journal, July 9, 2009, http://online.wsj.com/article/SB124701806176209691.html.
  16. Cannon, Robert. “The Legacy of the Federal Communications Commission’s Computer Inquiries,” Federal Communication Law Journal 55, no. 2 (2003): 170.
  17. Curtis, Alex. “Senator Stevens Speaks on Net Neutrality,” Public Knowledge, June 28, 2006, https://web.archive.org/web/20131211075527/http://www.publicknowledge.org/node/497.
  18. Save the Internet, “FAQs,” 2010, https://web.archive.org/web/20090612054713/http://www.savetheinternet.com/faq.
  19. Hamzawy, Amr. “Legislating Authoritarianism: Egypt’s New Era of Repression.” Carnegie Endowment for International Peace, March 16, 2017. https://carnegieendowment.org/2017/03/16/legislating-authoritarianism-egypt-s-new-era-of-repression-pub-68285.
  20. Bamford, James. “Washington’s Ministry of Preemption.” Foreign Policy (blog), May 31, 2017. https://foreignpolicy.com/2017/05/31/washington-ministry-of-preemption-united-states-intelligence/.
  21. The Economist. “League of Nationalists.” November 19, 2016. https://www.economist.com/international/2016/11/19/league-of-nationalists.
  22. Deuze, Mark. “Participation, Remediation, Bricolage: Considering Principal Components of a Digital Culture,” 2006. https://scholarworks.iu.edu/dspace/handle/2022/3200.
  23. Bagdikian, Ben H. The New Media Monopoly. Boston: Beacon Press, 2004.

License


Introduction to Communication and Media Studies Copyright © 2024 by J.J. Sylvia, IV is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
