Anthony J. Pennings, PhD

WRITINGS ON DIGITAL ECONOMICS, ENERGY STRATEGIES, AND GLOBAL COMMUNICATIONS

Symbolic Economies in the Virtual Classroom: Dead Poets and the Lawnmower Man

Posted on | November 4, 2021 | No Comments

The stark contrasts between the closed moral community of the preparatory Welton Academy in Dead Poets Society (1989) and the emotional and intellectual capers of its new teacher, John Keating, afford the opportunity to query the processes of energetic investment and signification in modernity’s educational spaces.

Likewise, the representation of educational subjectivity in The Lawnmower Man (1992) provides an ancillary contrast for exploring the technicalization of educational space. In particular, an interrogation of its operations on the body and its intellects could prove helpful in analyzing the symbolic dynamics at work in the “virtual classrooms”[1] emerging through new multimedia communications technology and telecinematic simulation equipment.

This post examines two films that address the production of modern educational spaces and subjectivities. Through them we can begin to figure the symbolic and energetic configurations in the “virtual classroom” and other technological environments for learning and training. Note that this is from my PhD dissertation Symbolic Economies and the Politics of Global Cyberspaces (1993).

The boarding school’s repressed libidinous and spiritual “economies” invite a reading of Dead Poets Society that focuses on socio-signifying practices. Notably, the film figures the role of the teacher as what Goux termed a symbolic third. Drawing on his quest for a general economics based on symbolic energies, we can figure the teacher as a representative of not only patriarchal but also logocentric significance. Like money, a condensation of value raises his position to that of privileged subject and evaluator, just as the chosen text rises to the select mode of signifying. The teacher becomes the mediator and arbitrator of intellectual values and texts and, consequently, develops a monopoly on the construction of facticity and truth.

The teacher, played by Robin Williams, is a “media event” in the sense that, by elaborating a series of emotionally and intellectually rich forms of signification, he disrupts the school’s anti-erotic sovereignties and traditional forms of educational worship. John Keating is a carefully constructed teacher-character who maintains a credible front to his peers while engaging his students in a series of revaluing exercises. His invoking of the philosophy of “carpe diem,” for example, disrupts the ascetic delays of pleasure and self-gratification that channel emotional and intellectual investments into the subjectivities prescribed by the school’s bourgeois governmentality.

His unusual behavior and pedagogy invoke a curiosity in his students that addresses their subjugated desires and self-construction. His former pact with a secret society of self-proclaimed poets awakens their dormant dreams of social adventure and expressive identities. This secret knowledge, time-tested by the ancients of their alma mater, promises sexual conquest and alternative forms of imagination. “Spirits soared, women swooned, and gods were created.” By re-presenting literary classics of Shakespeare and Milton, but with the voice of macho film star and arch-American John Wayne, he distorts the distinctions between “high” and “low” cultures and encourages the dissolution of aesthetic boundaries that work to solidify not only class distinctions but the socio-symbolic rigidifications of emotional affect.

The reincarnated “Dead Poets Society” organizes its meetings in a cave located off campus in a nearby forest. There they read unauthorized poetry, smoke cigarettes, and mix with women – all the activities that are forbidden at the school. As Gebauer points out, the symbology of the cave has never been about the outside world, but about the inside one. “Our imagination remains captive in the cave. We do, in fact, repeatedly seek out the cave in a different form.” Our ontology has its commencement in the topography of the cave, and he points out: “In one way or another, all our notions of paradise are linked with situations of the cave.”[2] This is also the encapsulating trajectory of Virilio’s last vehicle.

However, Keating’s enthusiastic ideations soon conflict with other domains of symbolic control, including the potent Oedipal dynamics that hold too tight a grip on one of his students. In his quest to act in a community play, the student defies his father’s demand that he cut down on his extracurricular activities, forges a permission slip, and performs the leading role of Puck in A Midsummer Night’s Dream.

The father inadvertently discovers the disobedience and shows up at the play to observe. Despite the acclaim and evident success, he fiercely pulls his son away from the backstage party. After a confrontation at home, in which, among other things, the mother’s disappointment is invoked to punish the son, the boy is forbidden to act again, at least until he finishes medical school. Faced with this paternal injunction, he takes his own life.

The death of the student presents a moral catastrophe that overpowers Keating’s privileged text of spontaneity and impunity. These are now recoded as degenerate improprieties, and their “unproductive” forms of expenditure are tallied against the teacher as infractions within the Calvinistic ledgers of the schoolmasters. The aggrieved father easily organizes the dismissal of the teacher.

As Keating collects his things from the classroom, the students respond by pledging their allegiance to the teacher and the teachings of the Dead Poets Society. They stand on top of their class desks and recite “O Captain! My Captain!” from Walt Whitman’s 1865 tribute to the recently assassinated U.S. president Abraham Lincoln. With this they honor Keating’s role as their navigator through the uncharted course of adolescent squanderings and discoveries.

The Dead Poets Society reflects the profound symbolic and historic investments structuring traditional education and how the currency of the teacher can facilitate new types of energetic and intellective exchanges. What will occur in new virtual environments of the Metaverse? If educational space is to become cyberspace in a socially and politically responsive way, then it behooves us to mark its inception with at least one strategy that is sensitive to the “economies” which mediate and control its symbolic investments.

Suppose we view education as the inscription of subject sovereignties and the socialization of new moral and administrative subjectivities required by the post-industrial information society (“proto-sovereignties”). In that case, the virtual classroom presents an alluring new vehicle for liberating expressive capabilities, massaging sensory intelligences, and prescribing new competencies in terms of workplace requirements or prevailing art and intellectual practices.
An instructive approach has been taken up by writers developing a history of computer technology around the theme of the “military information society.”[4] They rightly point to the military’s significant influence on the development of computer-generated simulation environments and information technology in general. Noble, for example, writes about the militarization of learning and the production of what he calls “mental materiel.”[5]

The merging of educational technology and the cognitive sciences received its impetus from the recognition that behaviorist theory had reached diminishing returns and that technical advances in cognitive/instructional technology would be more fruitful.[5] This combination emerged metaphorically in popular culture as “cyborg” imagery: machinery directly implanted into the corporeality of the docile body or, in most cases, designed to interface with it effectively. Cognitive science since its beginnings has been the “science of the artificial,” with the production of prescribed mental processes modeled on computer procedures and systems foremost on its laboratorial agenda.[6]

The Lawnmower Man presented a “cyberpunk” vision of the new technology. While the film has been criticized for its overbearing Frankensteinish narrative, its visual and technological settings drew from industry leaders, and it became a showcase for the potential of virtual reality (VR) technology. Its poster subheading, “Nature made him an idiot, science made him a god,” allows us a foray into the disciplining aspect of the new technology.

Virtual reality uses computer-controlled 3-D graphics and an interactive environment oriented from the learner’s or viewer’s perspective, one that tends to suspend the viewer’s awareness that the environment is produced. In virtual reality, the body is encased in a computer-mediated and, as Zuboff called it, “informated” environment that continuously records performative data. The user wears a sensor-laden set of goggles, and often gloves, tied to a computing system capable of tracking and responding to the user’s movements and commands. The system is still in transformation, and a variety of user interfaces will likely be marketed and brought into use.
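The core loop of such a system, tracking the head and logging every movement as performative data, can be sketched in miniature. This is an illustrative Python sketch, not any actual VR system’s code; the sensor fields, units, and clamping ranges are assumptions for demonstration:

```python
from dataclasses import dataclass, field

@dataclass
class HeadsetSample:
    """One reading from the goggles' orientation sensors (hypothetical units)."""
    yaw: float    # degrees turned left/right
    pitch: float  # degrees tilted up/down

@dataclass
class VRSession:
    """Holds the current viewpoint and logs every sample it has seen."""
    view: tuple = (0.0, 0.0)
    log: list = field(default_factory=list)

    def update(self, sample: HeadsetSample) -> tuple:
        # The rendered viewpoint simply follows the tracked head orientation:
        # yaw wraps around the circle, pitch is clamped to straight up/down.
        self.view = (sample.yaw % 360, max(-90, min(90, sample.pitch)))
        # Every movement is also retained, the continuous record of
        # "performative data" that makes the environment informated.
        self.log.append(self.view)
        return self.view

session = VRSession()
session.update(HeadsetSample(yaw=370, pitch=120))
print(session.view)  # (10, 90): yaw wrapped, pitch clamped
```

The point of the sketch is the second half of `update`: rendering and record-keeping happen in the same step, so using the environment and being measured by it are inseparable.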

In this story, Dr. Angelo of Virtual Space Industries has major contracts with the US government to experiment with VR to produce better-fighting, technology-competent soldiers. His initial work is with a chimpanzee, which is fitted into a sensory bodysuit and helmet and hangs suspended in a gyroscopic device that allows the body to turn 360 degrees in any direction. In combination with constant injections of vitamins and neurotropic drugs, the chimp is subjected to long hours of fight training within various electronically simulated environments.

When Dr. Angelo’s chimp escapes and kills a guard, it is hunted down and killed. The investigator then turns to a human subject to continue his work “on the evolution of the human mind.” Jobe is a dim-witted ward of St. Anthony’s Church who makes his living caring for the church and mowing lawns, one of which belongs to Dr. Angelo. Cajoled by the doctor’s argument that he could become smarter and thus avoid “people taking advantage of him,” Jobe agrees to undergo some tests and participate in the VR training.

Unfortunately, the government liaison tampers with the serums and computer learning programs. He installs “Project Five” formulas that were designed to produce extreme forms of aggression for warfare. The continuous work on Jobe had originally transformed him into an attractive, socially graceful, and intelligent subject, but the new program transforms him into a symbolic authority figure and a despotic shaman. Through his electronically enhanced and meticulous training, Jobe becomes a “cyberchrist” and enters the world’s telecommunications networks with the promise that he will give us what we yearn for — a figurehead to lead us.

The Lawnmower Man counters the mythic tendency that VR is becoming a liberation technology, that it will soothe our souls and free our consciousness. Instead, it suggests VR’s trajectory is one of efficiency and training that presents its own positivities and productions. The movie lacks the moral subtlety that might have made it more successful. Still, it serves to pick up on some of the discourse that VR has fit into and exposes a large audience to questions regarding VR technology.

Educational “visionaries” are “tripping over themselves to transform the schools, unwittingly, into a staging ground for playing out militarized scenarios.”[7] Combined with the new imperatives of international capital, which has become totally dependent on the new information technologies, mechanized learning “becomes a site for the actual production of ‘mental materiel’ – for the design and manufacture of ‘intellectual capital.’” Public education is implicated as both a laboratory and a site of legitimization for the new technical learning. A new “cognitivist agenda” responded to corporate demands for workers with “problem-solving” skills and the ability to interpret and construct “abstract symbolizations.”

The lessons of these film analyses textualize both the romantic and disciplinary notions of education and inform contemporary circulations and ideations of educational policy and practice. A long tradition of involving the viewer in a cinematic experience of suspended disbelief has produced a rich body of textual interpretation that may be helpful for analyzing virtual reality applications in educational spaces.

Postscript

The Covid-19 era is giving VR new life due to the urgency of social distancing and the possibilities of the technology. In Synthetic Worlds: The Business and Culture of Online Games (2005), Edward Castronova pointed to three trends transforming the gaming version of VR that are relevant to education. First, while most people associated VR with hardware (goggles, gloves, and other haptic devices), advancements in software and network protocols have driven its development; software engines like Unreal, in particular, have accelerated speeds and increased resolution for both augmented and virtual reality. Second is the enhancement of communities and collaboration: it’s not just about individuals, but individuals working together. Third is the development of commercial markets for virtual environments, items, and even avatars. With the Metaverse now the focus of corporations like Facebook and Nvidia, we are entering a wild west of virtual life.

Citation APA (7th Edition)

Pennings, A.J. (2021, Nov 04) Symbolic Economies in the Virtual Classroom: Dead Poets and the Lawnmower Man. apennings.com https://apennings.com/meaningful_play/symbolic-economies-in-the-virtual-classroom-dead-poets-and-the-lawnmower-man/


Notes

[1] Hiltz, R.S. (1986) “The Virtual Classroom,” Journal of Communication. Spring.
[2] Gebauer, G. (1989) “The Place of Beginning and End: Caves and Their Systems of Symbols,” In Kamper & Wulf (eds.) Looking Back on the End of the World. (NY: Semiotext(e) Foreign Agents Series). p. 28.
[3] See Chapter 5.
[4] Levidow, L. and Robins, K. (1989) Cyborg Worlds: The Military Information Society. (London: Free Association Books).
[5] Noble, D. D. (1989) “Mental Materiel: The Militarization of Learning and Intelligence in US Education,” in Levidow, L. and Robins, K. Cyborg Worlds: The Military Information Society. (London: Free Association Books). p. 22.
[6] Quote from H.A. Simon (1981) “Cognitive science: the newest science of the artificial,” in D.A. Norman, ed. Perspectives on Cognitive Science. (Hillsdale, NJ: Ablex/Erlbaum). pp. 13-25.
[7] Noble, D. D. (1989) “Mental Materiel: The Militarization of Learning and Intelligence in US Education,” in Levidow, L. and Robins, K. Cyborg Worlds: The Military Information Society. (London: Free Association Books). p. 35.
Ibid., p. 34.


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is Professor at the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught in the Digital MBA program at St. Edwards University in Austin, Texas. Most of his career was spent in New York, at Marist College for three years and New York University for ten years. His first academic position was at Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

Building Dystopian Economies in Facebook’s Metaverse

Posted on | October 31, 2021 | No Comments

Facebook’s new name “Meta” and its interest in a VR metaverse have me flashing back to a talk I gave on April 17, 2008 about Second Life, an online virtual world that was popular before social media began to dominate the Web. This post examines the dynamics and economies of virtual worlds while considering the hybrid conversions and interactions with the world outside them.

It’s no mistake that I’m writing this on Halloween, the holiday of expressive meanings and facades. Virtual environments like Fortnite, Minecraft, and Roblox explode with creative representations, not just as places to visit virtually but as opportunities to interact, play, and conduct commerce. Now the Metaverse promises new online services like working virtually and digital goods like designable avatars and transactable NFTs (Non-Fungible Tokens). But are people ready to don VR helmets and live, play, and work in a Ready Player One universe?

The talk was held in downtown New York City at the Woolworth Building, known as the “Cathedral of Commerce” when it was built in 1913. The location was strangely appropriate given the topic, a wrap-up of a year-long project at New York University on Second Life. The project involved an animation class taught by Mechthild Schmidt-Feist, and my class, the Political Economy of Digital Media. I still have the tee-shirt my students gave me that says “Got Linden?” a reference to Second Life’s currency, the Linden. Click on the image for a larger view of the mind map notes from my talk:

Symbolic Economies of Second Life talk

The talk gave me a chance to tie in discussions from my PhD dissertation, Symbolic Economies and the Politics of Global Cyberspaces (1993). I went back to my dissertation because it looked at money in fictional virtual worlds, from Sir Thomas More’s Utopia (1516) to Samuel Butler’s Erewhon (1872) and William Gibson’s cyberspace trilogy. His classic Neuromancer (1984)[1] originally conceived “cyberspace” as a digital world, the “matrix,” that someone could “jack into” with a neural plug in their skull. I also discussed Neal Stephenson’s Snow Crash (1992), which coined the term “Metaverse.”

At the time, I was looking for a way to understand how money could become electronic. My Masters thesis was on banking and telecommunications deregulation and how it led to the “Third World debt crisis” and the privatization of public assets worldwide. But it was empirical and largely descriptive. Cyberpunk was intriguing as a way to theorize money in some “future” settings.

So it was useful to examine historical texts regarding money. Sir Thomas More’s Utopia (1516) was central, as it imagined a place WITHOUT MONEY. Utopia, which means “no place,” was an imaginary land ruled by its leader Utopus. The island nation outlawed gold and often ridiculed it. But value was not absent from certain symbolic formations, including the significance of the leader Utopus himself. So I used the notion of symbolic thirds from Jean-Joseph Goux to analyze the political economy of utopias and the “dys”-topias of the cyberpunk genre novels.[2] This strategy also involved a reversal: what happens in a place with an excess of money and other symbolic forms?

Dystopias are places with excesses – of money and other significations. Meanings circulate, slide about, and proliferate. Symbolic economies refer to society’s tendency to both fix meanings and elevate certain things into “symbolic thirds.” Social pressures tend to isolate and elevate a member of a category to a higher position that is then used to judge the other members of that class. Money is the most recognizable symbolic third, but others include certain artists, captains, creative works, monarchs, and political leaders. Donald Trump, it can be argued, rose to a type of symbolic currency for the MAGA movement. These thirds evaluate relationships between things, assign and reconcile corresponding values, and facilitate exchanges. I decided to revisit my old work that examined Second Life and now turn to Facebook’s Metaverse to investigate how economies and monies can emerge in these electronic/digital environments.

I had a helpful bridge, the work of Cory Ondrejka, a Second Life co-founder and Chief Technology Officer of Linden Lab, the developer of Second Life. His “Escaping the Gilded Cage: User Created Content and Building the Metaverse” also used the cyberpunk genre as a point of departure and demonstrated how online infrastructures can provide opportunities for different kinds of online worlds and economies to emerge and harness the power of player creativity. He focused on the costs and strategies of producing video games (particularly massively multiplayer online games, or MMOGs) and virtual environments to build online spaces that match the richness and complexity of the real world.[3]

Ondrejka addressed problems that plagued virtual reality and identified five issues with creating content in digital worlds like Second Life, as well as in video games for consoles such as PlayStation and Xbox. They were the difficulties of:

  1. creating first-class art;
  2. managing the lengthy development cycles needed;
  3. producing the required hours of gameplay;
  4. accommodating the many players involved; and
  5. hiring and effectively managing the large teams needed to create digital content.

These technical issues continued to be addressed by software engines like Unity and Unreal. Combined with innovations in hardware such as the Oculus Rift headset and the HTC Vive Virtual Reality System, they have created significant advances in virtual environments and interaction. Increases in computation speed and facility in scripting languages have also empowered professional designers and players to create new immersive experiences.

The resultant tools and techniques of these VR engines include:

  1. better scripts for avatar and player locomotion;
  2. responsive user interface synchronization and haptic (feeling) controls;
  3. editor techniques to create virtual 3D environments such as landscapes and urban architectures;
  4. adjustable stereo media for VR Head Mounted Displays (HMD);
  5. advanced plugins to expand and facilitate VR interactions and gameplay elements, and;
  6. high-volume high-speed/low-latency data networking that connects the temporal and spatial dimensions of a VR/AR environment with the user navigation actions.

They can be combined to continue to improve the desired quality of the virtual experience for the end user, despite expected limitations in available system resources. However, they are insufficient to maintain the gameplay and psychic investments necessary to ensure long-term engagements in a virtual world like the Metaverse.
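The first item on that list, avatar locomotion, reduces to a small amount of per-frame arithmetic that any of these engines performs. A minimal Python sketch, with assumed units and an invented Avatar class rather than any engine’s real API:

```python
from dataclasses import dataclass

@dataclass
class Avatar:
    """A hypothetical avatar; real engines expose far richer controllers."""
    x: float = 0.0
    y: float = 0.0
    speed: float = 2.0  # metres per second, an assumed walking speed

    def step(self, dx: float, dy: float, dt: float) -> None:
        """Advance along a (dx, dy) input direction for one frame of dt seconds."""
        length = (dx * dx + dy * dy) ** 0.5
        if length > 0:
            # Normalize the input so diagonal movement is no faster.
            self.x += self.speed * dt * dx / length
            self.y += self.speed * dt * dy / length

avatar = Avatar()
for _ in range(90):            # ninety frames of 1/90 s: one simulated second
    avatar.step(1, 0, 1 / 90)
print(round(avatar.x, 6))      # about 2.0 metres travelled
```

Scaling movement by the frame time `dt` is what keeps locomotion consistent whether the headset renders at 60 or 90 frames per second.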

Creating successful virtual worlds and economies with the traditional model of a single development group is highly unlikely. User-created content is a key ingredient of a dynamic online world with a vibrant economy, and that requires vested user participation. Do-it-yourself (DIY) tools and techniques need to be tied to online and real-world rewards.

One of Ondrejka’s inspirations was The Mystery of Capital by Hernando de Soto, a popular economist throughout the “Third World,” who viewed capitalism as a system of representations and rights. He argued that connecting poor people to property via a system of legal representations was the most effective way to empower them to build wealth. Connecting the construction of online wealth to individuals would likewise be crucial to successful virtual economies.[4]

Ondrejka suggested that online virtual worlds with vibrant economies are really only possible if:

  1. users are given the power to collaboratively create the content within them;
  2. each of those users receives broad rights to their creations, primarily property rights over virtual land, in-world games, avatar clothes, and the like; and
  3. users are able to convert those creations into real-world capital and wealth.

Online virtual worlds need a system of incentives and symbolic currencies to propel them. The software tools to create and program virtual content are readily available but the legal systems need to be refined for online environments to protect property rights both within the virtual sphere and outside it.

Player-created content was not entirely new to virtual environments. id Software, a small Texas-based company, used the ego-centric perspective to create the first-person shooter (FPS) game Wolfenstein 3D in May of 1992. id followed with the extraordinarily successful DOOM in December 1993. DOOM combined a shareware business model with the distribution capabilities of the emerging Internet. Months before Netscape introduced its first browser as freeware over the Web, DOOM enthusiasts by the droves were downloading the game by FTP to their PCs, many of them with just a 14.4 kbps modem. In a prescient move, id decided to make DOOM’s source code available to its users, which allowed new modifications to the game called “mods.”

This innovation allowed fans to create their own 2.5D (not quite 3-D) levels and distribute them to other players. A popular mod had characters from The Simpsons animated TV show running around the DOOM environment; Homer Simpson could renew his health by finding and eating donuts. The US military created a version called Marine DOOM designed to introduce soldiers to urban fighting and the idea of killing. Many of the company’s new employees were recruited because of the excellence of their mods, and the extra help allowed id to create the next stage of its innovative online gameplay, QUAKE.

Second Life was born in June 2003 and offered users the ability to create content using built-in tools. They could develop objects and give them scripted behaviors (e.g., a tree with its leaves swaying in the wind). They could create their own avatars (representations of themselves or entirely fictitious personas) and architectural structures. They could buy and sell land and any other objects they made because the world had an in-world currency and sought to protect intellectual property. Some 99% of the new world was user-created, and no permits, pre-approval processes, or separate submissions were required. The key was the ability to perform transactions and maintain property rights.
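The pattern of user-created objects carrying attached behavior scripts can be illustrated abstractly. The sketch below is hypothetical Python, not Second Life’s actual scripting language (LSL); the class and function names are invented:

```python
import math

class WorldObject:
    """A user-created object that can carry an attached behavior script."""
    def __init__(self, name: str, behavior=None):
        self.name = name
        self.lean = 0.0           # current lean angle in degrees
        self.behavior = behavior  # a callable run on every simulation tick

    def tick(self, t: float) -> None:
        # The world runs each object's script; creators supply the script.
        if self.behavior:
            self.lean = self.behavior(t)

def sway(t: float) -> float:
    """A swaying motion: the lean angle oscillates gently with time."""
    return 15.0 * math.sin(t)

tree = WorldObject("oak", behavior=sway)
tree.tick(math.pi / 2)      # at the peak of the oscillation
print(tree.lean)            # 15.0
```

The separation matters: the platform owns the tick loop, while users own the behaviors, which is what makes a 99% user-created world administratively possible.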

It will be interesting to watch these dystopian virtual worlds emerge. Gibson used the motif of “the biz” to refer to the dangerous circulations of various currencies in his cyberspace trilogy, both online and in the gritty urbanscape where natural bodies existed. But virtual markets for currency exchanges had existed since at least the 1970s, when Reuters introduced its Monitor Money Rates service.

We’ve been living in this online/offline world for a while. Databases and spreadsheets structure the “real” world, but we don’t often perceive the changes or attribute causality to online dynamics. Amazon’s impact on retailing is one of the more pronounced effects, but you must go to the mall or see a local store close to conceptualize its effects in the offline world. Amazon excels in its offline infrastructure, but the integration of its online capabilities gives it special power. Big data collection and search, recommendation algorithms, and one-click payment capabilities (licensed to Apple), combined with its delivery information and extensive logistics and warehouse system, give it massive economic power. These trends suggest that the Metaverse, or Nvidia’s Omniverse, will have synergistic economic effects in the offline world.

The dynamism of virtual worlds will depend on the ability of participants to communicate, collaborate, and sell items to each other for in-game virtual currency, or to barter effectively for such things. This market environment will require a legal framework and rules for purchasing and owning in-game items and properties. Converting virtual currencies to real-world currencies and vice versa is also crucial in the dystopic economies of the metaverse vision.
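Mechanically, converting an in-world balance to real-world money is a rate-and-fee calculation. A hedged Python sketch, with an illustrative exchange rate and fee rather than any real exchange’s figures:

```python
def to_real_currency(amount_virtual: float, rate: float, fee_pct: float = 2.5) -> float:
    """Convert an in-world balance to real-world money, less an exchange fee.

    The rate and fee here are illustrative placeholders, not actual market
    figures from any virtual-currency exchange.
    """
    gross = amount_virtual * rate
    # The exchange keeps a percentage; the remainder reaches the real world.
    return round(gross * (1 - fee_pct / 100), 2)

# 10,000 units of a virtual currency at an assumed $0.004 each, less 2.5%:
print(to_real_currency(10_000, 0.004))  # 39.0
```

The hard part, as the paragraph above notes, is not this arithmetic but the legal framework that makes the conversion enforceable on both sides of the boundary.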

The types of personal interactions via avatars are what make the metaverse intriguing, and controversial. Second Life ultimately wasn’t that impressive because people were just walking around in their avatar “costumes” most of the time and wound up insulting or hitting on people with sexual innuendos. I think successful participation will be a matter of how these environments and interactions are organized. Multiplayer games elicit a lot of “trash talk,” but they are structured around the gameplay. Productive online places might need dress codes and other parameters for behavior, depending on their purpose.

Ironically, it was Facebook (and the digital camera) that largely displaced Second Life. People quickly became bored of living through avatars in a world of impersonal, jerky, low-resolution graphics. Second Life became a sort of continuous online Halloween party with people hidden in costumes and behind constructed facial and body facades. Instead, people gravitated to Facebook and then Instagram because they preferred a representational style that was closer to real life. They wanted to post pictures of friends and family, and of course, their cats, dogs, and dinner foods. They wanted to construct stories of themselves and share memes and narratives about what they found funny and important in the world.

So are people ready for the Ready Player One universe? That remains to be seen.

Citation APA (7th Edition)

Pennings, A.J. (2021, Oct 31) Building Dystopian Economies in Facebook’s Metaverse. apennings.com https://apennings.com/meaningful_play/dystopian-economies-in-facebooks-metaverse/


Notes

[1] Gibson, W. (1984) Neuromancer. New York: Ace Books.
[2] Goux, J. (1990) Symbolic Economies. (Ithaca: Cornell University Press).
[3] Ondrejka, C. R. “Escaping the Gilded Cage: User Created Content and Building the Metaverse.” Available at SSRN: https://ssrn.com/abstract=538362
[4] De Soto, H. (2000) The Mystery of Capital: Why Capitalism Triumphs in the West and Fails Everywhere Else. (New York: Basic Books).


Anthony J. Pennings, PhD is a Professor at the State University of New York, Korea. Previously, he taught at St. Edwards University in Austin, Texas and was on the faculty of New York University from 2002-2012. He has been examining the emergence of digital money and monetary policy since he wrote his Masters thesis on telecommunications and banking deregulation and how it led to the “Third World Debt Crisis” in the early 1980s.

Hypertext, Ad Inventory, and the Use of Behavioral Data

Posted on | October 14, 2021 | No Comments

Artificial Intelligence (AI) and the collection of “big data” are quickly transforming from technical and economic challenges to governance and political problems. This post discusses how the World Wide Web (WWW) protocols became the foundation for new advertising techniques based initially on hypertext coding and online auction systems. It also discusses how the digital ad economy became the basis of a new means of economic production based on the wide-scale collection of data and its processing into extrapolation products and recommendation engines that influence and guide user behaviors.

As Shoshana Zuboff points out in her book, Surveillance Capitalism (2019), the economy expands by finding new things to commodify, to make into products or services that can be sold.[1] When the Internet was opened to commercial traffic in the early 1990s and the World Wide Web established the protocols for hypertext and webpages, a dramatic increase in content and ad space became available. New virtual “worlds” of online real estate emerged. These digital media spaces were made profitable by placing digitized ads on them.

Then, search engines emerged that commodified popular search terms for advertising and also began to produce extraordinary amounts of new data to improve internal services and monitor customer behaviors. Data was increasingly turned into prediction products for e-commerce and social entertainment. Much of the data is collected via advertising processes, but also purchasing behaviors and general sentiment analysis based on all the online activity that can be effectively monitored and registered. The result is a systemic expansion of a new system of wealth accumulation that is dramatically changing the conditions of the prevailing political economy.

The Internet’s Killer App

The World Wide Web was the “killer app” of the Internet and became central to the modern economy’s advertising, data collection, e-commerce, and search industries. Killer apps are computer applications that make the technology worth purchasing. Mosaic, Netscape, Internet Explorer, Firefox, Opera, and Chrome were the main browsers for the WWW that turned Internet activities into popular and sometimes profitable pastimes.

In addition, computer languages made the WWW malleable. Markup languages like HTML were utilized to display text, hypertext links, and digital images on web pages. Programming languages like JavaScript, Java, Python, and others made web pages dynamic, first by working within the browser and then on servers that could determine the assembly and content of a web page, including where to place advertisements.

Hypertext and the Link Economy

The World Wide Web actualized the dream of hypertext, linking a “multiverse” of documents long theorized by computer visionaries such as Vannevar Bush and Ted Nelson. Hypertext provides digital documents with links to other computer resources. What emerged from these innovations was the link economy and the meticulous collection and tracking of information based on the words, numbers, graphics, or images that people “clicked.”
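What a browser treats as a hypertext link is just machine-readable markup, which is part of what made the link economy so easy to instrument. As a rough illustration (the page content below is invented), Python’s standard-library HTML parser can pull every link target out of a document:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags from an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Every anchor tag with an href is a hypertext link to another resource.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = """
<html><body>
  <p>See <a href="https://example.com/docs">the docs</a> and
  <a href="https://example.com/about">about us</a>.</p>
</body></html>
"""

extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)
```

Search engines and ad networks built far more elaborate versions of this basic operation, crawling pages and cataloging where their links pointed.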

Apple’s HyperCard in the late 1980s created documents with “hot buttons” that could access other documents within the same Apple Macintosh computer. Tim Berners-Lee at CERN used one of Steve Jobs’ NeXT computers to create the hypertext environment that, grafted onto the Internet, became the World Wide Web. The HyperText Transfer Protocol (HTTP) allowed links in a document viewed in a web browser to retrieve information from anywhere on the Internet, thus the term World Wide Web.

The “click” within the WWW is an action with a finger on a mouse or trackpad that initiates a computer request for information from a remote server. For example, online advertising entices a user to click on a banner that leads to an advertiser’s website. The ability to point and click to retrieve a specific information source created an opportunity to produce data trails that could be registered and analyzed for additional value. This information could be used for quality improvement and also for estimating the probabilities of future behaviors.
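A minimal sketch of how clicks become a data trail: each click on a banner requests a tracking URL whose query string identifies the campaign and the ad, and those requests can be tallied afterward. The URL format and campaign names here are invented for illustration; real ad servers log far more (timestamps, cookies, referrers).

```python
from urllib.parse import urlparse, parse_qs
from collections import Counter

# Hypothetical click-log entries recorded by an ad server.
click_log = [
    "https://ads.example.com/click?campaign=spring&ad=banner1",
    "https://ads.example.com/click?campaign=spring&ad=banner2",
    "https://ads.example.com/click?campaign=fall&ad=banner1",
    "https://ads.example.com/click?campaign=spring&ad=banner1",
]

def tally_clicks(log):
    """Count clicks per campaign by parsing each tracking URL's query string."""
    counts = Counter()
    for url in log:
        params = parse_qs(urlparse(url).query)
        counts[params["campaign"][0]] += 1
    return counts

print(tally_clicks(click_log))
```

Aggregates like these, joined with other behavioral signals, are the raw material for the prediction products discussed above.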

All these new websites, called “publishers” by the ad industry, contained the potential for “impressions” – spaces on the website that contained code that called to an ad server to place a banner ad on the website. The banner presented the brand and allowed visitors to click on the ad to go to a designated site. Over the years, this process became quite automated.

Ad Metrics

When a web page retrieves and displays an ad, it is called an impression. Cost per mille (CPM), or cost per thousand impressions, is one monetization strategy that measures an advertiser’s costs when their ad is shown: the advertiser pays a set rate for every one thousand times the ad is called to the site. Online ads have undergone a bit of a resurgence because they do more for branding than previously recognized.

A somewhat different strategy is based on the click-through rate or CTR. In the advertising world, CTR is a fundamental metric. It is the number of clicks that a link receives divided by the number of times the ad is shown:

CTR = (clicks ÷ impressions) × 100

For example, if an ad has 1,000 impressions and five clicks, then your CTR would be 0.5%. A high CTR is a good indication that users find your ads intriguing enough to click. Averages closer to 0.2 or 0.3 percent are considered quite successful as banner popularity has decreased.
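The two metrics reduce to a few lines of arithmetic. The sketch below simply restates the formulas above; the $2.00 CPM rate in the usage line is an arbitrary example, not a market figure.

```python
def ctr(clicks, impressions):
    """Click-through rate as a percentage: (clicks / impressions) * 100."""
    return clicks / impressions * 100

def cpm_cost(cpm_rate, impressions):
    """Total advertiser cost under a CPM model: a rate charged per 1,000 impressions."""
    return cpm_rate * impressions / 1000

print(ctr(5, 1000))          # the example above: five clicks on 1,000 impressions
print(cpm_cost(2.00, 50000)) # 50,000 impressions at a hypothetical $2.00 CPM
```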

The Monetization of Search

An advertiser can also pay the publisher when they specifically drive traffic to a website. This is called Pay-per-click (PPC) or cost per click (CPC). PPC is now used by search engines as well as some website publishers.

PPC can be traced to 1996, when Planet Oasis launched the first pay-per-click advertising campaign. A year later, the Yahoo! search engine and hundreds of other content providers began using PPC as a marketing strategy. Pricing was based on a flat-rate cost per click ranging from $0.005 to $0.25. Companies vied for the prime locations on host websites that attracted the most web traffic. As competition increased for preferred online ad spaces, the click-based payment system needed a way to arbitrate the advertisers’ competing interests.

This led to the first auction systems based on PPC. A company called GoTo.com was created at Idealab, a Pasadena-based incubator run by Bill Gross. GoTo.com, later renamed Overture, launched the first online auction system for search in 1998.

Gross thought the concept of Yellow Pages could be applied to search engines. These large books were significant money makers for telephone companies. Businesses would pay to have their names and telephone numbers listed or purchase an ad listed under a category like bookstore, car insurance, or plumber.

Many words entered into online searches were also strongly connected to commercial activities and potential purchases. Therefore, it made sense that advertisers might pay to divert a keyword search to their proprietary websites. How much they would pay was the question.

Overture’s real-time keyword bidding system paid online publishers a specific price each time their link was clicked. It even developed an online marketplace so advertisers could bid against one another for better placement. Bidding started with clicks worth only 1 cent, with the expectation that valuable keywords would be worth far more. The term PPC emphasized that it was more important that the link be clicked than seen. By the end of the dot-com bubble in 2001, Overture was making a quarter of a billion dollars a year.

The tech recession in the early 2000s put new pressures on Internet companies to develop viable revenue models. Google had developed the best search engine with its PageRank system but wasn’t making enough money to cover its costs. PageRank ordered search results based on how many valid websites linked to a website, so a company like Panasonic would have many valid sites linking to it. Sites that attracted other search engines merely because they listed the names of major companies would not get the same priority with Google. But good search did not mean good profits.
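PageRank’s core idea, that a page linked to by many ranked pages should itself rank higher, can be sketched as a simplified power iteration. This toy version (with an invented four-page “web”) omits refinements of the real algorithm, such as proper handling of dangling pages, whose rank simply leaks here:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Simplified PageRank: pages linked to by ranked pages gain rank.

    `links` maps each page to the list of pages it links to.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Each page keeps a small "teleport" share, then receives an
        # equal split of the damped rank of every page linking to it.
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# A toy web: a widely linked brand site and a few smaller sites.
web = {
    "blog": ["brand"],
    "news": ["brand"],
    "shop": ["brand", "blog"],
    "brand": [],
}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))  # "brand" accumulates the most link authority
```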

The dominant strategy at the time was to build a portal to other sites. People would come to the website for the content, and the banner ads would provide revenues. Companies would license search capabilities from DEC’s AltaVista or Inktomi and build content around it. This is how companies like HotBot and Yahoo! progressed. So it was a mystifying surprise when Google rolled out its website with no content or banners, just a logo with an empty form line for entering search terms.

Informed by Overture, Google rolled out a new advertising model called AdWords in late 2000. Initially a CPM (cost per thousand impressions) model, it developed into a subscription model that allowed Google to manage marketing campaigns. Then, in 2002, a revamped AdWords Select incorporated PPC advertising with an auction-based system based on the work at Idealab.

Overture sued Google for infringement of its intellectual property but eventually settled. By then, Overture had been acquired by Yahoo!. In Overture’s system, advertisers “bid” against each other to be ranked high for popular words; when someone searched and clicked on a word like “insurance,” the sites of the highest bidders appeared first. Overture had also automated the subscriber account process. The settlement exchanged millions of Google shares for the intellectual property rights to the bidding and pay-per-click systems. The deal marked a turning point in the digital ad economy. Emerging powerfully with keyword search and auctioning, and combined with MapReduce- and Hadoop-driven “big data” processing, Google’s AdWords became an immediate revenue driver.

How Does the AdWords Auction Work?

[Infographic from Visually: how the AdWords auction works.]
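The auction’s widely described mechanics can be sketched as a generalized second-price (GSP) auction: advertisers are ranked by bid, and each winner pays just above the bid of the advertiser below them. This is a simplification; the real AdWords ranking also factors in a quality score, and the advertiser names and bids below are invented.

```python
def gsp_auction(bids, slots):
    """Generalized second-price auction for ad slots.

    Each advertiser bids a maximum cost-per-click; the highest bidders win
    the slots in order, and each winner pays one cent above the next bid
    down. `bids` maps advertiser -> bid; returns [(advertiser, price), ...].
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i in range(min(slots, len(ranked))):
        winner, _ = ranked[i]
        # The lowest-ranked winner pays the minimum bid of $0.01.
        next_bid = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        results.append((winner, round(next_bid + 0.01, 2)))
    return results

bids = {"acme": 2.50, "globex": 1.75, "initech": 0.90}
print(gsp_auction(bids, slots=2))
# [('acme', 1.76), ('globex', 0.91)]
```

Note that the top bidder’s price is set by the runner-up’s bid, not its own, which is what gives advertisers an incentive to bid close to their true valuation of a keyword.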

Google also bought YouTube in 2006 and eventually created a new ad market for streaming videos. It used a newer advertising product called AdSense, offered in 2003 after Google acquired Applied Semantics. AdSense served advertisements based on site content and audience, placing ads around or on the video according to what it sensed the content was about. Monetization depended on the type of ad, the cost of views (CPM), and the number of views.

Using Behavioral Data

Facebook’s social media platform started its ascent in 2005, but it also needed a way to monetize its content. It first focused on gathering users and building its capital base, which it used, in part, to acquire several companies for their technical base, such as the news aggregator FriendFeed. By 2009, it had determined that advertising and data-gathering would be its profit-making strategy, with Facebook Ads and Pages.

Facebook started as a more traditional advertising medium, at least conceptually. It would provide content designed to capture the user’s awareness and time, and then sell that attention to advertisers. Advertising had always merged creativity and metrics to build its business model. Facebook capitalized on the economies of the user-generated content (UGC) model and added user feedback experiences such as the “like” button. Sharing features and space for comments provided a more interactive experience, adding “dopamine hits.”

Facebook had the tools and capital to build an even more elaborate data capturing and analysis system. It started to integrate news provided via various feeds, using coding techniques and XML components to move beyond just a user’s friends’ content. Facebook built EdgeRank, an algorithm that decided which stories appeared in each user’s newsfeed. It used hundreds of parameters to determine what would show up at the top of the user’s newsfeed based on their clicking, commenting, liking, sharing, tagging, and, of course, friending.
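EdgeRank, as publicly described, multiplied a user-to-author affinity, an action weight, and a time decay for each story. The sketch below scores single stories with an exponential decay; the formula’s exact shape, the half-life, and all the story values are illustrative assumptions, not Facebook’s actual parameters.

```python
import math

def edge_score(affinity, weight, age_hours, half_life=24.0):
    """EdgeRank-style score: affinity x action weight x time decay.

    The decay halves a story's score every `half_life` hours, so fresh
    stories from close friends with high-value actions rise to the top.
    """
    decay = math.exp(-math.log(2) * age_hours / half_life)
    return affinity * weight * decay

# Hypothetical stories competing for one user's newsfeed.
stories = [
    {"id": "photo_from_close_friend", "affinity": 0.9, "weight": 3.0, "age_hours": 12},
    {"id": "link_from_acquaintance",  "affinity": 0.2, "weight": 1.0, "age_hours": 2},
    {"id": "old_popular_video",       "affinity": 0.6, "weight": 4.0, "age_hours": 96},
]

ranked = sorted(stories, reverse=True,
                key=lambda s: edge_score(s["affinity"], s["weight"], s["age_hours"]))
print([s["id"] for s in ranked])
```

Even this toy version shows why the feed is opaque to users: a small change to any one multiplier reorders what they see.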

Facebook then moved to more dynamic machine learning-based algorithms. In addition to the Affinity, Weight, and Time Decay metrics that were central to EdgeRank, some 100,000 individual weights were factored into the new algorithms. Facebook began using what we can call artificial intelligence to curate the pictures, videos, and news that users saw. This aggressive curation has raised concerns and increased scrutiny of Facebook and its algorithms’ impact on teens, democracy, and society.

Frances Haugen, a former Facebook employee, testified in October 2021 to Congress about the potential dangers of Facebook’s algorithms. Legal protections allowed her to present thousands of pages of data from Facebook research.[2] The new scrutiny has raised questions about how much Facebook, and other platforms like it, can operate opaque “black box” AI systems outside of regulatory oversight.

Summary

This post discussed how the hypertext protocols created an opportunity to gather useful data for advertisers and became money makers for web publishers and search engines. The Internet and the World Wide Web established the protocols for hypertext and webpages, allowing a dramatic increase in available content and, with it, ad space. The web’s click economy not only allowed users to “surf” the net but also collected information on those activities to be tallied and processed by artificial intelligence.

Subsequently, information on human actions, emotions, and sentiments was mined as part of a new means of economic production and wealth accumulation, based on advanced algorithmic and data science techniques used to gather and utilize behavioral data to predict and groom user behaviors.

Citation APA (7th Edition)

Pennings, A.J. (2021, Oct 14). Hypertext, Ad Inventory, and the Use of Behavioral Data. apennings.com https://apennings.com/global-e-commerce/hypertext-ad-inventory-and-the-production-of-behavioral-data/

Notes

[1] Zuboff, S. (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.

[2] Legal protections based on federal laws including the Dodd-Frank Act, a 2010 Wall Street reform law, and the Sarbanes-Oxley Act, a 2002 legal reaction to the Enron scandal give some protections to corporate “whistleblowers.”

Share

© ALL RIGHTS RESERVED



Anthony J. Pennings, Ph.D. is Professor at the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012 he was on the faculty of New York University, where he taught economics of media and a course on new technologies in advertising and PR. He keeps his American home in Austin, Texas, and has taught in the Digital Media MBA program at St. Edward’s University. He joyfully spent 9 years at the East-West Center in Honolulu, Hawaii.

The North-South Politics of Global News Flows

Posted on | October 3, 2021 | No Comments

“Free flow was at once an eloquent democratic principle and an aggressive trade position on behalf of US media interests.”
– Herman, E.S., and McChesney, R. W., The Global Media [1]

The “free-flow” of transborder news and data communications became a hot topic for international governance and politics in the 1970s after the US went off the global gold standard. It was the dawning of a new age of digital monetarism. Freeing the flow of capital from the constraints of individual nation-states (including the US since the New Deal) was one of the foremost global issues of the Reagan Administration in the 1980s, and news became a contentious part of the process. Major areas of disagreement emerged between the South’s Non-Aligned Movement (NAM) and the North (Group of 5) that would shape the future of the global economy.

This research summarizes and discusses the globalization of capital movement and the information and news that lubricates its transactional systems. An international information divide, literally a divide by national boundaries, needed to be transcended for this globalization to work. I examine the perspective of what has been called the “South” – the countries that inhabit the southern hemisphere (except for Australia and New Zealand) and their historic struggles with the more developed “North.” These countries organized into a ninety-member Non-Aligned Movement (NAM) and subsequently began to attack what they considered a new type of “neo-colonialism.” While quite different, almost all these countries were concerned about the growing strength and influence of corporate power from the First World “North” countries.[2]

Governments became increasingly concerned that the computerized society, with its international data flows, could affect their citizens and interfere with national security, cultural sovereignty, and economic success. One of the concerns was exemplified by the debates over what was called “Transborder Data Flow” (TDF).[3] TDF was first used in discussions on privacy protection by the OECD in June of 1974. Then in a subsequent OECD seminar in 1977, the definition expanded to include nonpersonal information. South countries also expressed concerns about social and cultural information flowing in from developed countries, while news about their countries often focused on natural disasters, political instability, and other topics that did not show them in a positive light.

The South called for both a New International Economic Order (NIEO) and a New World Information Order (NWIO), which would provide a collective voice and address these issues. Galtung and Vincent listed the NIEO’s five basic points. The first was better terms of trade for the Third World. Countries of the South wanted improved and/or decreased trade with the countries of the North. Tariffs and other restrictions in the First and Second Worlds were a significant concern, as was the tendency for the South to export raw materials to the North. In contrast, the opposite flow of trade tended to be value-added products such as cars, processed foods, military arms, and electronics. Second, South countries wanted more control over productive assets in their own countries. Capital, nature, labor, technology, and management of foreign corporate branches tended to elude local concerns.[4]

These countries wanted to set up locally controlled industries leading to “import-substitution,” replacing foreign-produced products with those made within the nation. Third, South countries wanted more Third World interaction, meaning increased South-South trade and economic and technical cooperation between developing countries. Fourth, they wanted more Third World counter-penetration, such as financial investment in “rich” countries. Lastly, they wanted more Third World influence in world economic institutions, such as the World Bank, the IMF, and UNCTAD, as well as in the policies and activities of transnational corporations. In 1974, the NIEO was adopted by a special session of the United Nations General Assembly.

These concerns were followed up by calls for a “New World Information and Communication Order” (NWICO), an important but largely rhetorical attack on the global news media, particularly the newswires like Associated Press. In 1976, UNESCO (United Nations Educational, Scientific, and Cultural Organization) established what later became known as the MacBride Commission, after its chair, Sean MacBride, from Ireland. The commission was charged with studying global communications. The commission’s report, Many Voices, One World (MacBride Commission 1980/2004), outlined the main international problems in communication and summarized NWICO’s primary philosophical thrust. Two years later, UNESCO passed the Mass Media Declaration that spelled out the ethical and professional obligations of the mass media and its journalists. After the MacBride Commission “vaguely” endorsed the NWICO in 1980, UNESCO passed a resolution calling for the elimination of international media imbalances and stressing the importance of news serving national development.[5]

These issues are relevant because they foreshadowed problems inherent in the internationalization of the Internet. They are also indicative of the increasing tensions building between North and South as the North attempted to use the “Third World debt crisis” to institute a set of structural reforms designed to open these countries to the flows of money-capital, data processing, and finance-related news. Going off gold and the oil crises of the 1970s drove most of these countries heavily into debt, leaving them vulnerable to pressure from the North. The Reaganomic response was swift and effective.

The MacBride Report reflected UNESCO’s traditional concerns with the “free flow” of information and calls for a “plurality” of communication channels, but it was released at a time when the new Reagan and Thatcher governments were setting out their own agendas for an international order of communications and capital flows. In the wake of international money’s new demands for information and news, they wanted to maintain a strong neoliberal stance on international communication. Despite strong international opposition, the US withdrew from UNESCO and stopped paying membership dues to the United Nations.

But the primary strategy was a structural adjustment of domestic economies to open them up to money-capital and news. By making additional lending subject to the scrutiny of the International Monetary Fund (IMF), the North pressured the South to liberalize their economies and protect the free flow of information moving through their borders. Thus, in conjunction with the financial industry’s need for international data communications, the debt crisis allowed the North to pave the way for the globalization of news and, eventually, the Internet.

Consequently, the main thrust of this research argues that the road for the international Internet and e-commerce was substantially paved by the attempts to free up the global flows of financial news and information needed for the new regime of digital monetarism. Share markets were opened to international investments, and governments were pressured to privatize public utilities and other government assets. A new era of spreadsheet capitalization was emerging that allowed for inventorying and valuing of assets. Turning these assets into tradeable securities was heavily reliant on information and news flows. News became a contentious issue during the 1970s, especially for the “Third World,” which tied it to other issues of economic and informational importance.

This post argues that international flows of information and news were substantially altered in the late 1970s and early 1980s. Freeing the flow of capital from the constraints of individual nation-states (including the US government) was the foremost international issue of the Reagan Administration outside of the Cold War. The securitization of assets required information and news to adequately price and sell on global sharemarkets. Reagan’s tax cuts became the new foreign aid as the US deindustrialized, and capital flows created a global system of digital finance supply chains. By the 1990s, the digital global system had entrenched itself, and a condition of pan-capitalism developed, with South countries becoming “emerging markets” in the global order.[6]

Notes

[1] Herman, E.S., and McChesney, R. W. (1997) The Global Media: The New Missionaries of Global Capitalism. London: Cassell. p. 17.
[2] Chilote, R.H. (1984) Theories of Development and Underdevelopment. Boulder, CO: Westview Press.
[3] Turn, R. (1980) “An Overview of Transborder Data Flow Issues,” in Proceedings of the 1980 IEEE Symposium on Security and Privacy. Oakland, CA, USA. p. 3. doi: 10.1109/SP.1980.10010
https://doi.ieeecomputersociety.org/10.1109/SP.1980.10010
[4] Galtung, J. and Vincent, R. (1992) Global Glasnost: Towards a New World Information and Communication Order. NJ: Hampton Press.
[5] Jussawalla, M. (1981) Bridging Global Barriers: Two New International Orders. Papers of the East-West Communications Institute. Honolulu, Hawaii.
[6] Tehranian, M. (1999) Global Communication and World Politics: Domination, Development, and Discourse. Boulder, CO: Lynne Rienner Publishers. p. 83.




Anthony J. Pennings, PhD is Professor at the Department of Technology and Society, State University of New York, Korea. Originally from New York, he started his academic career at Victoria University in Wellington, New Zealand, before returning to New York to teach at Marist College and spending most of his career at New York University. He has also spent time at the East-West Center in Honolulu, Hawaii. When not in the Republic of Korea, he lives in Austin, Texas.

ARPA and the Formation of the Modern Computer Industry, Part 2: Memex, Personal Computing, and the NSF

Posted on | September 26, 2021 | No Comments

With World War II winding down, President Roosevelt asked Vannevar Bush, his “czar” of all federally funded scientific research, for a set of recommendations on the application of the lessons learned during the war. The President was particularly interested in how the scientific and technological advances achieved in the war effort could improve issues like national employment, the creation of new industries, and the health of the nation’s population. This post looks at Bush’s contribution to the formation of the National Science Foundation and computing developments that led to interactivity and networking.

Bush managed the Office of Scientific Research and Development under Roosevelt and later became President Truman’s science advisor. While not actually stationed in New Mexico, Bush had the overall supervisory responsibility for building the first atomic bomb and was uniquely positioned to understand the new technologies and scientific advances coming out of the war. As a result, Bush took up the President’s challenge and wrote two articles that would have far-reaching consequences.

Bush’s articles provided the rationale for funding a wide range of technological and scientific activities and inspired a new generation of researchers. His first written response, “Science, the Endless Frontier,” led to the formation of the National Science Foundation (NSF). The NSF provided widespread funding for scientific and technological research throughout the country’s universities and research institutes.[1] In the mid-1980s, it would take control of the non-military part of the ARPANET and link up supercomputing centers in response to the Japanese economic-technological threat. The NSFNET, as it was called, would also standardize the TCP/IP protocols and lead to the modern Internet.

Bush’s second response, “As We May Think,” was published in the Atlantic Monthly in July 1945, just a few months after Hitler committed suicide and a month before Bush’s atomic bombs were dropped on Japan. The article received lukewarm attention at first, but it persisted and inspired many people, including J.C.R. Licklider, who pursued a vision of computing interactivity and communications based on Bush’s ideas.

In a war that increasingly turned to science and technology to provide the ultimate advantage, Bush’s responsibilities were crucial to the outcome of World War II. These burdens also placed him in a troublesome position of needing to read, digest and organize an enormous amount of new scientific information. This responsibility led him to develop and forward the idea of an information technology device known as the “memex,” something he had been working on in the late 1930s while he was a professor and the Vice-President of MIT.[2]

The memex is arguably the model for the personal computer and was a distinct vision of man-machine interactivity that motivated Licklider’s interest in time-sharing technologies and networking. Bush’s conception of a new device aimed at organizing and storing information at a personal level led to a trajectory of government-sponsored research projects that aimed to realize his vision. In 1960, Licklider, a lecturer at MIT, published “Man-Computer Symbiosis,” a theoretical article on real-time interactive computing.[3] Licklider, a psychologist by training, later moved to the Department of Defense’s Advanced Research Projects Agency (ARPA) in 1962 to become the first director of its Information Processing Techniques Office (IPTO).

It was the intersection of his vision with the momentum of the Cold War that led to the fruition of Bush’s ideas, largely through the work of Licklider. The first timesharing systems were constructed at MIT with funding from ARPA, as well as the Office of Naval Research. Constructed over the years 1959 to 1962, these efforts led to a working model called Compatible Time-Sharing System (CTSS). Using the new IBM 7090 and 7094 computers, CTSS proved that the time-sharing concept could work, even though it only linked three computers.

The military later supplied MIT with a $3 million grant to develop man-machine interfaces. By 1963 Project MAC, as it was called, connected some 160 typewriter consoles throughout the campus and in some faculty homes, with up to 30 users active at any one time. It allowed for simple calculations, programming, and eventually what became known as word processing. In 1963 the project was funded again and expanded into a larger system called MULTICS (Multiplexed Information and Computing Service), with Bell Labs also collaborating in the research. MULTICS demonstrated the capacity to handle 40-50 users, use cathode ray tube (CRT) graphic devices, and accommodate users who were not professional programmers.[4]

As the cases of computing and timesharing show, the military-industrial tie drove early computing developments and created the foundation for the Internet to emerge. Funding for a permanent war economy introduced the information-processing regime to the modern world. In conjunction with research institutes like MIT, MITRE, and RAND, and corporations such as IBM and GE, as well as the Bell System, IT got its start.

Licklider’s notion of an “Inter-Galactic Computer Network” began to circulate as a vague idea through a like-minded group of computer scientists who were beginning to see the potential of connected computers. The IPTO was beginning to seed the literal invention of computer science as a discipline and its establishment in universities around the country. In Licklider’s memo of April 25, 1963, he addressed the “members and affiliates” of the network that had coalesced around his vision, and the money of ARPA. His concern was that computers should be able to communicate with each other easily and provide information on demand. The project was posed in terms of cross-cultural communications. The concept helped ARPA change its focus from what went on inside the computer to what went on between computers.

The technology was not quite there yet, but the expertise was coming together that would change computing and data communications forever. Using military money, Licklider began supporting actual projects to create computer technologies that expanded Bush’s vision. A little-known corporation called Bolt, Beranek, and Newman (BBN) was one of the most significant to come out of a new complex of agencies and companies working on computing projects. The bond between this small corporation, MIT, and ARPA produced a packet-switched network that became the precursor to today’s modern Internet.

In conjunction with the National Science Foundation, ARPA pursued human-computer interactivity and subsidized the creation of computer science departments throughout the country. It funded time-sharing projects and the first packet-switching technology, which would become the foundational technology of the Internet.

Notes

[1] Bush stayed with the government throughout the 1940s directing science funding and then becoming the first head of the National Science Foundation after it was established in 1950.
[2] Information on Bush’s early conception of the memex from M. Mitchell Waldrop’s (2001) The Dream Machine: J.C.R. Licklider and the Revolution that Made Computing Personal. New York: The Penguin Group. Particularly useful is the second chapter that focuses on Bush.
[3] Licklider, J.C.R. (1960) “Man-Computer Symbiosis,” IRE Transactions on Human Factors in Electronics. March.
[4] Denicoff, M. (1979) “Sophisticated Software: The Road to Science and Utopia,” in Dertouzos, M.L. and Moses, J.(1979) The Computer Age: A Twenty Year View. Cambridge, Massachusetts: The MIT Press. p. 370-74.




Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at St. Edward’s University in Austin, Texas. His first academic job was at Victoria University in Wellington, New Zealand. Most of his career was at New York University. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii, which also supported his Ph.D.

Engineering the Politics of TCP/IP and the Enabling Framework of the Internet

Posted on | September 22, 2021 | No Comments

The Internet was designed with a particular architecture – an open system that would accept any computer and connect it to any other computer. A set of data networking protocols allowed any application on any device to communicate through the network to another application on another device. Email, web files, text messages, and data from sensors could be sent quickly over the Internet without using significant power or other resources from the device. The technical “architecture” of the Internet was designed to empower the network’s edges – the users and their hardware. Its power has been borne out as those edges are no longer just large mainframes and supercomputers but have come to include new devices like PCs, laptops, smartphones, and the tiniest of sensors in the emerging Internet of Things (IoT).

This post explores the “political engineering” of the Internet protocols and the subsequent policy framework for a national and global data communications network that empowered users and created an open environment for competition, social interaction, and innovation. This system has been challenged over the years by programmatic advertising, oligopolistic ISPs, security breaches, and social media. But it’s still a powerful communications system that has changed commerce, entertainment, and politics worldwide.

The Power of Protocols

What gives communicative power to the Internet’s architecture are the protocols that shape the flows of data. With acronyms like TCP, IMAP, SMTP, HTTP, FTP, as well as UDP, BGP, and IP, these protocols formed the new data networks that would slowly become the dominant venue for social participation, e-commerce, and entertainment. These protocols were largely based on a certain philosophy – that computer hosts should talk to computer hosts, that networks were unreliable and prone to failure, and that hosts should confirm with other hosts that the information was passed to them successfully. The “TCP/IP suite” of protocols emerged to enact this philosophy and propel the development of the Internet.[1]

TCP/IP protocols allow packets of data to move from application to application, or from web “clients” to “servers” and back again. They gather content, such as keystrokes, from an application and package it for transport through the network. Computer devices use TCP to turn information into packets of data – 1s and 0s – sent independently through the network using the Internet Protocol (IP). Each packet has the address of its destination, the address of its source, and the “payload,” such as part of an email or video.
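The idea of a packet carrying a source address, a destination address, and a payload can be sketched in a few lines of Python. The field layout below is purely illustrative – it is not the real IP header format, just a way to see the three parts described above:

```python
import struct
from dataclasses import dataclass

@dataclass
class Packet:
    src: str        # originating address, dotted-quad form
    dst: str        # destination address
    payload: bytes  # part of an email, video, etc.

    def serialize(self) -> bytes:
        # Pack each dotted-quad address into 4 bytes, then a 2-byte
        # payload length, then append the payload itself.
        src_bytes = bytes(int(octet) for octet in self.src.split("."))
        dst_bytes = bytes(int(octet) for octet in self.dst.split("."))
        header = struct.pack("!4s4sH", src_bytes, dst_bytes, len(self.payload))
        return header + self.payload

p = Packet(src="192.168.0.2", dst="93.184.216.34", payload=b"Hello")
wire = p.serialize()  # 10 header bytes followed by the 5-byte payload
```

Real IP headers carry more fields (version, TTL, checksum, and so on), but the source/destination/payload structure is the same.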

The nodes in the network “route” the packets to the computer where they are headed. Destinations have IP addresses that are included in routing tables, which are regularly updated in routers across the Internet. This involves some “handshaking” – acknowledging the connections and packets received between what we have variously called the edges, devices, hosts, applications, or processes.

More specifically, a “process” on an application on one device talks to a “process” on an application on another device. So, for example, a text application like Kakao, Line, WhatsApp, or WeChat communicates to the same application on another device. Working with the device’s operating system, TCP takes data from the application and sends it into the Internet.

There, it gets directed through network routers to its final destination. The data is checked on the other side, and if mistakes are found, the receiving host requests that the data be sent again. IMAP and SMTP retrieve and move email messages through the Internet, and most people will recognize HTTP (Hypertext Transfer Protocol) from accessing web pages. This protocol requests a file from a distant server, sets up a connection, and then terminates that connection when the files download successfully. Connecting quickly to a far-off resource, sometimes internationally, and being able to sever the link when finished is one of the features that makes the web so successful.
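The request HTTP sends when fetching a page is just structured text. A rough sketch of building those request bytes (the host name here is only an example) shows the request line, the headers, and the blank line that ends them – with “Connection: close” asking the server to sever the link when the file is delivered:

```python
# Build the raw request an HTTP/1.1 client sends when fetching a page.
def build_get_request(host: str, path: str = "/") -> bytes:
    lines = [
        f"GET {path} HTTP/1.1",   # request line: method, path, version
        f"Host: {host}",          # which site on the server we want
        "Connection: close",      # terminate the connection after the response
        "",                       # blank line marks the end of the headers
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

req = build_get_request("apennings.com", "/")
```

A browser sends bytes like these over a TCP connection and then reads back the status line, headers, and HTML file.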

HTTP is at the center of what has been called the World Wide Web (WWW). Mostly called the “web” these days, it combined the server with the “browser” to provide a powerful new utility – the website. Hypertext Markup Language (HTML) enabled the browser to present text and images on a 2-D color screen. The WWW empowered the “dot.com” era and allowed many people to develop computer skills to produce websites. Every organization had to have an online “presence” to remain viable, and new organizations were started to take advantage of the fantastic reach of the web. Soon, server-side software empowered a myriad of new possibilities on the net, including browser-based email, e-commerce, search, and social media.

Devices connect to, or “access,” an Internet Service Provider (ISP) from a home or school connection, or via Wi-Fi at a café or on a public network in a train or park. Mobile subscriptions allow access to a wireless cell tower with a device antenna and SIM card. Satellite service is becoming more available, primarily through HughesNet, ViaSat, and increasingly SpaceX’s Starlink as more low-orbit satellites are launched. Starlink is teaming up with T-Mobile in the US to connect smartphones directly to the low-orbit satellite network.

Physical media also make a difference in good Internet access by providing the material connection to the ISP. Various types of wires and fiber optic cables, or combinations of them, provide the critical “last mile” connection from the campus, home premises, or enterprise. Ethernet connections or wireless routers link edge devices to the modem and router supplied by your cable company or telco ISP.

Conceptually, the Internet has been divided into layers, sometimes referred to as the protocol stack. These are:

    Application
    Transport
    Network
    Link
    Physical
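A rough sketch can show how these layers cooperate: each one wraps, or “encapsulates,” the data handed down from the layer above with its own header. The header tags below are placeholders for illustration, not real protocol headers:

```python
# Each layer wraps the data from the layer above with its own header.
def encapsulate(app_data: bytes) -> bytes:
    segment  = b"TCP|" + app_data   # Transport layer adds port/sequence info
    datagram = b"IP|"  + segment    # Network layer adds src/dst addresses
    frame    = b"ETH|" + datagram   # Link layer adds local hardware addresses
    return frame                    # Physical layer sends these bits on the wire

frame = encapsulate(b"hello")  # what eventually travels over the physical medium
```

On the receiving side, the process runs in reverse: each layer strips its own header and passes the remainder up the stack.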

The Internet’s layered schematic outlasted the Open Systems Interconnection (OSI) model by offering a more efficient representation that simplified the process of developing applications. Layers help conceptualize the Internet’s architecture for instruction, service, and innovation. They visualize the services that one layer of the Internet provides to another using protocols and Application Programming Interfaces (APIs). They provide discrete modules that are distinct from the other levels and serve as a guideline for application development and for network design and maintenance.

The Internet’s protocol stack makes creating new applications easier because software needs to be written only for the applications at the endpoints (client and server), not for the network core infrastructure. Developers use APIs to connect to sockets, a doorway from the Application layer to the next layer of the Internet. Developers have some control over the socket interface software, with buffers and variables, but do not have to code for the network routers. The network itself remains neutral to the packets running through it.
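The socket “doorway” is visible in any language’s standard library. A minimal sketch in Python, using a loopback connection so the client and server endpoints run in one process, shows that the developer writes only endpoint code – the network in between is invisible:

```python
import socket
import threading

def echo_server(srv: socket.socket) -> None:
    # Accept one connection and echo back whatever arrives.
    conn, _addr = srv.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

# Server endpoint: bind a socket and listen. Port 0 lets the OS pick a free port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,)).start()

# Client endpoint: connect through the socket and exchange data.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
cli.sendall(b"hello, socket")
reply = cli.recv(1024)
cli.close()
srv.close()
```

Neither endpoint contains any routing logic; TCP handles delivery and verification underneath the socket interface.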

The Network layer is where the Internet Protocol (IP) does its work. At this layer, the packets are repackaged or “encapsulated” into larger packets called datagrams. These also have an address on them that might look like 192.45.96.88. Computers and networks only use numerical addresses, so they need the Domain Name System (DNS) when a destination is given as an alphabetical name like apennings.com.
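The name-to-number lookup can be illustrated with a toy resolver. The local table below is a stand-in for the real DNS, a distributed hierarchy of name servers, and the address shown is just the illustrative one from the text:

```python
import socket

# A hypothetical local table; real DNS distributes these mappings
# across a worldwide hierarchy of name servers.
dns_table = {"apennings.com": "192.45.96.88"}

def resolve(name: str) -> str:
    # Numerical addresses pass through unchanged; alphabetical
    # names must be looked up before IP can route to them.
    try:
        socket.inet_aton(name)  # raises OSError if not a dotted-quad address
        return name
    except OSError:
        return dns_table[name]

addr = resolve("apennings.com")   # "192.45.96.88"
```

In practice the operating system performs this lookup automatically whenever an application connects to a named host.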

Large networks have many possible paths, and the router’s algorithms pick the best routes for the data to move them along to the receiving host. Cisco Systems became the dominant supplier of network routers during the 1990s.
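The path-picking these routing algorithms perform can be sketched with Dijkstra’s shortest-path algorithm, a classic basis for link-state routing protocols. The cities and link costs below are invented for illustration:

```python
import heapq

# Hypothetical link costs between routers in a few cities.
links = {
    "SF":      {"Chicago": 3, "Denver": 2},
    "Denver":  {"Chicago": 2, "Dallas": 3},
    "Chicago": {"NYC": 2},
    "Dallas":  {"NYC": 4},
    "NYC":     {},
}

def shortest_path(src: str, dst: str) -> list[str]:
    # Dijkstra's algorithm: repeatedly settle the cheapest unvisited node.
    queue = [(0, src, [src])]  # (cost so far, current node, path taken)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, link_cost in links[node].items():
            heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return []  # destination unreachable

route = shortest_path("SF", "NYC")  # cheapest route through the example network
```

If a node on the chosen route failed, removing its links from the table and re-running the algorithm would yield a detour – the rerouting behavior packet-switched networks were designed for.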

Although the central principle of the Internet is the primacy of the end-to-end connection and verification – hosts talk to hosts and verify the successful movement of data – the movement of the data through the network is also critical. The network layer in the TCP/IP model transparently routes packets from a source device to a destination device. The job of the ISPs is to take the data encapsulated at the transport and network layers and transport it – sometimes over long distances via microwave towers, fiber optic cables, or satellites. The term “net neutrality” has emerged to protect the end-to-end principle and restrict ISPs from interfering with the packets at the network layer. If ISPs were allowed to examine data from the Application layer, they could alter speed, pricing, or even content based on different protocols.

The diffusion of the TCP/IP protocol was not inevitable. Computer companies like IBM, Honeywell, and DEC developed their own proprietary data communications systems. Telecommunications companies had already established X.25 protocols for packet-switched data communications, with X.75 gateway protocols used by international banks and other major companies. TCP looked like a long shot, but the military’s decision in 1982 to mandate it and the National Science Foundation’s NSFNET support secured momentum for TCP/IP. Then, in 1986, the Internet Activities Board (IAB) began to promote TCP/IP standards with publications and vendor conferences about its features and advantages. By the time the NSFNET was decommissioned in 1995, the protocols were well established.

The Philosophy of TCP

The military began to conceptualize the decentralized network as part of its defense against nuclear attack in the early 1960s. Conceived primarily by Paul Baran at RAND, packet-switching was developed as a way of moving communications around nodes in the network that were destroyed or rendered inoperable by attack. Packets could be routed around any part of the network that was congested or disabled. If packets going from San Francisco to New York City could not get through a node in Chicago, they could be routed around the Windy City through nodes in other cities. As networks were being considered for command and control operations, planners had to anticipate that computers would eventually be not only in fixed installations but also in airplanes, mobile vehicles, and ships at sea. The Defense Advanced Research Projects Agency (DARPA) funded Vint Cerf and others to create what became the TCP and IP protocols to connect them.

The Internet was also informed by a “hacker ethic” that emerged at MIT in the late 1950s and early 1960s as computers moved away from punch-cards and began to “time-share” their resources. Early hacking stressed openness, decentralization, and sharing information. In addition, hackers championed merit, digital aesthetics, and the possibilities of computers in society. Ted Nelson’s Computer Lib/Dream Machines (1974) was influential as the computer world moved to California’s Silicon Valley.

The counter-culture movement, inspired by opposition to the Vietnam War, was also important. Apple founders Steve Jobs and Steve Wozniak were sympathetic to the movement, and their first invention was a “blue box” device to hack the telephone system. Shortly after, the Apple founders merged hacktivism with the entrepreneurial spirit as they emphasized personal empowerment through technology in developing the Apple II and Macintosh.

The term “hacker” has fallen out of favor because computers are so pervasive and people don’t like to be “hacked” and have their private data stolen or vandalized. But the hacker movement started with noble intentions and continues to be part of web culture. [2]

Developing an Enabling Policy Framework

Although the Internet was birthed in the military and nurtured as an academic and research network, it was later commercialized with an intention to provide an enabling framework for economic growth, education, and new sources of news and social participation. The Clinton-Gore administration was looking for a strategy to revitalize the struggling economy. “It’s the Economy, Stupid” was their mantra in the 1992 campaign that defeated President George H.W. Bush, and they needed to make good on the promise. Conceptualizing data networks early on as “information highways” framed them as infrastructure and earned the information and telecommunications sectors both government and private investment.

Initially, Vice-President Gore made the case for “information highways” as part of the National Information Infrastructure (NII) plan and encouraged government support to link up schools and universities around the US. He had been supporting similar projects as one of the “Atari Democrats” since the early 1980s, including the development of the NSFNET and the supercomputers it connected.

As part of the National Information Infrastructure (NII) plan, the US government handed over interconnection to four Network Access Points (NAPs) in different parts of the country. They contracted with big telecommunications companies to provide the backbone connections. These allowed ISPs to connect users to a national infrastructure, provide new e-business services, link classrooms, and create electronic public squares for democratic debate.

The US took an aggressive stance in both guiding the development of the Internet and pressing that agenda around the world. After the election, Gore pushed the idea of a Global Information Infrastructure (GII) designed to encourage competition both in the US and globally. This offensive resulted in a significant decision by the World Trade Organization (WTO) that reduced tariffs on IT and network equipment. Later, the WTO encouraged the breakup of the national post, telegraph, and telephone administrations (PTTs) that dominated national telecommunications systems. The Telecommunications Act of 1996 and the administration’s Framework for Global E-Commerce were additional key policy positions on Internet policy. The result of this process was essentially the global Internet structure that gives us relatively free international data, phone, and video service.

Summary

As Lotus and Electronic Frontier Foundation founder Mitch Kapor once said: “Architecture is politics.” He added, “The structure of a network itself, more than the regulations which govern its use, significantly determines what people can and cannot do.” The technical “architecture” of the Internet was primarily designed to empower the network’s edges – the users and their hardware. Its power has been borne out as those edges are no longer just large mainframes and supercomputers but laptops, smartphones, and the tiniest of sensors in the emerging Internet of Things (IoT). Many of these devices have as much processing power as, or more than, the computers the Internet was invented and developed on. The design of the Internet turned out to be a unique project in political engineering.

Citation APA (7th Edition)

Pennings, A.J. (2021, Sep 22). Engineering the Politics of TCP/IP and the Enabling Framework of the Internet. apennings.com. https://apennings.com/telecom-policy/engineering-tcp-ip-politics-and-the-enabling-framework-of-the-internet/

Notes

[1] Larsen, Rebekah (2012) “The Political Nature of TCP/IP,” Momentum: Vol. 1 : Iss. 1 , Article 20.
Available at: https://repository.upenn.edu/momentum/vol1/iss1/20

[2] Levy also described more specific hacker ethics and beliefs in chapter 2, Hackers: Heroes of the Computer Revolution. These include openness, decentralization, free access to computers, and world improvement and upholding democracy.




Anthony J. Pennings, PhD is Professor at the Department of Technology and Society, State University of New York, Korea. Originally from New York, he started his academic career at Victoria University in Wellington, New Zealand, before returning to New York to teach at Marist College and spending most of his career at New York University. He has also spent time at the East-West Center in Honolulu, Hawaii. When not in the Republic of Korea, he lives in Austin, Texas.

ARPA and the Formation of the Modern Computer Industry, Part I: Transforming SAGE

Posted on | September 12, 2021 | No Comments

In response to the Russian Sputnik satellites launched in late 1957, US President Dwight D. Eisenhower formed the Advanced Research Projects Agency (ARPA) within the Department of Defense (DoD). As the former leader of the Allied forces during D-Day and the invasion of the European theater, he was all-too-aware of the problems facing the military in a technology-intensive era. ARPA was created, in part, to research and develop high technology for the military and bridge the divide between the Air Force, Army, Marines, and Navy.

Under pressure because of the USSR’s continuous rocket launches, the Republican President set up ARPA despite considerable Congressional and military dissent. Although it scaled back some of its original goals, ARPA went on to subsidize the creation of computer science departments throughout the country, funded the Internet, and consistently supported projects that enhanced human/computer interactivity.

Forming ARPA

Headquartered in the Pentagon, ARPA was established to develop the US lead in science and technology innovations applicable to the military and help it respond quickly to any new challenges. Eisenhower was suspicious of the military and its industrial connections. However, he did believe in basic research and appointed a man with similar notions, Neil McElroy, the head of Procter & Gamble, as his Secretary of Defense. McElroy pushed his vision of a “single manager” for all military-related research through Congress. Despite objections by the heads of the various armed forces, Eisenhower sent a request to Congress on January 7, 1958, for startup funds to create ARPA and appointed its director, a vice-president from General Electric. Shortly after, Congress appropriated funds for ARPA as a line item in an Air Force appropriations bill.[1]

Roy Johnson came to head ARPA from GE, dreaming of human-crewed space stations, military moon bases, orbital weapons systems, global surveillance satellites, and geostationary communications satellites. But by the end of ARPA’s first year, Eisenhower had established NASA, dashing Johnson’s space fantasies. Space projects moved to the new civilian agency or back to the individual military services, including the covert ones like those of the CIA’s spy planes and satellites. ARPA desperately searched for a new mission and argued effectively for going into “basic research” areas that were considered too “far out” for the other services and agencies.

With the Kennedy Administration taking office and its appeal for the nation’s “best and brightest” to enter government service, ARPA found its prospects improving. It looked aggressively for talent to develop the best new technologies. Behavioral research, command and control, missile defense, and nuclear test detection were some of the newest projects taken on by ARPA, although not necessarily “basic” research. The new agency also got increasingly involved with computers, especially after Joseph Carl Robnett “JCR” Licklider joined the staff in October 1962.[2]

ARPA’s Information Processing Techniques Office (IPTO)

The IPTO emerged in the early 1960s with the charge of supporting the nation’s advanced computing and networking projects. Initially called the Office of Command and Control Research, its mandate was to build on the knowledge gained from researching and developing the multi-billion-dollar SAGE (Semi-Automatic Ground Environment) project and extend it to other command and control systems for the military.

SAGE was a joint project by MIT and IBM with the military to computerize and network the nation’s air defense system. It linked a wide array of radar and other sensing equipment throughout Canada and the US to what was to become the Colorado-based NORAD headquarters. SAGE was meant to detect aircraft (bombers and later ICBMs) coming over the Arctic to drop nuclear bombs on Canada and the US. The “semi-automatic” in SAGE meant that humans would be a crucial component of the air defense system, and that provided an opening for Licklider’s ideas.

SAGE consisted of some 50 computer systems located throughout North America. Although each was a 250-ton monster, SAGE computers had many innovations that further sparked the dream of man-machine interactivity. These included data communications over telephone lines, cathode ray terminals to display incoming data, and light pens to pinpoint potential hostile aircraft on the screen. ARPA’s IPTO helped transform SAGE innovations into the modern IT environment.

From Batch to Timesharing

Throughout the 1960s, three directors at IPTO poured millions of dollars into projects that created the field of computer science and got computers “talking” to people and to each other. Licklider had the Office of Command and Control Research changed to Information Processing Techniques Office (IPTO) when he moved from BBN to ARPA to become its first director. Licklider was also from MIT, but what made him unusual was that he was a psychologist amongst a majority of engineers. He got his Ph.D. from the University of Rochester in 1942 and lectured at Harvard University before working with the Air Force. Foremost on his agenda was to encourage the transition from “batch processing” to a new system called “timesharing” to promote a more real-time experience with computers, or at least a delay measured in seconds rather than hours or days.

These new developments meant the opportunity for new directions, and Licklider would provide the guidance and the government’s cash. During the mid-1950s, Licklider worked on the SAGE project focusing mainly on the “human-factors design of radar console displays.”[3] From 1959 to 1962, he was a Vice-President for BBN, overseeing engineering, information systems, and psycho-acoustics projects. He was also involved in one of the first time-sharing experiments at BBN with a DEC PDP-1 before taking a leave of absence to join ARPA for a year.[4]

Licklider swiftly moved IPTO’s agenda towards increasing the interactivity of computers by stressing Vannevar Bush’s ideas and the notion of a more personal and interactive computing experience. An influential military project at MIT was the TX-2, one of the first computers to be built with transistors and a predecessor to the PDP line of computers. It also had a graphics display, unlike most computers that used punch cards or a teletypewriter. The TX-2 was located at MIT’s Lincoln Laboratories and had a major influence on Licklider. The brilliant psychologist would ride the waves of Cold War grant monies and champion research and development for man-machine interactivity, including a radical new computer-communications technology called timesharing.

Early computer users submitted their requests and punch cards to a receptionist at a computer center. Then a team of computer operators would run several (or a “batch”) of these programs at a time. The results were usually picked up a day or two after submitting the requests. After Bell Labs developed transistor technology, individual transistors were wired into circuit boards, creating the “second generation” computer series. This new technology allowed vacuum tubes to be replaced by a smaller, cheaper, and more reliable technology and produced an exciting increase in processing speeds. Faster technology eventually led to machines that could handle several different computing jobs at one time – timesharing.

Time-sharing would allow several users to share a computer by taking advantage of the increasing processing speeds. It also used enhanced computer communications by allowing users to connect via teletype and later cathode-ray terminals. Rather than punching out programs on stacks of paper cards and submitting them for eventual processing, time-sharing made computing a more personal experience by making it more immediately interactive. Users could interact with a large mainframe computer via the teletypewriters originally used for telex communications and the cathode-ray tubes used in televisions.

Timesharing emerged from the MIT environment and its support by the US government. Sets of procedures used for timesharing originated at MIT after receiving an IBM 704 in 1957, a version of the AN/FSQ-7 developed for SAGE. John McCarthy, a Sloan Fellow from Dartmouth, recognized some possibilities of sharing the computer’s capabilities among several users. As the keyboard replaced punch cards and magnetic-tape-to-magnetic-tape communication as the primary source of data entry, it became easier for the new computers to switch their attention to various users.[5]

As its human users paused to think or look up new information, the computer could handle the requests of other users. Licklider pressed the notion of timesharing to increase the machine’s interactivity with humans, but the rather grandiose vision would not be immediately accepted throughout the military-related sphere of ARPA. Computing was still in a relatively primitive state in the early 1960s, but ARPA would soon be won over.
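The round-robin scheduling at the heart of timesharing can be sketched as a simple simulation. The user names and time-slice counts below are invented for illustration:

```python
from collections import deque

# Each job is (user, number of time slices it still needs).
jobs = deque([("alice", 3), ("bob", 1), ("carol", 2)])
order = []  # which user holds the processor during each slice

# Round-robin: give each user one slice, then rotate to the next.
# Because slices are short, every user sees a responsive machine.
while jobs:
    user, remaining = jobs.popleft()
    order.append(user)               # this user computes for one slice
    if remaining > 1:
        jobs.append((user, remaining - 1))  # unfinished jobs rejoin the queue
```

The processor is never idle while any user has work, and no single job monopolizes the machine – the contrast with batch processing that Licklider was pressing for.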

First on Licklider’s list was Systems Development Corporation (SDC), a RAND spin-off that had done most of the programming for the SAGE project. ARPA had inherited SDC, and a major part of the IPTO budget was set to help them transition from the SAGE air defense project to command and control computing. SDC had been given one of SAGE’s AN/FSQ-32 mainframes, but to Licklider’s chagrin, they used it for batch processing. Licklider thought it ridiculous to use it in this manner, where responses often took hours or even days to help a commander react to battle situations.[6] Licklider immediately went to work to persuade SDC to switch from batch processing to time-sharing, bringing in allied colleagues such as Marvin Minsky for seminars to cajole SDC.

Soon they were convinced, and Licklider moved on to other time-sharing projects, pouring ARPA money into like-minded projects at MIT and Carnegie Mellon. Luckily, he had joined ARPA the same month as the Cuban Missile Crisis. The event raised concerns about the ability of the President and others high on the chain of command to get effective information. In fact, Kennedy had been pushing for better command and control support in the budget, reflecting his concerns about being the Commander-in-Chief of a major nuclear power.

In the next part I will examine timesharing and the first attempts to commercialize it as a utility.

Notes

[1] Background on ARPA from Hafner, K. and Lyon, M. (1998) Where Wizards Stay Up Late. New York: Touchstone. pp. 20-27.
[2] A much more detailed version of these events can be found in a chapter called “The Fastest Million Dollars,” in Hafner, K. and Lyon, M. (1998) Where Wizards Stay Up Late. New York: Touchstone. pp. 11-42.
[3] Information on Licklider’s involvement with SAGE from Campbell-Kelly, M. and Aspray, W. (1996) Computer: A History of the Information Machine. Basic Books, pp. 212-213.
[4] Information on JCR Licklider’s background at BBN from the (2002) Computing Encyclopedia Volume 5: People. Smart Computing Reference Series.
[5] Evans, B.O. “Computers and Communications” in Dertouzos, M.L. and Moses, J.(1979) The Computer Age: A Twenty Year View. Cambridge, Massachusetts: The MIT Press. p. 344.
[6] A good investigative job on Licklider and SDC was done by Waldrop, M. Mitchell (2001) The Dream Machine: J.C.R. Licklider and the Revolution that Made Computing Personal. New York: The Penguin Group.




Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

ICT and Sustainable Development: Some Origins

Posted on | August 17, 2021 | No Comments

I teach a course called ICT for Sustainable Development (ICT4SD) every year. It refers to information and communications technologies (ICT) enlisted in the service of cities, communities, and countries to help them become economically and environmentally healthy. An important consideration for sustainability is that present activities don’t compromise the conditions or resources that will be needed by future generations. Sustainable Development (SD) is an offshoot of traditional “development,” which dealt primarily with national economies organizing to “take off” into westernized, pro-growth, industrial scenarios, with some consideration of the colonial vestiges they needed to take into account.

While development was also cognizant of the need to support agriculture, education, governance, and health activities, SD put a major focus on related environmental issues and social justice. (See Heeks) SD has been embraced by the United Nations (UN), which came out with seventeen Sustainable Development Goals (SDGs) that were adopted by all UN organizations in 2015.

SDGs

In this post, I briefly introduce ICT4D and its connection to SD, how it emerged, and why it is beneficial. Of particular importance are the economic benefits of ICT and recognizing them in the renewable energies so crucial to sustainable development.

ICT was not well understood by development economists and was largely ignored by funding agencies, except for telephone infrastructure. Literacy and education were early concerns. Book production, radio, and then television sets were monitored as crucial indicators of development progress. Telephones and telegraphs helped transact business over longer distances but were installed and managed by government agencies called Post, Telephone, and Telegraph (PTT) entities. PTTs found funding difficult and were challenging to manage, given their technical complexity and enormous geographical scope. Satellites were used in some countries like India and Indonesia and facilitated better mass communications as well as distance education and disaster management.

Most of the economic focus in “developing countries” was on the extraction and growing of various commodities, utilizing low-cost labor for manufacturing, or adding to the production processes of global supply chains. It was only when television and films became important domestic industries that “information products” were recognized economically in the development process.

New dynamics were introduced into development and economic processes with computerization and ICTs. I began my career as an intern on a National Computerization Policy program at the East-West Center in Honolulu, Hawaii. Inspired by the Nora-Minc Report in France, it was part of the overall emphasis on development at the Center’s Communications Institute. I had an office next to Wilbur Schramm, who was one of the most influential development pioneers with his Mass Media and National Development: The Role of Information in the Developing Countries (1964).[1]

With my mentor, Syed Rahim, I co-authored Computerization and Development in Southeast Asia (1987), which serves as a benchmark study in understanding the role of ICT in development. One objective of the book was to study the mainframe computers that were implemented, starting in the mid-1960s, for development activities. These “large” computers, some of them with merely 14K of RAM, were installed in many government agencies dealing with development activities: agriculture, education, health, and some statistical organizations. We also looked at what narratives were being created to talk about computerization at that time. For example, the term “Information Society” was becoming popular. Also, with the rise of the “microcomputer” or personal computer (PC), the idea of computer technology empowering individuals was diffusing through advertisements and other media.

Information economics opened up some interesting avenues for ICT4D and sustainable development. Initially, it was concerned with measuring different industrial sectors and how many people were employed in each area, such as agriculture, manufacturing, information, and services. Fritz Machlup wrote The Production and Distribution of Knowledge in the United States in 1962, showing that information goods and services accounted for nearly 30 percent of the U.S. gross national product. A major contributor to information economics, he concluded that the “knowledge industry” employed 43 percent of the civilian labor force.

Machlup was also a student of Ludwig von Mises, known today as the founder of the so-called “Austrian School of Economics.” But he was soon overshadowed by fellow “members” Friedrich von Hayek and Milton Friedman, and the resurgence of Von Mises himself. While this debate was primarily against mainstream Keynesian economics, it was also significant for development studies as these economists saw government activities as running counter to the dynamics of the market. The main nemesis of the Austrian school was socialism and government planning activities. While most developing countries were not communist countries, the Cold War was a significant issue that was playing out in countries worldwide.

The Austrian movement had a significant impact in the 1970s and 1980s. Transactions in the economy were seen as knowledge-producing activities, and these economists focused on the use of prices as communication or signaling devices in the economy. This led to a new emphasis on markets, and Hayek and Friedman both received Nobel Prizes in economics for their work.

For context, President Nixon had taken the US off the gold standard in August 1971, and the value of the US dollar dropped sharply as currency markets began to operate on market principles. It was also the time when the microprocessor was invented and computers were becoming more prominent. In 1973, Reuters set up its Money Monitor Rates, the first virtual market for foreign exchange transactions. It used computer terminals to display news and currency prices, charging banks both to subscribe to the prices and to post them. With the help of the Group of 5 nations, it brought order to international financial markets, especially after the Arab-Israeli War broke out in late 1973. The volatility of the war ensured the economic success of the Reuters technology, and currency markets have been digitally linked ever since.

Many development theorists by that time were becoming frustrated with the slow progress of capitalism in the “Third World.” Although the Middle East war was short, it resulted in increasing oil prices around the world. This was a major strain on developing countries that had bought into mechanized development and the “Green Revolution” of the 1960s, which emphasized petroleum-based fertilizers and pesticides. The Arab-dominated Organization of Petroleum Exporting Countries (OPEC) began an embargo of Western countries for their support of Israel, which refused to withdraw from the occupied territories. Oil prices increased by 70 percent, and the US suffered additional setbacks as it ended the war in Vietnam and inflation raged.

A split occurred between traditional development studies and market fundamentalists. British Prime Minister Margaret Thatcher and US President Ronald Reagan were strong advocates of the Austrian School. Both had been taken by Hayek’s The Road to Serfdom (1944) and stressed a pro-market approach to development economics. The IMF was mobilized to pressure countries to undergo “structural adjustment” toward more market-oriented approaches to economic development. The national post, telephone, and telegraph administrations (PTTs) were a primary target, and investment strategies were utilized to turn them into state-owned enterprises (SOEs), with parts sold off to domestic and international investors.

Researchers began to focus on the characteristics or “nature” of information. As economies became more dependent on information, more scholarship was conducted. It became understood that information is not diminished by use or by sharing, although its value certainly varies, often with time. The ability to easily share information by email and FTP created interest in network effects and the viral diffusion of information.
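The network effects mentioned here are often illustrated with Metcalfe’s heuristic: the potential value of a network grows roughly with the number of possible pairwise connections among its users. A minimal sketch, with purely illustrative figures:

```python
def potential_connections(users: int) -> int:
    """Distinct pairwise links among n users: n * (n - 1) / 2."""
    return users * (users - 1) // 2

# Doubling the user base roughly quadruples the possible connections
print(potential_connections(1_000))  # 499500
print(potential_connections(2_000))  # 1999000
```

This quadratic growth is one common way to formalize why shared information and connected users become more valuable as a network expands.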

These characteristics became particularly important after the development of the Internet, which quickly globalized. Vice-President Gore’s Global Information Infrastructure (GII) became the foundation for the World Trade Organization’s Information Technology Agreement (ITA) and the privatization of telecommunications services. Tariffs on information and communications technologies decreased significantly. Countries that had gotten into debt in the 1970s were pressured into selling off their telecommunications infrastructure to private interests, and they quickly adopted the TCP/IP Internet protocols.

Other studies focused on the efficiencies of production brought on by science and technology, specifically the reduction of the marginal cost of producing additional units of a product. Marginal costs have been a major issue in media economics because electronic, and then digital, technologies have steadily increased the efficiency of producing these types of products. Media products have historically had high initial production costs but decreasing marginal costs on the “manufacture,” or reproduction, of each additional unit.

Take books, for example. It is time-consuming to write a book, and the first physical copies are likely to be expensive, especially if only a small number are actually printed. But as traditional economies of scale are applied, each additional copy becomes cheaper. Electronic copies of books in particular have become very cheap to produce, and even to distribute over the Internet, although that hasn’t necessarily resulted in major price decreases.
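The book example can be sketched numerically. Assuming hypothetical figures (a $50,000 fixed cost to write and typeset, plus a per-copy printing cost), the average cost per copy falls toward the marginal cost as the print run grows:

```python
def average_cost(fixed_cost: float, marginal_cost: float, units: int) -> float:
    """Average cost per unit: fixed cost spread over the whole run,
    plus the constant per-unit (marginal) cost."""
    return fixed_cost / units + marginal_cost

# Hypothetical hardcover: $50,000 fixed, $5 to print each copy
for run in (100, 1_000, 10_000, 100_000):
    print(run, round(average_cost(50_000, 5.0, run), 2))

# A near-zero marginal cost e-book: average cost approaches zero
print(round(average_cost(50_000, 0.01, 1_000_000), 2))  # 0.06
```

The loop prints 505.0, 55.0, 10.0, and 5.5, showing the average cost converging on the $5 marginal cost; the e-book line shows why digital reproduction pushes costs toward zero even though fixed authoring costs remain.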

Digital outputs are unusual economic products. They have characteristics that make it difficult to exclude people from using them, and they are not used up in consumption. Microsoft faced this problem in the early days of the microcomputer, when it criticized computer hobbyists for sharing tapes of its computer programs. Later, its investment in the MS-DOS operating system, and subsequently Windows, paid off handsomely when it was able to sell them at enormous margins for IBM PCs and then “IBM compatibles” such as Acer, Compaq, and Dell. That is how Bill Gates became the richest man in the world (or one of them).

The issue of marginal costs has resonated with me for a long time, due to my work on media economics and what economists call “public goods.” In some of my previous posts, I addressed the taxonomy of goods based on key economic characteristics. Public goods, such as digital and media products, are misbehaving economic goods in that they are not used up in consumption and it is difficult to exclude people from using them. These writings examined what kinds of products are conducive to reduced marginal costs and what social systems are conducive to managing these different types of goods. Originally, the focus was on media products like film, radio, and television, but it later shifted to digital products like games and operating systems. Will these efficiencies apply to sustainable development?
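The taxonomy referred to above classifies goods along two conventional dimensions: rivalry (is the good used up in consumption?) and excludability (can non-payers be kept out?). A minimal sketch, using the standard textbook labels and illustrative examples:

```python
def classify_good(rival: bool, excludable: bool) -> str:
    """Standard two-by-two taxonomy of economic goods."""
    if rival and excludable:
        return "private good"          # e.g., a printed book
    if rival and not excludable:
        return "common-pool resource"  # e.g., ocean fisheries
    if not rival and excludable:
        return "club good"             # e.g., subscription software
    return "public good"               # e.g., broadcast television

# Digital and media products "misbehave": non-rival and hard to exclude
print(classify_good(rival=False, excludable=False))  # public good
```

Positioning a product in this grid is what links its technical characteristics to market structure: non-rival, non-excludable goods resist ordinary pricing and tend to require different institutional arrangements.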

Can the economics of media products apply to other products? More recently, sustainable technologies like solar and wind have been examined for their near-zero marginal costs. A major voice on this topic is Jeremy Rifkin, most noted for his book The Third Industrial Revolution (2011), which stresses the importance of concurrent communications, energy, and transportation transitions. We have moved from an integrated political economy based on telephone/telex communications and carbon-combustion energy and transportation to one based on digital communications and clean energy. Two other books by Rifkin, The Zero Marginal Cost Society and The Green New Deal, are significant points of departure for sustainable development.

Sustainable development initiatives, by definition, look to economize and reduce costs for the future. It is therefore important to analyze the characteristics of economic goods and their social implications, as this understanding informs market structure and the appropriate types of regulation.

ICT4D has struggled to claim a strong narrative and research stake in the trajectory of development. The Earth Institute’s ICTs for SDGs: Final Report: How Information and Communications Technology can Accelerate Action on the Sustainable Development Goals (2015) and the World Bank’s (2016) World Development Report were significant boosts for ICT4D, especially for economic development, and the move towards sustainable development.

Citation APA (7th Edition)

Pennings, A.J. (2021, Aug 21) ICT and Sustainable Development: Some Origins. apennings.com https://apennings.com/how-it-came-to-rule-the-world/digital-monetarism/ict-and-sustainable-development-marginal-costs/


Notes

[1] Schramm, W. (1964). Mass Media and National Development: The Role of Information in the Developing Countries. Stanford University Press.

[2] Sachs, J., et al. (2016). ICT & SDGs: How Information and Communications Technology Can Accelerate Action on the Sustainable Development Goals. The Earth Institute, Columbia University. Retrieved January 15, 2019, from https://www.ericsson.com/assets/local/about-ericsson/sustainability-and-corporate-responsibility/documents/ict-sdg.pdf

Ⓒ ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is Professor at the Department of Technology and Society, State University of New York, Korea. Originally from New York, he started his academic career at Victoria University in Wellington, New Zealand, before returning to New York to teach at Marist College and spending most of his career at New York University. He has also spent time at the East-West Center in Honolulu, Hawaii. When not in the Republic of Korea, he lives in Austin, Texas.
