Anthony J. Pennings, PhD


Russia and the Era of Pan-Capitalism

Posted on | October 9, 2020 | No Comments

The fall of the Berlin Wall and the disintegration of the Union of Soviet Socialist Republics (USSR) and its bloc meant the world was no longer sharply divided by Cold War antagonisms. Communist China had already embraced market dynamics and global trade, and a pan-capitalist condition of free flows of information and money was spreading globally. This post briefly discusses the breakup of the USSR and the globalization of digital capitalism.[1]

Although the center of the Communist bloc, the USSR had become deeply indebted to the global banking system, exerting additional pressure on an economy already addicted to military spending. The term "Eurodollar" reportedly derives from Soviet-controlled banks in Europe. After the Chinese Communist Revolution in 1949, the Soviets, wary of keeping their US dollar deposits within American reach, placed them in the Paris-based Banque Commerciale pour l'Europe du Nord and the Moscow Narodny Bank in London. The Paris bank's cable address happened to be "Eurobank," and as other European banks began trading these dollar deposits, the deposits purportedly took the name "Eurodollar" from that address.

As mentioned in my earlier work on digital monetarism, Eurodollars were the prime credit vehicle for recycling OPEC's petrodollars worldwide during the 1970s. Through syndicated lending, banks lent Eurodollars excessively to many nation-states, including those in the Soviet bloc.

The result of petrodollar recycling was the "Third World Debt Crisis," which created havoc throughout the 1970s and into the 1980s. Debt put pressure on public resources, which were often consolidated into state-owned enterprises (SOEs) and eventually sold off to investors. Excessive debt led to an era of imposed deregulation, liberalization, and privatization. Although painful, this process opened up the telecommunications world to the Internet and its World Wide Web applications.

The "official" start of the world debt crisis can be traced to the March 1981 announcement by Poland that it could not meet its payment commitments. Previously, banks had found it easy to reschedule old debt and lend debtor countries new Eurodollars. An "umbrella theory" circulated which held that the Soviet Union would guarantee the debts of the countries in its sphere. But the USSR was having economic problems that went unnoticed by the banks. The bloc still retained a high credit rating, and the banks continued to pour money into it.

By 1984, the Communist bloc was gathering significant debt and the economy was faltering, largely due to the drop in oil prices. Defecting spies were reporting that the USSR was a mess. Workers were unmotivated because the store shelves were empty. Lines to purchase scarce goods were everywhere. Nearly half the Russian economy was devoted to military spending, and the other half to producing shoddy and scarce consumer goods, planned and designed by Communist committees.

With Reagan's "Star Wars" missile-defense initiative, Communist Party General Secretary Mikhail Gorbachev knew that the USSR could not keep up with the capitalist world's innovation and spending. He pleaded with Ronald Reagan at the Reykjavík Summit of 1986 to give up the militarization of space and instead work to reduce nuclear weapons. But Reagan refused. So Gorbachev instead began a public relations campaign to encourage more debate about the USSR and its political and economic future.

In June of 1989, Gorbachev made the call to Poland to tell its Communist Party leaders to accept the results of their democratic election. The decision accelerated the processes of glasnost and perestroika throughout the Soviet system. The first term meant political openness – media freedom, democratization of power, and the release of political prisoners. The second meant gradually allowing entrepreneurial activities, reforming state-owned industries, and privatizing government assets.

Deng Xiaoping had already started to reform Communism in China during the 1980s with his “socialism with Chinese characteristics.” Deng and other post-Mao Communist leaders argued that “China had mistakenly entered into Communism during its feudal stage instead of waiting until advanced capitalism, as Marx had theorized. Private ownership and a market economy were suddenly embraced as solutions, not problems. This allowed the Chinese Communist Party to legitimize both its turn to market capitalism and the continuance of its political control over the country through Marxist ideology.”[2]

The fall of the Berlin Wall in 1989 began the process of dismantling the Warsaw Pact and, with it, the Soviet bloc. Founded in 1955, the Warsaw Pact was a defense treaty among Albania, East Germany, Poland, Hungary, Romania, Bulgaria, Czechoslovakia, and the Soviet Union. But after East Germany left, the other countries clamored to leave as well.

Czechoslovakia soon threw off Communist rule, while the Baltic states (Estonia, Lithuania, and Latvia) declared their independence from the USSR, along with the Republic of Belarus and Ukraine. Some joined Azerbaijan, Kazakhstan, Kyrgyzstan, Moldova, Turkmenistan, Tajikistan, and Uzbekistan to create the Commonwealth of Independent States (CIS).

President George H. W. Bush met with Gorbachev in early December 1989, just a month after Europe's "9/11" – the November 9 opening of the barriers between East and West Germany. Meeting in Malta, they resumed START negotiations on nuclear arms control and came to an agreement on how conventional forces would be drawn down in Europe. Gorbachev's decision to allow a multi-party system and presidential elections in Russia also began to destabilize Communist control and contributed to the collapse of the Soviet Union.

A coup was engineered by Communist hardliners in August of 1991, and although it failed, Gorbachev resigned by Christmas – but not before dissolving the Central Committee of the Communist Party of the Soviet Union and stepping down as its General Secretary. Also in 1991, the Soviet military relinquished control over the militaries of the member republics. Russia agreed to take on the USSR's outstanding debt, in excess of US$70 billion.

By promising glasnost and perestroika, Gorbachev changed the political dynamic of a dying system. It was the promise of a political and legal infrastructure for a democratic, market-based political economy integrated into the world system. The process accelerated in Russia with the election of Boris Yeltsin in 1991 as the first President of the new country. Yegor Gaidar, an economist known for pushing free markets, became the Prime Minister. Yeltsin worked with a group of opportunistic Russians to outmaneuver the Communist directors of the USSR economy and take control of major industries; many went on to become billionaires – the so-called "oligarchs."

In the first few months of 1992, the new government freed prices and legalized entrepreneurial activity in a process called "shock therapy" – rapid liberalization of the economy. By 1994, Yeltsin was working with Russian banks to raise cash to help privatize major companies. The Russian "loans-for-shares" program lent the government money in exchange for temporary shares in state-owned companies. When the government defaulted in 1995, the banks auctioned off major stakes in companies involved in aluminum, oil, nickel, and other important resources, as well as food production, telecommunications, and media.

The end of the Warsaw Pact signaled a new liberalization in Moscow and the former satellite countries of the USSR. However, the optimism was soon displaced by economic "shock therapy" – severe austerity and rapid privatization that crippled economic recovery. The resultant chaos led to the return of the Russian "strong man": Vladimir Putin became President of Russia at the end of 1999.


[1] I use the term digital capitalism here as many parameters operate to shape types of capitalism.
[2] Pennings, A. (2014, April 22). E-Commerce with Chinese Characteristics. Retrieved from



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

The International Politics of Domain Name Governance, Part One: The Czar

Posted on | October 1, 2020 | No Comments

An ongoing global concern deals with the control and management of the World Wide Web's Domain Name System (DNS). The basic issue is whether the address you put into your browser connects to the right computer and retrieves the right information. When you type in this blog's address, for example, how does the request get to my blog, and how can you be sure you are getting the right site? What if you typed in one address and were directed to an entirely different site? Or if you typed in a bookstore's address and were directed to Barnes and Noble or some other bookseller? These scenarios are possible if the domain name system is not managed correctly.

The Domain Name System (DNS) is a distributed directory service that matches website addresses to the right computers. As the Internet has grown exponentially and globally, governance and management issues continue to be complicated and contentious. What is at stake? In this post, I look at the beginnings of the DNS and the influence of Jon Postel.
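As a minimal sketch of what this matching looks like in practice – using only the Python standard library, with "localhost" as a stand-in for any real address – a program can ask the system's resolver to perform the lookup:

```python
import socket

def resolve(hostname):
    """Ask the system's DNS resolver for a hostname's IPv4 addresses."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # for IPv4, sockaddr is an (ip_address, port) pair.
    return sorted({info[4][0] for info in infos})

# "localhost" resolves locally, typically to 127.0.0.1:
print(resolve("localhost"))
```

Everything the rest of this post describes – registries, root files, top-level domains – exists so that this one lookup returns the *right* computer for every name, everywhere in the world.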

It was recognized early on that managing Internet addresses would be a global concern. Internet traffic was increasing domestically and across borders. Decentralization provided the technical and operational strategy to globalize the Internet and its World Wide Web (WWW). It would provide quicker responses and decrease network traffic congestion. Maintenance issues, including redundancy and backing up systems, were easier to manage. Globalization of the Internet, however, raised other issues.

Daniel Drezner identified three reasons to be concerned about the governance of the Internet. The first was that an "actor" such as a government, corporation, or NGO could take over the Internet; any actor that could benefit from controlling the connections between users and the sites they want to visit would certainly warrant scrutiny. Second, it was important for a legal system to be created to ensure that trademarked names were not captured and monopolized by "cybersquatters," who could withhold or exploit important trademarked names. Third, a lot of money was at stake in the creation of domain names. Little cost is involved in their production, so providing domain names is like printing money in some respects.[1]

So when you type in the address of the website you want to access, DNS makes sure you make the connection and find the right file. ARPANET, the original Internet that came to life in September 1969, first addressed the issue in the early 1970s. Jon Postel of the University of Southern California (USC) in Los Angeles took up the challenge and was eventually nicknamed "God" because of his power over the early Internet's addressing system. Postel started by writing addresses on scraps of paper and would continue managing them until a global network was established.

Postel's influence ranged from the addressing system's inception to his death in 1998. On March 26, 1972, Postel started collating a catalog of numerical addresses. He asked network administrators to submit information on socket numbers and network service activities at each host computer. He worked with the Stanford Research Institute (now SRI International) to develop a simple text file called HOSTS.TXT that tracked hostnames and their numerical addresses. Published as RFC 433 in December 1972, his list proposed a registry of port number assignments to network services. He also called himself the "czar" of socket numbers as he pledged to keep a list of all addresses. SRI would distribute the list to all Internet hosts.
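Before the DNS, that centrally maintained flat file was the entire directory. A toy illustration of the scheme (the hostnames and addresses below are invented for the example, not taken from the historical file):

```python
# A miniature stand-in for the ARPANET-era HOSTS.TXT scheme: a single,
# centrally distributed text file mapping each hostname to its address.
HOSTS_TXT = """\
10.0.0.51 SRI-NIC
10.0.0.73 USC-ISI
"""

def parse_hosts(text):
    """Build a hostname -> address lookup table from the flat file."""
    table = {}
    for line in text.splitlines():
        if line.strip():
            address, name = line.split()
            table[name] = address
    return table

hosts = parse_hosts(HOSTS_TXT)
print(hosts["SRI-NIC"])
```

Every host on the network had to fetch the updated file from SRI, which is exactly the scaling problem the hierarchical, delegated DNS described below was designed to solve.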

The Domain Name System (DNS) was primarily designed by Paul Mockapetris of the Information Sciences Institute at USC. It was adopted by ARPANET in 1984. The ARPA DNS originally consisted of six Top-Level Domain (TLD) types: .com (commercial), .edu (education), .gov (government), .mil (military), .net (network provider), and .org (organization). The designation of domain names below them was left to the discretion of the administrators of the various networks. As the Internet expanded globally, two-letter suffixes, such as .nl for the Netherlands, .nz for New Zealand, and .kr for South Korea, were allotted to individual countries. The first domain name was reportedly Symbolics.com, registered through the DNS on March 15, 1985.

In 1988, the U.S. government gave the DNS contract to USC's Information Sciences Institute (ISI). This gave Mockapetris and Postel the opportunity to continue working together and with SRI International. They continued the address-management functions in what became known as the Internet Assigned Numbers Authority (IANA), which continues to this day. IANA was funded by the U.S. government under a contract with the Defense Advanced Research Projects Agency (DARPA). At this time, the Internet started to expand rapidly in the U.S. and abroad.


[1] Drezner, D. (2004). The Global Governance of the Internet: Bringing the State Back In. Political Science Quarterly, 119(3), 477-498. doi:10.2307/20202392




Oliver Stone’s Platoon: Jesus Christ Superstar vs. the Marlboro Man

Posted on | August 12, 2020 | No Comments

In Oliver Stone's award-winning film, Platoon (1986), Charlie Sheen plays Chris Taylor, a "coming of age" infantry soldier trying to reconcile his identity between the influences of two sergeants in his US Army platoon. The setting is the Vietnam War circa late 1967. The sergeants, played by Willem Dafoe and Tom Berenger, were directed to represent two mythical poles of ideology and belief that have come to heavily influence American political culture. I refer to these two men and the contrasting themes they represent as "Jesus Christ Superstar" vs. "the Marlboro Man."

Platoon (1986) won Best Picture at the 1987 Academy Awards and received additional awards for Best Film Editing and Best Sound, along with a nomination for Best Cinematography. Oliver Stone won an "Oscar" for Directing and was nominated for Writing. Both actors playing the sergeants were nominated for Best Actor in a Supporting Role, which brings me back to the two conflicting myths.

With Sgt. Elias (Willem Dafoe) representing "Jesus Christ Superstar" and Sgt. Barnes (Tom Berenger) "the Marlboro Man," the movie condenses a series of meanings into the two contending perspectives. These viewpoints divided America and haunt its view of the war to this day. Barnes characterizes the tension succinctly at one point: "there's the way it ought to be, and there's the way it is." Barnes, who was shot seven times, including in the face, has the physical scars to represent "the way it is."

Jesus Christ Superstar was a rock opera released as an album in 1970, staged on Broadway, and adapted as a movie in 1973. The film was shot in Israel and other Middle Eastern locations and was the eighth highest-grossing film of that year. It reconciled different gospels of the Bible and focused on the relationship between Jesus, Judas, and Mary Magdalene, emphasizing betrayal and picking one's battles. It was in some ways an anthem of the time, as its rock and roll music and energy resonated with the "hippie" counterculture that emerged during the height of the Vietnam War. Jesus of Nazareth, with his long hair, certainly looked the part. Stone "paints" Elias and several other soldiers with iconography from the era. Peace symbols, headbands, drugs, and Sixties music like Jefferson Airplane's "White Rabbit" are used to represent this counterculture.

It's hard to portray Elias as a Jesus-like pacifist when he volunteered for three tours in the "Nam" and was a skilled and experienced soldier. But from the first scene, we see him carrying a machine gun on his shoulders like a cross and climbing up a mountainside like Jesus ascending Calvary. As Sergeant O'Neill from another squad says about Elias after an argument, "Guy's in three years and he thinks he's Jesus fuckin' Christ or something."

Elias is portrayed as the more sensitive leader. We next encounter him helping Chris and offering to carry much of the load from his amateurishly stuffed backpack. Most importantly, he is the voice of restraint when the platoon is searching a Vietnamese village for guns and ammunition. When Sgt. Barnes shoots a Vietnamese village woman during an interrogation, Elias confronts him and initiates a fistfight.

This scene creates a tension between the two, as Barnes faces a potential court-martial for the murder. The conflict culminates with Barnes shooting Elias during a battle with the Viet Cong. The shots don't kill him, though, and as the platoon is being evacuated by helicopters, he is sighted from the air being chased by Vietnamese troops. He is shot several times in the back but struggles to continue. Finally, as he falls to his knees, writhing in pain, a medium shot shows him with his arms outstretched and his gaze toward the heavens, as if he were being crucified.

The Marlboro Man was another iconic figure of the Vietnam era, the masculine symbol of the famous cigarette brand. Invented to subvert the early impression that Marlboro cigarettes were for women, the figure became the icon of rugged individualism and aggressive patriarchy. The first scene of Barnes shows him in the jungle with a pack of Marlboro cigarettes strapped to his helmet.

Barnes was clearly the leader of the platoon, as even the lieutenant deferred to his judgment. His first words in the movie were “Get a move on, boy” to Chris, in his Southern accent. He is regularly portrayed as the tough but competent, no-nonsense leader. At one point, while criticizing the pot smokers for what he calls their attempt to “escape reality,” he says, “I am reality.”

Oliver Stone served in Vietnam and was awarded the Bronze Star medal. The story was based roughly on his experience there. In Stone’s interview with Joe Rogan, he speaks to his respect for both sergeants. While Stone clearly favors Elias, his portrayal of Barnes is surprisingly sympathetic, and we see how both men influence Chris.

Chris arrives in Vietnam as a "cherry," a virgin to the war experience. But after he recovers from being shot during their first ambush, he befriends a black man named King and a "surfer dude" from California named Crawford. They are all assigned to cleaning the latrines, and the scene allows Chris to tell his story of why he quit college and enlisted in the Army. "I wasn't learning anything. I figured why should just the poor kids go off to war and the rich kids always get away with it?" The others laugh off his naivety but invite him to the Feel Good Cave, a bunker where they "party" by playing music and smoking pot.

King introduces Taylor as the resurrected "Chris" to the "heads," including soldiers played by Johnny Depp and Forest Whitaker. Elias is there smoking pot as well and welcomes Chris with a "hit" of marijuana blown through the barrel of a rifle. You can hear Grace Slick singing "feed your head" as Chris says he feels good and can't feel the pain from his injury. Elias responds, "feeling good is good enough."

Tom Berenger is masterful in his performance as Sgt. Barnes. While Elias is "partying" with the "stoners," Barnes is listening to country music and playing cards while drinking whiskey and beer with his group. Later, after Elias is dead, Barnes goes to the Feel Good bunker to confront Elias' friends in the platoon. With a bottle of Jack Daniel's Tennessee whiskey in hand, he goes on to criticize the recently departed Elias.

    Elias was full of shit. Elias was a Crusader. Now, I got no fight with any man who does what he’s told, but when he don’t, the machine breaks down. And when the machine breaks down, we break down. And I ain’t gonna allow that in any of you. Not one.

The scene ends with Chris attacking Barnes, who quickly subdues the young soldier. Barnes is convinced not to kill him, as he would face ten years in military prison for killing an enlisted man.

In a later battle, the platoon is overrun by the Viet Cong, and an airstrike is called in to bomb the camp. Barnes takes advantage of the chaos to try to kill Chris, but the sergeant is knocked out by the concussion of the bombing. Chris and Barnes barely survive. When Barnes asks Chris to get a medic, Chris shoots him in retaliation for Elias' death.

As Chris is airlifted from the battleground, his voice-over narrates an inner conflict:

    I think now, looking back, we did not fight the enemy; we fought ourselves. And the enemy was in us. The war is over for me now, but it will always be there, the rest of my days as I’m sure Elias will be, fighting with Barnes for what Rhah called possession of my soul. There are times since, I’ve felt like the child born of those two fathers. But, be that as it may, those of us who did make it have an obligation to build again, to teach to others what we know, and to try with what’s left of our lives to find a goodness and a meaning to this life.




Memes, Propaganda, and Virality

Posted on | July 31, 2020 | No Comments

Today we are faced with a new and potentially ominous form of manipulation, an insidious form of propaganda dissemination. I'm talking about the viral spread of memes. The "meme" has emerged as a powerful communication device in the modern world of social media such as Facebook, Instagram, and Twitter. A meme here refers to a digital image, usually a JPEG or PNG, with a short text caption that is easily posted and shared. Usually, memes imply a joke or political message that can be diffused quickly and widely to a broad audience.

In this post, I examine the proliferation of memes and their potentially damaging effect on political culture. I discuss the rhetoric of memes and particularly the viral spread of memes as a new form of propaganda. Propaganda utilizes elements of powerful media techniques to have specific effects on the political and social consciousness of individuals.

What is propaganda? One of my colleagues at New York University had an apt description. Media ecologist Neil Postman called propaganda an "intentionally designed communication that invites us to respond emotionally, immediately and in an either-or manner." Propaganda is the use of powerful rhetorical forms that work on the individual to energize, promote, stimulate, and shape ideologies. Propaganda can mobilize support for political action as well as pressure for the legislation and implementation of specific policies.

    Meme wars are a consistent feature of our politics, and they’re not just being used by internet trolls or some bored kids in the basement, but by governments, political candidates, and activists across the globe. Russia used memes and other social-media tricks to influence the US election in 2016, using a troll farm known as the Internet Research Agency to seed pro-Trump and anti-Clinton content across various online platforms. Both sides in territorial conflicts like those between Hong Kong and China, Gaza and Israel, and India and Pakistan are using memes and viral propaganda to sway both local and international sentiment. – Joan Donovan

What makes memes more insidious is that the propaganda is administered by some of a person's most trusted friends. The goal of a meme is to spread rapidly through a population – to go viral. This is a diffusion process in which a meme is shared from person to person or person to group, often traveling across relatively weak social ties. The goal is to reach a point of exponential growth in the meme's exposure and influence.

The success of virality depends on a high "pass-along rate," where individuals are motivated to share the meme with others. A common communication network facilitates virality, as a meme may be "retweeted" in the Twitter environment or "shared" on Facebook. It helps if the process is easy – clicking a button rather than cutting and pasting. This sets the bar low, but sharing decisions are made very quickly and depend on relatively weak motivations.

An important measure is the virality rate, the number of people who went on to share your meme compared to the number of unique views or impressions it had during a certain period. You can get this metric by dividing the number of total shares of the meme by the number of impressions. Multiply that figure by 100 to get your virality rate percentage. The higher the percentage, the better the meme.
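The calculation is simple arithmetic; as a quick sketch (the sample counts below are invented for illustration):

```python
def virality_rate(shares, impressions):
    """Percentage of unique impressions that turned into shares."""
    if impressions <= 0:
        raise ValueError("impressions must be a positive count")
    return shares / impressions * 100

# e.g. 250 shares out of 10,000 unique impressions:
print(virality_rate(250, 10_000))  # → 2.5
```

A rate of 2.5% would mean one in forty viewers passed the meme along; the higher the percentage, the better the meme.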

Memes can be created with a program like Adobe’s Photoshop or Adobe Spark. You can also use a specialized meme generator application on the Internet or your mobile phone, such as Word Swag.

Memes are designed to crystallize or fix sets of meanings that, as Postman argued, cause us to react quickly and emotionally. They draw on bits of culture, such as slogans, cartoon images, and brand items, that are small and easily remembered. They are packaged semiotically, with images and text juxtaposed in ways that invite us to construct more complex associations. They are usually structured enough to draw us into some preferred meanings, yet evocative enough to draw on the reader's string of cultural, economic, or political associations. Memes are usually vague enough to leave much for our imaginations to interject.

Memes are much like posters in that they remove authorship. The effect can be ominous, creating an anonymous yet authoritative “voice of God.” No one “has to answer for transgressive or hateful ideas.” Memes can weaponize half-truths, lies, and inappropriate material easily and diffuse them quickly through society.




Letteracy and Logos

Posted on | May 5, 2020 | No Comments

More than ever, people are interested in how visual design influences the production of meaning as well as its intellectual and emotional effects. Typography is the design and arrangement of letters and text so that the writing is easy to understand, appealing, and conveys an appropriate set of feelings and meanings. A logo is the graphic signature of a person or organization that is meant to encapsulate and communicate the preferred symbolic meanings of an organization.

Below is one of my favorite TED talks about typography.

This blog post discusses the importance of typography in visual design – such as in a magazine or webpage layout – as well as in the use of logos. A new type of literacy for the new media age has emerged that Seymour Papert (1993) and others began to call "letteracy." Papert was critical of the idea of introducing letters too early in a child's development but recognized that connecting with culture and history required alphabetical literacy.

"Letteracy" suggests a larger conversation about global visual culture and why people are increasingly interested in the impact of typography in our media world. A twist on "literacy," it points to the discrepancy between a world in which reading is pervasive and the relative ignorance of how letters are designed and how they influence us. One of the first questions to ask is "What are letters?"

Capturing Sound

Letters are phonographic – they code the sounds of language in scripted figures. A few writing systems, like Chinese characters, are ideographic: they code ideas into their figures. Phonographic writing has the advantage of coding everyday language in its letters while being flexible enough to incorporate new words. Ideographic writing requires extensive memorization and social mentoring to enforce meanings and consistency in sound reproduction.

Asian societies like Korea, and to a lesser extent Japan, have replaced Chinese characters with phonographic scripts. Korea instituted "Hangul," which is phonographic but with some iconic aspects: the characters represent the movements of the tongue and lips used to achieve those sounds. The change allowed Korea to achieve a high rate of reading literacy across the population. Japan has two sets of phonographic characters, hiragana and katakana. Both are sound-based, but each character represents a whole syllable – the vowel and the consonant. To make the situation a bit more complicated, the Japanese still use "Kanji," ideographic characters borrowed from China.

From Printing Press to Desktop Publishing

Johannes Gutenberg is credited with inventing both the printing press and the production of durable metal typefaces around 1450 AD. The technology had also been developed in China and Korea, but conditions in Europe were better for its expansion. Printing presses in China and Korea were state-based projects that eventually withered. Conversely, religious, market, and political conditions in Europe improved the technology's chances of success.

The first best-seller? The Christian Bible. In 1517, the Protestant Reformation began, emphasizing reading of the Bible over the services of the Catholic Church and its priests. It also helped Europe develop separate nation-states as people became more literate in their local languages. Printed materials in different dialects began to consolidate community identities. People began to identify with others who spoke the same dialect and to recognize them as sharing the same national values. Benedict Anderson called these "imagined communities" in his book of the same name.

Thanks to Steve Jobs and the Apple Macintosh graphical user interface (GUI), different typefaces were added to computers. Along with WYSIWYG ("what you see is what you get") display, the new GUI enabled desktop publishing. This democratized the printing "press." Consequently, understanding the importance of different styles of letters became an important literacy of the digital age.

Part of this literacy is an understanding of the various meanings associated with typography. Type fonts can be designed and used with various purposes in mind. The "Power of Typography" video above explains this in more detail.





Posted on | March 24, 2020 | No Comments

This post is the second in a series that I am producing during the COVID-19 pandemic about the importance of telecommunications policy in ensuring the widespread availability of affordable high-speed Internet access. Teaching online and working from home have gone from fringe activities to central components of life. As we move toward a Smart New Deal to transform American life, how we structure our digital environments will be central. This post discusses the transition from dial-up modems in the early days of the Internet to high-speed broadband connections. With that technical transition and the FCC's 2005 decision, the competitive environment that the Clinton-Gore administration built collapsed into the cable-telco duopoly we see today and made "net neutrality" an issue.

The Internet Service Providers (ISPs) mentioned in US Internet Policy, Part 1 facilitated the process of getting individuals, businesses, and government "online" by linking to the Internet backbone and going "retail" with dial-up modems and then high-speed broadband connections. The term "online" emerged as a way to distinguish data communications from telephony, which was highly regulated by the Communications Act of 1934. ISPs offered businesses and consumers high-speed data services for accessing the World Wide Web, hosting websites, and transferring large files (FTP). The key was accessing the rapidly expanding Internet packet-switching backbone network that had been developed by the National Science Foundation (NSF).

The National Science Foundation's backbone network (NSFNET) began data transmissions at 56 kilobits per second (kbit/s) but was upgraded to T1 lines in 1988, transmitting at 1.544 megabits per second (Mbit/s). It eventually consisted of some 170 smaller networks connecting research centers and universities. In 1991, the NSFNET backbone was upgraded to T3 lines carrying data at 45 Mbit/s. Starting in 1993, NSFNET was privatized, and a new backbone architecture incorporating the private sector was solicited.

The next-generation very-high-performance Backbone Network Service (vBNS) was developed as the successor to the NSFNET. vBNS began operation in April 1995 and was developed with MCI Communications, now part of Verizon. The new backbone consisted primarily of Optical Carrier (OC) lines, each of which bundled several fiber-optic strands together to increase the total capacity of the line. The interconnected Optical Carrier (OCx) lines operated at 155 Mbit/s and higher. These high-speed trunk lines soon multiplied their capacities from OC-3 operating at 155 Mbit/s, to OC-12 (622 Mbit/s), OC-24 (1244 Mbit/s), and OC-48 (2488 Mbit/s). By 2005, OC-48 was surpassed by OC-192 (9953.28 Mbit/s) and 10 Gigabit Ethernet.
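Those OC-n figures follow a simple pattern: each level is a multiple of the 51.84 Mbit/s OC-1 base rate defined by SONET. A minimal sketch of the arithmetic (the function name is my own):

```python
# SONET Optical Carrier (OC-n) gross line rates scale linearly:
# OC-n = n * OC-1, where OC-1 is 51.84 Mbit/s.
OC1_MBITS = 51.84

def oc_rate_mbits(n: int) -> float:
    """Gross line rate of an OC-n circuit in Mbit/s."""
    return n * OC1_MBITS

for n in (3, 12, 24, 48, 192):
    print(f"OC-{n}: {oc_rate_mbits(n):.2f} Mbit/s")
```

Running this reproduces the rounded rates quoted above: OC-3 at 155.52 Mbit/s up through OC-192 at 9953.28 Mbit/s.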

As part of the NSFNET's decommissioning in 1995, these backbone links connected to four national network access points (NAPs) in California, Chicago, New Jersey, and Washington, D.C. The backbone expanded to multiple carriers that coordinated with ISPs to provide high-speed connections for homes and businesses.

At first, consumers used analog dial-up modems over telephone lines at speeds that increased to 14.4 kbit/s by 1991 and to 28.8 kbit/s in 1994. Soon a 33.6 kbit/s standard followed, which many thought to be the upper limit for phone-line transmissions. But the 56K modem was soon available, and new standards continued to push the speed of data over the telephone system. The 56K modem was invented by Dr. Brent Townshend for an early music streaming service. It avoided the analog-to-digital conversion that seriously hampered data speeds and allowed content to be switched digitally to the consumer's terminal device, usually a PC.
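To put those modem speeds in perspective, a rough back-of-the-envelope calculation helps (a sketch that ignores compression and protocol overhead, which made real-world throughput lower; the function is my own illustration):

```python
# Idealized transfer time for a file over a dial-up line.
# Real sessions were slower due to framing overhead and line noise.
def transfer_seconds(file_bytes: int, line_kbits: float) -> float:
    """Seconds to move file_bytes at a nominal line rate in kbit/s."""
    return (file_bytes * 8) / (line_kbits * 1000)

ONE_MEGABYTE = 1_000_000
for speed in (14.4, 28.8, 33.6, 56.0):
    t = transfer_seconds(ONE_MEGABYTE, speed)
    print(f"{speed:>5} kbit/s: {t:6.1f} seconds per megabyte")
```

Even at the 56K ceiling, a single megabyte took well over two minutes, which is why the shift to broadband mattered so much for the image-heavy Web.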

Also during the 1990s, the telcos were conducting tests of a new technology called ADSL (Asymmetric Digital Subscriber Line). It was initially designed to provide video over copper lines to the home. The Baby Bells, in particular, wanted to offer television services to compete with cable television. It was called asymmetric because it could send data downstream to the subscriber faster (256 kbit/s to 9 Mbit/s) than upstream (64 kbit/s to 1.54 Mbit/s) to the provider.

ADSL utilized electromagnetic frequencies that telephone wires carry but do not use for voice. ADSL services separated the signals on the line into three bands of frequencies: one for telephone calls and the other two for uploading and downloading Internet traffic. Different versions and speeds emerged based on the local telco's ability and willingness to bring an optical fiber link close to the neighborhood or "to the curb" next to a household or business location.
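The three-band split can be sketched roughly as follows. The band edges shown are approximate values for early ADSL over an ordinary copper pair; exact figures varied by standard and deployment, so treat them as illustrative assumptions:

```python
# Approximate ADSL frequency plan over a copper telephone pair (in kHz).
# Edges are illustrative; real deployments varied by standard and vendor.
BANDS = [
    ("voice (POTS)", 0, 4),      # ordinary telephone calls
    ("guard band", 4, 25),       # separation between voice and data
    ("upstream", 25, 138),       # subscriber -> provider
    ("downstream", 138, 1104),   # provider -> subscriber (the wide band)
]

def band_for(khz: float) -> str:
    """Name the band a given frequency (kHz) falls into."""
    for name, lo, hi in BANDS:
        if lo <= khz < hi:
            return name
    return "out of range"

print(band_for(1))    # voice (POTS)
print(band_for(100))  # upstream
print(band_for(500))  # downstream
```

The asymmetry in the speeds above follows directly from this plan: the downstream band is several times wider than the upstream band.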

They were soon called Digital Subscriber Lines (DSL), and they began to replace dial-up modems. High demand and competition from cable companies with high-speed coaxial lines pressured ISPs and telcos to adopt DSL technologies. DSL and the new cable technologies that carried Internet traffic as well as television came to be collectively called "broadband" communications.

Internet traffic grew at a fantastic rate during the late 1990s as individuals and corporations rushed to “get on the web.” The rhetoric of the “new economy” circulated and fueled investments in web-based companies and telecommunications providers.

A temporary investment bubble emerged as many companies lacked the technology or business expertise to turn a profit. Dot-coms such as GeoCities failed for a variety of reasons, but mainly flawed business plans and the premature expenditure of investment capital.

Similarly, many carriers, such as Global Crossing and WorldCom, and ISPs overestimated web traffic and built excess capacity. In the wake of the dot-com crash in 2000 and the telecom crash in 2002, many ISPs filed for bankruptcy, including Wall Street darlings like Covad, Excite@Home, NorthPoint, PSINet, Rhythms NetConnections, and Winstar Communications.

The broadband industry changed significantly after the 2000 election. Its technological infrastructure was devastated by the dot-com crash of 2000 and the telecom crash of 2002.

Furthermore, Internet policy changed after the Bush administration was reelected in 2004. The FCC effectively revoked Computer II in 2005 when it redefined carrier-based broadband as an information service.

This meant that broadband was effectively unregulated and that the telcos could compete with the ISPs. Instead of offering backbone services and being required to interconnect with the ISPs, the telcos became ISPs themselves and were no longer required to provide independent ISPs with their connection to the Internet. The competitive environment that had nurtured Internet growth was effectively decimated.



Anthony J. Pennings, PhD is Professor in the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea, and from 2002 to 2012 he was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

US Internet Policy, Part 1: The Rise of ISPs

Posted on | March 15, 2020 | No Comments

Much of the early success of the Internet in the USA can be attributed to the emergence of a unique organizational form, the Internet Service Provider or "ISP," which became the dominant provider of Internet and broadband services in the 1990s. These organizations resulted from a set of regulatory directives that pushed the Internet's development and created a competitive environment that encouraged the proliferation of ISPs and the spread of the World Wide Web.

In this series on Internet policy, I look at the rise of the World Wide Web, the shift to broadband, deregulation, and the consolidation of broadband services. Later in the series, I address the issue of net neutrality and raise the questions, "Is Internet service a utility? Is it an essential service that should be made universally available to all Americans at regulated prices?"

The first ISPs began as US government-funded entities that served the research and education communities of the early Internet. Championed by Al Gore and signed by President George H. W. Bush in 1991, legislation created the model of the National Research and Education Network (NREN), a government-sponsored Internet service provider dedicated to supporting the needs of the research and education communities within the US. Internet2, Merit, NYSERNET, OARnet, and KanRen were a few of the systems that provided schools and other non-profit organizations access to the World Wide Web. While dial-up services like CompuServe existed in the early 1980s, only later were the ISPs opened to commercial traffic and services.

While telecommunications carriers had been moving some Internet traffic since the late 1980s, their role expanded dramatically after the Internet began to allow commercial activities. In June 1992, Congressman Rick Boucher (D-VA) introduced an amendment to the National Science Foundation Act of 1950 that allowed commercial activities on the US National Science Foundation Network (NSFNET). "A few months later, while waiting for Arkansas Governor William Jefferson Clinton to take over the Presidency, outgoing President George Bush, Sr. signed the Act into law." The amendment allowed advertising and sales activities on the NSFNET.

As part of the National Information Infrastructure (NII) plan, the US government decommissioned the NSFNET in 1995. It had been the publicly financed backbone for most IP traffic in the US. The NII plan handed interconnection over to four Network Access Points (NAPs) in different parts of the country, creating a bridge to the modern Internet of many private-sector competitors.

These NAPs contracted with big commercial carriers such as Ameritech, Pacific Bell, and Sprint for new facilities to form a network of networks, anchored around Internet Exchange Points (IXPs). The former regional Bell companies were to be primarily wholesalers, interconnecting with ISPs. This relatively easy process of connecting routers put the "inter" in the Internet, but the connection points also became sites of performance degradation and unequal power relations.

As the Internet took off in the late 1990s, thousands of new ISPs set up business to commercialize the Internet. The major markets for ISPs were: 1) access services; 2) wholesale IP services; and 3) value-added services offered to individuals and corporations. Access services were provided for both individual and corporate accounts and involved connecting them to the Internet via dial-up, ISDN, T-1, frame relay, or other network connections. Wholesale IP services were primarily offered by facilities-based providers like MCI, Sprint, and WorldCom's UUNET (a spinoff of a DOD-funded seismic research facility) and involved providing leased capacity over their backbone networks. Value-added services included web hosting, e-commerce, and network-resident security services. By the end of 1997, over 4,900 ISPs existed in North America, although most of them had fewer than 3,000 subscribers.[2]

FCC policy had allowed unlimited local phone calling for enhanced computer services, and early Internet users connected to their local ISP using modems over POTS (Plain Old Telephone Service). ISPs quickly developed software, distributed on CD-ROMs, that could be easily installed on a personal computer. The software usually put an icon on the desktop that, when clicked, dialed the ISP automatically, provided the password, and connected the user to the Internet. A company called Netscape created a popular "browser" that allowed text and images to be displayed on the screen. The browser used the World Wide Web, a system for accessing files quickly from computer servers all over the globe.

The ISPs emerged as an important component of the Internet's accessibility and were greatly aided by US government policy. The distinctions made in the FCC's Second Computer Inquiry in 1981 allowed ISPs to bypass many of the regulatory roadblocks experienced by traditional communications carriers. Telcos were to provide regulated basic services, while "enhanced services" were to stay unregulated. Schiller explained:

    Under federal regulation, U.S. ISPs had been classed as providers of enhanced service. This designation conferred on ISPs a characteristically privileged status within the liberalized zone of network development. It exempted them from the interconnection, or access, charges levied on other systems that tie in with local telephone networks; it also meant that ISPs did not have to pay into the government’s universal service fund, which provided subsidies to support telephone access in low-income and rural areas. As a result of this sustained federal policy, ISPs enjoyed a substantial cross-subsidy, which was borne by ordinary voice users of the local telecommunications network.[3]

ISPs looked to equip themselves for potential new markets and to connect with other companies. For example, IBM and telecom provider Qwest teamed up to offer web hosting services. PSINet bought Metamor not only to transfer data but to host, design, and move companies from old software environments to new ones. ISPs increasingly saw themselves not only as providers of a transparent data pipe but also as providers of value-added services such as web hosting, colocation, and support for domain name registration.

The next part of this series will discuss the shift to higher-speed broadband capabilities. Later posts will examine the consolidation of the industry that began in 2005, when the FCC changed the regulatory regime for wireline broadband services.


[1] Hundt, R. (2000) You Say You Want a Revolution? A Story of Information Age Politics. Yale University Press. p. 25.
[2] McCarthy, B. (1999) “Introduction to the Directory of Internet Service Providers,” Boardwatch Magazine’s Directory of Internet Service Providers. Winter 1998-Spring 1999. p. 4.
[3] Schiller, D. (1999) Digital Capitalism. The MIT Press. p. 31.


Anthony J. Pennings, PhD is Professor and Undergraduate Director in the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea, and from 2002 to 2012 he was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

Five Stages of ICT for Global Development

Posted on | February 19, 2020 | No Comments

Summarized remarks from “Five Stages of Global ICT4D: Governance, Network Transformation, and the Increasing Utilization of Surplus Behavioral Data for Prediction Products and Services,” presented at the 27th AMIC Annual Conference on 17-19 June 2019 at Chulalongkorn University, Bangkok, Thailand.

This presentation explores and outlines the following stages of development utilizing information and communications technologies (ICT). The ICT acronym has emerged as one of the most popular monikers for the digital revolution and is often combined with "development" to form ICT4D. Development is a contested term with currency in several areas. Still, in global political economy, it refers to the process of building the environments and infrastructure needed to improve the quality of human life. Often this means enhancing a nation's agriculture, education, health, and other public goods that are not strictly economic but improve well-being and intellectual capital. Of particular interest is the transformation of the network for WiFi and mobile use, as well as data-intensive solutions to many development issues. A growing concern is that data could be used intrusively to manipulate behaviors.

The stages are:

1) Development/Modernization; 2) New World Economic and Information Orders; 3) Structural Adjustment and Re-subordination; 4) Global ICT Integration; and 5) Smart/Sustainable Mobile and Data-Centric Development.

The explication of these stages will provide historical context for understanding trends in ICT innovation and implementation. This research will also provide insights into the possibilities of ICT diffusion in developing environments.

1) Development/Modernization

“Development” and “modernization” characterized the post-World War II prescription for economic development around the world, and especially in newly decolonized nation-states. Scholars talked of an eventual “takeoff” if the proper prescriptions were followed, particularly the adoption of new agricultural techniques termed the “Green Revolution.”[1] Information and Communications Technologies (ICTs) consisted primarily of the “five communication revolutions” (print, film, radio, television, and later satellite) that were utilized to spread information about modern development techniques in agriculture, health, education, and national governance.

Telegraphy and telephones were strangely absent from much of the discussion but were important for government activities as well as large-scale plantations, mining operations, transportation, and shipping. Because of their large capital requirements and geographic expanse, countries uniformly developed state-controlled Post, Telephone, and Telegraph (PTT) entities. Organized with the help and guidance of the International Telecommunication Union (ITU), the oldest United Nations entity, PTTs struggled to provide basic voice and telegraphic services.

Wilbur Schramm’s (1964) book Mass Media and National Development made crucial links between media and national development. Published by Stanford University Press and UNESCO, it examined the role of newspapers, radio, and television. Its emphasis on the role of information in development also laid the foundation for the analysis of computerization and ICT in the development process. I had an office next to Schramm for many years at the East-West Center while we worked on the National Computerization Policy project that resulted in the benchmark study Computerization and Development in Southeast Asia (1987).

2) New World Economic and Information Orders

Rising frustrations and concerns about neo-colonialism resulted in a collective call by developing countries for various conceptions of a "New World Economic and Communication Order," echoed by UNESCO in the wake of the OPEC oil crises and the resulting rise in Third World debt. The central issue was the imbalanced flow of news and information from North to South and the unequal flows of news and data between developing and developed countries. In part, it was the preponderance of news dealing with disasters, coups, and other calamities that many countries felt restricted flows of capital. The calls caused a backlash in the US and other developed countries concerned about the independence of journalism and the free flow of trade.[2]

It was followed by concerns about obstacles hindering communications infrastructure development and how telecommunications access across the world could be stimulated. In 1983, UNESCO's World Communication Year, the Independent Commission met several times to discuss the importance of communication infrastructure for social and economic development and to make recommendations for spurring its growth.

The Commission consisted of seventeen members: communication elites from both private and public sectors, representing a number of countries. Spurred on by growing optimism about the development potential of telecommunications, they investigated ways Third World countries could be supported in this area. They published their recommendations in The Missing Link (1984), soon to be called the "Maitland Report" after its chair, Sir Donald Maitland of the United Kingdom.

The transition from telegraph and telex machines to computers also resulted in concerns about data transcending national boundaries. The Intergovernmental Bureau for Informatics (IBI), which had been set up as the International Computation Centre (ICC) in 1951 to help countries get access to major computers, began to study national computerization policy issues in the mid-1970s.

They increasingly focused on transborder data flows (TDF) that moved sensitive corporate, government, and personal information across national boundaries. The first International Conference on Transborder Data Flows was organized in September 1980, followed by a second in 1984; both were held in Rome, Italy. The increasing use of computers raised questions about accounting and economic data avoiding political and tax scrutiny. The concern was that these data movements could act like a "Trojan horse" and compromise a country's credit ratings and national sovereignty, as well as individual privacy.

3) Structural Adjustment and Re-subordination

Instead of these new orders, an era of "structural adjustment" enforced by the International Monetary Fund emerged that targeted national post, telephone, and telegraph (PTT) agencies and other aspects of government administration and ownership. Long considered agents of national development and employment, PTTs came under increasing criticism for their antiquated technologies and lack of customer service. The flow of petrodollars pressured PTTs to add new value-added data networks and undergo satellite deregulation. Global circuits of digital money and news emerged, such as Reuters Money Monitor Rates, which linked currency exchange markets around the world in arguably the first virtual market.

A new techno-economic imperative emerged that changed the relationship between government agencies and global capital. PC spreadsheet technologies were utilized to value and privatize PTTs so they could be corporatized and listed on electronically linked share-market exchanges. Communications markets were liberalized to allow international competition for sales of digital switches and fiber-optic networks. Developing countries became "emerging markets," consistently disciplined by the "Washington Consensus," a set of policy prescriptions designed to continue opening them up to transborder data flows and international trade.[3]

4) Global ICT Integration

Packet-switching technologies, standardized into the ITU's X.25 and X.75 protocols for PTT data networks, gave way to ubiquitous TCP/IP networks. Cisco Systems became the principal enabler with a series of multi-protocol routers designed for enterprises, governments, and eventually telcos. Lucent, Northern Telecom, and other telecommunications equipment suppliers quickly lost market share as the Internet protocols, mandated by the US military's ARPANET and later by the National Science Foundation's NSFNET, were integrated into ISDN, ATM, and SONET technologies in telcos around the world.

The Global Information Infrastructure (GII), introduced at the annual ITU meeting in Buenos Aires in March 1994 by Vice President Gore, targeted national PTT monopolies. He proposed a new model of global telecommunications based on competition instead of monopoly, the rule of law, and the interconnection of new networks to existing ones at fair prices. Gore followed up the next month in Marrakesh, Morocco, at the closing meeting of the Uruguay Round of the GATT (General Agreement on Tariffs and Trade) negotiations, which called for the creation of the World Trade Organization (WTO).

Formed in 1995, the WTO held two meetings in 1996 and 1997 that created a new era of global communications development. Members party to the new multilateral arrangement met quickly in Singapore in 1996 to reduce tariffs on the international sales of a wide variety of information technologies. The next year the WTO met in Geneva and established rules for the continued privatization of national telecommunications operations. Sixty-nine nations party to the WTO, including the U.S., signed the Agreement on Basic Telecommunications Services in 1997, which codified new rules for telecommunications deregulation; signatory countries agreed to privatize and open their telecommunications infrastructures to foreign penetration and competition from other telcos.

The agreements came at a crucial technological time. The World Wide Web (WWW) was a working technology, but it would not have lived up to its name if the WTO had not negotiated reduced tariffs for crucial networking and computer equipment. The resulting liberalization of data and mobile services around the world made possible a new stage in global development.

5) Smart/Sustainable Mobile and Data-Centric Development

The aggressive trade negotiations and agreements in the 1990s significantly reduced the costs of ICT devices and communication exchanges throughout the world, making possible a wide variety of new commercial and development activities based on web capabilities. Blockchain technologies and cryptocurrencies, the Internet of Things (IoT), and the proliferation of web platforms are some of the current conceptions of how reduced costs for communications and information analysis are enhancing network effects and creating value from the collection and processing of unstructured data.

This project will expand on these stages and provide context for further investigation of ICT for development, drawing on historical and current research. Of particular concern are the policies and practices of contemporary development, though commercial and monetization techniques are important as well.


[1] Rostow, W.W. (1960) Stages of Economic Growth: A Non-Communist Manifesto. Cambridge: Cambridge University Press. See also Rostow W.W., (1965) Stages of Political Development. Cambridge: Cambridge University Press.
[2] An excellent discussion of the various New World Order discourses is found in Majid Tehranian's (1999) Global Communication and World Politics: Domination, Development, and Discourse. Boulder, CO: Lynne Rienner Publishers. pp. 40-41. Also see Jussawalla, M. (1981) Bridging Global Barriers: Two New International Orders. Papers of the East-West Communications Institute. Honolulu, Hawaii.
[3] Wriston, W. W. (1992) The Twilight of Sovereignty: How the Information Revolution Is Transforming Our World.



Anthony J. Pennings, Ph.D. is a Professor in the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012 he was on the faculty of New York University. Previously, he taught at Hannam University in South Korea, Marist College in New York, and Victoria University in Wellington, New Zealand. His American home is in Austin, Texas, where he taught in the Digital Media MBA program at St. Edwards University. He joyfully spent nine years at the East-West Center in Honolulu, Hawaii.
